
AI Fairness Decision Enhancement Algorithm Based on Adversarial Bias Elimination

Abstract


Artificial intelligence decision models often face ethical and legal challenges due to biases in training data, choice of algorithms, or their implementation, which may not only contravene social justice and legal norms but also limit the universality and quality assurance of the models. This paper presents a multidimensional adversarial debiasing method that employs adversarial learning mechanisms to enhance fairness and reduce bias in the models. Experimental results demonstrate that the multidimensional adversarial debiasing model achieves significant improvements of 7% to 10% in fairness metrics, with reductions of 8% to 11% in both equal opportunity difference and average odds difference. The paper applies adversarial learning to eliminate unfairness in algorithmic decision-making, effectively balancing the model's predictive performance with fairness, and provides a solid theoretical foundation and practical pathway for developing more refined and practical fairness metrics and standardized fairness algorithms in the future.
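The abstract reports reductions in equal opportunity difference and average odds difference, two standard group-fairness metrics. The following is a minimal sketch of how these metrics are conventionally computed for a binary classifier over two protected groups; the function names and the 0/1 group encoding are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true positive rate between group 1 and group 0.

    A value of 0 means both groups receive positive predictions at the
    same rate among truly positive instances.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)   # truly positive members of group g
        tprs.append(y_pred[mask].mean())      # fraction predicted positive = TPR
    return tprs[1] - tprs[0]

def average_odds_difference(y_true, y_pred, group):
    """Mean of the TPR difference and the FPR difference between groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    diffs = []
    for label in (1, 0):                      # label 1 gives TPR, label 0 gives FPR
        rates = []
        for g in (0, 1):
            mask = (group == g) & (y_true == label)
            rates.append(y_pred[mask].mean())
        diffs.append(rates[1] - rates[0])
    return sum(diffs) / 2
```

An adversarial debiasing method such as the one described here would drive both quantities toward zero by penalizing a classifier whenever an adversary can recover the protected group from its predictions.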

焦婉妮;刘浩锋

Wuhan Digital Engineering Institute, Wuhan 430205

Information Technology and Security Science

Keywords: adversarial debiasing techniques; algorithmic fairness; ethical and legal challenges; AI applications

Computer & Digital Engineering (《计算机与数字工程》), 2025 (7)

Pages 1812-1816 (5 pages)

DOI: 10.3969/j.issn.1672-9722.2025.07.005
