A Review of Trustworthy Machine Learning
Machine learning technology is evolving rapidly and is widely applied across many domains, demonstrating capabilities that exceed those of humans. However, improper use of machine learning methods or biased decision-making can harm people's interests, especially in sensitive, security-critical areas such as finance and healthcare, which has drawn increasing attention to research on the trustworthiness of machine learning. Current machine learning technology commonly exhibits several shortcomings, such as bias against underrepresented groups, lack of user privacy protection, lack of model interpretability, and vulnerability to attacks. These shortcomings undermine people's trust in machine learning methods. Although researchers have studied these issues in depth, a comprehensive framework and methodology for systematically analyzing the trustworthiness of machine learning is still lacking. Therefore, this paper surveys the current mainstream definitions, metrics, methods, and evaluations for four elements of trustworthy machine learning: fairness, interpretability, robustness, and privacy. It then discusses the relationships among these elements and constructs a trustworthy machine learning framework spanning the entire machine learning lifecycle. Finally, we present open problems and challenges that remain to be addressed in the field of trustworthy machine learning.
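As a concrete illustration of the kind of fairness metric the survey refers to (this sketch is not taken from the paper itself), the following minimal Python example computes the demographic parity difference, a widely used group-fairness indicator; the function and variable names (demographic_parity_difference, y_pred, sensitive) are placeholders introduced here for illustration.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred    : binary predictions (0/1) produced by a classifier
    sensitive : binary group membership (0/1) for a protected attribute
    Returns 0 when both groups receive positive predictions at the same
    rate; larger values indicate greater disparity.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # positive rate in group 0
    rate_b = y_pred[sensitive == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Toy example: predictions for 8 applicants split across two groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.75 vs 0.25 -> 0.5
```

Analogous scalar indicators exist for the other elements the survey covers, for example attack success rate for robustness and the privacy budget ε in differential privacy.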
陈彩华;佘程熙;王庆阳
School of Management and Engineering, Nanjing University, Nanjing 210008, Jiangsu, China
Economics
trustworthy machine learning; fairness; interpretability; robustness; privacy
《工业工程》 2024(2): 14-26 (13 pages)
Supported by the Excellent Young Scientists Fund of the National Natural Science Foundation of China (12122107)