Acta Psychologica Sinica (心理学报), 2026, Vol. 58, Issue 1: 74-95, 22. DOI: 10.3724/SP.J.1041.2026.0074
Moral deficiency in AI decision-making: Underlying mechanisms and mitigation strategies
Abstract
As artificial intelligence (AI) assumes an increasingly prominent role in high-stakes decision-making, the ethical challenges it raises have become a pressing concern. This paper systematically investigates the moral deficiency effect in AI decision-making by integrating mind perception theory with moral dualism. Through this framework, we identify a dual-path psychological mechanism and propose targeted intervention strategies.

Our first investigation, Study 1, explored the limitations of AI in moral judgment using scenarios rooted in the Chinese socio-cultural context. Across three representative situations (educational, age, and gender discrimination), the moral response scores for AI-generated decisions were significantly lower than those for decisions made by human agents. These findings not only align with existing Western research on AI's moral judgment deficits but also suggest that the moral deficiency effect generalizes across cultures.

To understand why this deficiency occurs, Study 2 investigated the underlying psychological mechanisms. Drawing on mind perception theory and moral dualism, we proposed a dual-path mediation model involving perceived agency and perceived experience. We conducted three sub-studies that first tested these two mediators separately and then assessed their combined effects. Using experimental mediation, we provided the first causal evidence of how the decision-maker's identity (AI vs. human) interacts with dimensions of mind perception. Specifically, when participants perceived an AI as having greater agency and experience, their moral approval of its decisions increased significantly, an effect not observed for human decision-makers. Structural equation modeling further confirmed a synergistic effect between the two paths, indicating that their combined explanatory power exceeds that of either one alone. This suggests that, in real-world settings, moral responses to AI are influenced simultaneously by both cognitive pathways.

Building on these mechanistic insights, Study 3 tested intervention strategies to mitigate the AI-induced moral deficiency effect. In a double-blind, randomized controlled experiment, we evaluated two approaches: anthropomorphic design and mental expectancy enhancement. Both strategies significantly improved moral responses by increasing participants' perceptions of the AI's agency and experience. Moreover, a combined intervention produced a stronger effect than either strategy alone. Although these interventions target different elements, one focusing on the AI system and the other on human cognition, they both operate through the shared mechanism of mind perception. In doing so, they effectively enhance moral accountability for an AI's unethical behavior, offering a practical pathway to addressing moral deficiencies in AI decision-making.

Ultimately, this research makes a novel contribution to the field of "algorithmic ethics." Unlike traditional approaches that emphasize technical design principles and fairness algorithms, our study adopts a psychological perspective centered on the human recipient of AI-driven decisions. Practically, we propose actionable intervention strategies grounded in mind perception, while our synergistic model provides a robust framework for AI ethical governance. Collectively, these findings deepen the understanding of moral judgment in AI contexts, guide the development of algorithmic accountability systems, and support the optimization of human-AI collaboration, thereby establishing a critical psychological foundation for the ethical deployment of AI.

Keywords
artificial intelligence / moral deficiency effect / mind perception / anthropomorphism / expectation adjustment

Classification
Social Sciences

Cite this article
胡小勇, 李穆峰, 李悦, 李凯, 喻丰. 人工智能决策的道德缺失效应及其机制与应对策略[J]. 心理学报, 2026, 58(1): 74-95, 22.

Funding
Supported by a Western Region Project of the National Social Science Fund of China (23XSH003).