
心理学报 (Acta Psychologica Sinica), 2026, Vol. 58, Issue (1): 74-95. doi: 10.3724/SP.J.1041.2026.0074; cstr: 32110.14.2026.0074

• Research Report •

Moral deficiency in AI decision-making: Underlying mechanisms and mitigation strategies

HU Xiaoyong1, LI Mufeng2, LI Yue1, LI Kai1, YU Feng1

  1 Department of Psychology, Wuhan University, Wuhan 430072, China
  2 Faculty of Psychology, Southwest University, Chongqing 400715, China
  • Received: 2025-04-06; Online: 2025-10-28; Published: 2026-01-25
  • Corresponding author: YU Feng, Email: psychpedia@whu.edu.cn
  • Funding:
    Western Project of the National Social Science Fund of China (23XSH003)

Abstract:

As artificial intelligence (AI) assumes an increasingly prominent role in high-stakes decision-making, the ethical challenges it raises have become a pressing concern. This paper systematically investigates the moral deficiency effect in AI decision-making by integrating mind perception theory with moral dualism. Through this framework, we identify a dual-path psychological mechanism and propose targeted intervention strategies.

Our first investigation, Study 1, examined the moral deficiency effect in judgments of AI decisions using scenarios rooted in the Chinese socio-cultural context. Across three representative scenarios (education-, age-, and gender-based discrimination), moral response scores for decisions made by AI were significantly lower than those for decisions made by human agents. These findings not only align with existing Western research on deficits in moral judgments of AI decisions but also suggest that the moral deficiency effect generalizes across cultures.

To understand why this deficiency occurs, Study 2 investigated the underlying psychological mechanisms. Drawing on mind perception theory and moral dualism, we proposed a dual-path mediation model involving perceived agency and perceived experience. Three sub-studies first tested the two mediators separately and then assessed their joint effects. Using an experimental mediation design, we provide the first causal evidence of how the decision-maker's identity (AI vs. human) interacts with the dimensions of mind perception. Specifically, when participants perceived an AI as having greater agency and experience, their moral response levels toward its decisions increased significantly, an effect not observed with human decision-makers. Structural equation modeling further confirmed a synergistic effect between the two paths: their combined explanatory power exceeds that of either path alone. This suggests that, in real-world settings, moral responses to AI are shaped simultaneously by both cognitive pathways.
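To make the dual-path structure concrete, a parallel two-mediator model of this kind can be sketched in standard mediation notation. The symbols below (X for decision-maker identity, M_A for perceived agency, M_E for perceived experience, Y for moral response level, and the a, b, c' coefficients) are illustrative assumptions, not the paper's reported specification or estimates:

  M_A = i_1 + a_1 X + e_1   (perceived agency)
  M_E = i_2 + a_2 X + e_2   (perceived experience)
  Y = i_3 + c' X + b_1 M_A + b_2 M_E + e_3   (moral response level)

Under this sketch, the combined indirect effect a_1 b_1 + a_2 b_2 corresponds to the dual-path account, and the claim that the two paths together explain more than either alone amounts to comparing a_1 b_1 + a_2 b_2 with each component indirect effect.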

Building on these mechanistic insights, Study 3 tested intervention strategies for mitigating the moral deficiency effect in AI decision-making. In a double-blind, randomized controlled experiment, we evaluated two approaches: anthropomorphic design, which targets the AI system, and expectation adjustment, which targets human cognition. Both strategies significantly improved moral responses by increasing participants' perceptions of the AI's agency and experience, and a combined intervention produced a stronger effect than either strategy alone. Although the two interventions target different elements, they operate through the shared mechanism of mind perception. In doing so, they strengthen the moral accountability people assign to an AI's unethical behavior, offering a practical pathway for addressing moral deficiencies in AI decision-making.

Ultimately, this research provides a novel contribution to the field of “algorithmic ethics.” Unlike traditional approaches that emphasize technical design principles and fairness algorithms, our study adopts a psychological perspective that centers on the human recipient of AI-driven decisions. Practically, we propose actionable intervention strategies grounded in mind perception, while our synergistic model provides a robust framework for AI ethical governance. Collectively, these findings deepen the understanding of moral judgment in AI contexts, guide the development of algorithmic accountability systems, and support the optimization of human-AI collaboration, thereby establishing a critical psychological foundation for the ethical deployment of AI.

Key words: artificial intelligence, moral deficiency effect, mind perception, anthropomorphism, expectation adjustment
