ISSN 0439-755X
CN 11-1911/B
Sponsored by: Chinese Psychological Society
              Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Acta Psychologica Sinica ›› 2025, Vol. 57 ›› Issue (11): 2060-2082. doi: 10.3724/SP.J.1041.2025.2060 cstr: 32110.14.2025.2060

• Special Issue on the Psychology and Governance of Artificial Intelligence •

Employees adhere less to advice on moral behavior from artificial intelligence supervisors than from human supervisors

XU Liying#, ZHAO Yijun#, YU Feng

  1. Department of Psychology, Wuhan University, Wuhan 430072, China
  • Received: 2024-01-26 Online: 2025-09-24 Published: 2025-11-25
  • Corresponding author: YU Feng, E-mail: psychpedia@whu.edu.cn
  • Author note:

    #XU Liying and ZHAO Yijun contributed equally to this work as co-first authors.

  • Funding:
    Youth Project of the National Natural Science Foundation of China (72101132); Youth Project of the National Social Science Fund of China (20CZX059)


Abstract:

The rapid development of artificial intelligence (AI) technology has brought sweeping changes to organizations, where AI now takes on supervisory roles that can directly influence employee behavior. Six progressive scenario experiments (N = 1642) examined how people respond differently to advice on moral behavior offered by AI versus human supervisors, along with the psychological mechanism and boundary conditions of this difference. The results showed that people adhered less to advice on moral behavior from AI supervisors than from human supervisors (Experiments 1a–5); this was because people felt lower evaluation apprehension in interactions with AI supervisors (Experiments 2–3); and the stronger an individual's tendency to anthropomorphize, or the more anthropomorphic the AI supervisor, the more people adhered to the AI supervisor's advice on moral behavior (Experiments 4–5). These findings deepen our understanding of how people respond to AI supervisors in organizations, reveal the shortcomings of AI supervisors in the domain of moral guidance, and provide practical guidance for deploying and improving AI leadership in organizational management.


Abstract:

The use of artificial intelligence (AI) in organizations has evolved from that of a tool to that of a supervisor. Although previous research has examined people's reactions to AI supervisors in general, few studies have investigated the effectiveness of AI supervisors, specifically whether individuals adhere to their advice on moral behavior. The present research compared employees' adherence to advice on moral behavior given by AI versus human supervisors, and sought to identify the psychological mechanisms and boundary conditions behind any differences.

To test our research hypotheses, we conducted six experiments and three pilot experiments (N = 1642, including 179 pilot participants) involving different types of moral behavior in organizations, such as participating in activities to help people with disabilities, volunteering for environmental protection or child welfare, and making charitable donations for disaster relief or colleagues in difficulty. Experiments 1a and 1b used a single-factor, two-level, between-subjects design: 180 participants were randomly assigned to receive advice on moral behavior from either a human or an AI supervisor, and their adherence to the supervisor's advice was measured in different scenarios. Experiment 2 followed the same design, with additional measures of evaluation apprehension and perceived mind to test their mediating roles. To establish a causal chain between the mediator and the dependent variable and to demonstrate the robustness of our findings, Experiment 3 examined the underlying mechanism with a 2 (supervisor: human vs. AI) × 2 (evaluation apprehension: high vs. low) between-subjects design. Experiments 4 and 5 tested the moderating role of anthropomorphism: Experiment 4 measured participants' tendency to anthropomorphize, and Experiment 5 manipulated the anthropomorphism of the AI supervisor.

As predicted, the present research found that participants were less likely to follow the moral advice of an AI supervisor than that of a human supervisor (Experiments 1a–5). The diversity of our scenario settings and samples demonstrated the robustness of this finding, and we ruled out the potential effects of perceived rationality, negative emotions, exploitation, perceived autonomy, and several individual differences (pilot experiments and Experiments 1a–1b). In addition, this research identified evaluation apprehension as the mechanism underlying employees' adherence to advice from different supervisors: participants believed that they would receive less social judgment and evaluation from an AI supervisor than from a human supervisor, and consequently were less willing to adhere to the advice offered by the AI (Experiments 2–5). The present research also demonstrated the moderating effect of anthropomorphism (Experiments 4–5). In Experiment 4, individuals with a high tendency toward anthropomorphism showed no significant difference in their adherence to advice on moral behavior from human versus AI supervisors, whereas participants with a low anthropomorphism tendency adhered more to a human supervisor than to an AI supervisor. In Experiment 5, participants adhered more to an AI supervisor with a human-like name and communication style than to a mechanical AI supervisor.

The study contributes to the literature on AI leadership by highlighting the limitations of AI supervisors in providing advice on moral behavior. The results also confirm the phenomenon of algorithm aversion in the moral domain, indicating that people are hesitant to accept AI involvement in moral decision-making, even in an advisory role. In addition, the study identifies evaluation apprehension as a factor influencing adherence to AI advice: individuals may be less likely to follow AI advice because they are less concerned about potential social judgment in their interactions with AI supervisors. Finally, anthropomorphism may be a useful approach to enhancing the effectiveness of AI supervisors.

Key words: artificial intelligence supervisor, advice adherence, advice on moral behavior, evaluation apprehension, anthropomorphism

CLC number: