ISSN 0439-755X
CN 11-1911/B

Acta Psychologica Sinica ›› 2024, Vol. 56 ›› Issue (4): 497-514. doi: 10.3724/SP.J.1041.2024.00497

• Research Report •


  • Corresponding authors: XU Liying, E-mail:; YU Feng, E-mail:
  • Funding:

Perceived opacity leads to algorithm aversion in the workplace

ZHAO Yijun, XU Liying, YU Feng, JIN Wanglong

  1. Department of Psychology, Wuhan University, Wuhan 430072, China
  • Received: 2023-08-22 Online: 2024-01-17 Published: 2024-04-25


Algorithms are increasingly used in the workplace as aids to and substitutes for human decision-making, yet people exhibit algorithm aversion. Across four progressive experiments, this study compared people's attitudes toward decisions made by human versus algorithmic decision-makers in different workplace scenarios and examined the underlying mechanism and boundary conditions. The results showed that, in workplace contexts, people rated algorithmic decisions as less permissible, liked them less, and were less willing to utilize them than decisions made by humans, exhibiting "algorithm aversion". The underlying psychological mechanism is that people perceive the decisions of algorithmic decision-makers as less transparent than those of humans (Experiments 2-3). Further research found that when the algorithm was endowed with anthropomorphic features, people's aversion to algorithmic decisions was reversed and their acceptance increased (Experiment 4). These findings help to better understand people's reactions to algorithmic decision-making and offer insights for promoting intelligent social governance and guiding the ethical use of algorithms.



As algorithms come to influence every aspect of human society, people's attitudes toward this algorithmic intrusion have become a vital topic of discussion. Recently, algorithms have become ubiquitous in the workplace as alternatives and complements to human decision-making. Although algorithms offer numerous advantages, such as vast data storage and resistance to interference, previous research has found that people tend to reject algorithmic agents across different applications. In the realm of human resources especially, the increasing utilization of algorithms forces us to attend to users' attitudes. Thus, the present study aimed to explore public attitudes toward algorithmic decision-making and to probe the underlying mechanism and potential boundary conditions behind any difference.

To test our hypotheses, four experiments (N = 1211) were conducted involving various kinds of human resource decisions in the daily workplace, including resume screening, recruitment and hiring, bonus allocation, and performance assessment. Experiment 1 used a single-factor, two-level, between-subjects design: 303 participants were randomly assigned to one of two conditions (decision-making agent: human vs. algorithm), and their permissibility, liking, and willingness to utilize the agent were measured. Experiment 2 replicated the design of Experiment 1, with the sole addition of a measure of perceived transparency to test its mediating role. Experiment 3 aimed to establish a causal chain between the mediator and the dependent variables by manipulating the perceived transparency of the algorithm. Experiment 4 used a single-factor, three-level, between-subjects design (non-anthropomorphized algorithm vs. anthropomorphized algorithm vs. human) to explore the boundary condition of the effect.

As anticipated, the present research revealed pervasive algorithm aversion across diverse organizational settings. Specifically, we conceptualized algorithm aversion as a tripartite framework encompassing cognitive, affective, and behavioral dimensions. Compared with human managers, participants reported significantly lower permissibility (Experiments 1, 2, and 4), liking (Experiments 1, 2, and 4), and willingness to utilize (Experiment 2) algorithmic management, and the diversity of our scenarios and samples attests to the robustness of this result. In addition, this research identified perceived transparency as the mechanism explaining participants' psychological reactions to the different decision-making agents: participants opposed algorithmic management because they regarded its decision processes as more incomprehensible and inaccessible than those of humans (Experiment 2). Addressing this "black box" phenomenon, Experiment 3 showed that providing more information about the principles behind algorithmic management positively influenced participants' attitudes. Crucially, the results also demonstrated a moderating effect of anthropomorphism: participants exhibited greater permissibility and liking for an algorithm with human-like characteristics, such as a human-like name and communication style, than for a mechanized form of the algorithm. This observation underlines the potential of anthropomorphism to ameliorate resistance to algorithmic management.

These results bridge the gap between algorithm aversion and decision transparency from a social-psychological perspective. First, the present research establishes a three-dimensional (cognitive, affective, and behavioral), dual-perspective (employee and employer) model to elucidate negative responses toward algorithmic management. Second, it reveals that perceived opacity is an obstacle to embracing algorithmic decision-making, laying a theoretical foundation for Explainable Artificial Intelligence (XAI), which is conceptualized as a "glass box". Finally, the study highlights the moderating effect of anthropomorphism on algorithm aversion, suggesting that anthropomorphizing algorithms could be a feasible approach to facilitating the integration of intelligent management systems.

Key words: algorithm aversion, transparency, anthropomorphism, workplace