ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Advances in Psychological Science ›› 2022, Vol. 30 ›› Issue (5): 1093-1105. doi: 10.3724/SP.J.1042.2022.01093

• Regular Articles •

A three-dimensional motivation model of algorithm aversion

ZHANG Yuyan1, XU Liying, YU Feng2, DING Xiaojun1, WU Jiahua, ZHAO Liang3

  1. Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430072, China
    2. Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
    3. Department of Philosophy, School of Humanities and Social Science, Xi’an Jiaotong University, Xi’an 710049, China
    4. Department of Publishing Science, School of Information Management, Wuhan University, Wuhan 430072, China
  • Received: 2021-07-08 Online: 2022-05-15 Published: 2022-03-24
  • Contact: XU Liying, YU Feng, DING Xiaojun, WU Jiahua. E-mail: psychpedia@whu.edu.cn; liyingxu@mail.tsinghua.edu.cn

Abstract:

In recent years, algorithmic decision-making has rapidly penetrated human social life by virtue of its speed, accuracy, objectivity, and broad applicability. Yet even though algorithms often outperform humans, people remain reluctant to rely on algorithmic decisions rather than human ones, a phenomenon known as algorithm aversion. The three-dimensional motivation model of algorithm aversion summarizes its three main causes: doubt about algorithmic agents, algorithms' lack of moral standing, and the annihilation of human uniqueness. The model simulates people's intuitive reasoning when confronted with algorithmic decisions, a series of progressively deeper questions they can be expected to ask. First, are algorithms capable of making decisions? The answer is often no: people doubt and distrust algorithms' ability, which produces algorithm aversion. This is the trust/doubt motivation. Second, even if algorithms can make decisions, does relying on them benefit the individual? The answer is usually no as well: people tend to shift responsibility when making decisions, but because algorithms lack moral standing and the capacity to bear responsibility, they are useless as targets of such shifting. The second motivation of algorithm aversion is therefore responsibility-taking/shifting. Third, even if algorithms can be trusted to make decisions and to bear moral responsibility, do algorithmic decisions affect human beings positively? The answer is again no, because algorithmic decision-making deprives people of control. The resulting annihilation of human identity produces a perception of dehumanization and ultimately leads people to reject algorithms. This is the control/loss-of-control motivation. Given these motivations, increasing human trust in algorithms, strengthening the responsibility of algorithmic agents, and exploring personalized algorithms that make human control over algorithms salient are three feasible ways to weaken algorithm aversion. Future research could further explore the boundary conditions and other possible motivations of algorithm aversion from a more social perspective, such as the need for cognitive closure and the need for psychological connection.

Key words: algorithmic decision-making, algorithm aversion, psychological motivation, human-robot interaction