%A ZHANG Yuyan
%A XU Liying
%A YU Feng
%A DING Xiaojun
%A WU Jiahua
%A ZHAO Liang
%T A three-dimensional motivation model of algorithm aversion
%0 Journal Article
%D 2022
%J Advances in Psychological Science
%R 10.3724/SP.J.1042.2022.01093
%P 1093-1105
%V 30
%N 5
%U https://journal.psych.ac.cn/xlkxjz/CN/abstract/article_6242.shtml
%8 2022-05-15
%X

In recent years, algorithmic decision-making has rapidly penetrated human social life by virtue of its speed, accuracy, objectivity, and wide applicability. However, although algorithms often perform better, people are reluctant to rely on algorithmic decisions instead of human decisions, a phenomenon known as algorithm aversion. The three-dimensional motivation model of algorithm aversion summarizes its three main causes: doubt about algorithmic agents, the lack of moral standing, and the annihilation of human uniqueness. The model simulates the intuitive thinking framework humans adopt when facing algorithmic decisions, i.e., several progressive questions they are expected to ask: First, are algorithms capable of making decisions? The answer is often negative. Humans usually doubt and distrust the algorithm's ability, which causes algorithm aversion; this is the trust/doubt motivation. Second, even if algorithms are capable of making decisions, do their decisions benefit individuals? The answer is usually negative as well. Algorithms fail to benefit individuals in this respect because humans tend to shift responsibility when making decisions, yet algorithms' lack of moral standing and of the capacity to take responsibility makes them useless as targets for such shifting. Therefore, the second motivation of algorithm aversion is responsibility-taking/shifting. Third, even if algorithms are capable, trustworthy, and able to take moral responsibility, do algorithmic decisions positively impact human beings? The answer is also negative, because algorithmic decision-making makes humans feel a loss of control; this produces a perception of dehumanization through the annihilation of human identity and eventually leads to rejection of algorithms, which is the control/loss-of-control motivation. Given these motivations, increasing human trust in algorithms, strengthening the responsibility of algorithmic agents, and exploring personalized algorithms that make human control over algorithms salient are three feasible ways to weaken algorithm aversion. Future research could further explore the boundary conditions and other possible motivations of algorithm aversion from a more social perspective, such as the need for cognitive closure and psychological connection.