ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Advances in Psychological Science ›› 2022, Vol. 30 ›› Issue (5): 1093-1105. doi: 10.3724/SP.J.1042.2022.01093

• Research Frontiers •

A Three-Dimensional Motivation Model of Algorithm Aversion

ZHANG Yuyan1, XU Liying2, YU Feng1, DING Xiaojun3, WU Jiahua1, ZHAO Liang4

  1. Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430072, China
  2. Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
  3. Department of Philosophy, School of Humanities and Social Science, Xi'an Jiaotong University, Xi'an 710049, China
  4. Department of Publishing Science, School of Information Management, Wuhan University, Wuhan 430072, China
  • Received: 2021-07-08  Online: 2022-05-15  Published: 2022-03-24
  • Corresponding authors: XU Liying, YU Feng  E-mail: psychpedia@whu.edu.cn; liyingxu@mail.tsinghua.edu.cn
  • Funding:
    National Social Science Fund of China Youth Project (20CZX059); National Natural Science Foundation of China Youth Project (72101132); China Postdoctoral Science Foundation General Program (2021M701960)

A three-dimensional motivation model of algorithm aversion

ZHANG Yuyan1, XU Liying2, YU Feng1, DING Xiaojun3, WU Jiahua1, ZHAO Liang4

  1. Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430072, China
    2Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
    3Department of Philosophy, School of Humanities and Social Science, Xi'an Jiaotong University, Xi'an 710049, China
    4Department of Publishing Science, School of Information Management, Wuhan University, Wuhan 430072, China
  • Received: 2021-07-08  Online: 2022-05-15  Published: 2022-03-24
  • Contact: XU Liying, YU Feng  E-mail: psychpedia@whu.edu.cn; liyingxu@mail.tsinghua.edu.cn

Abstract:

Algorithm aversion refers to the phenomenon that people still prefer human decisions even though algorithms can often make more accurate decisions than humans. The three-dimensional motivation model of algorithm aversion identifies three main causes of this phenomenon: doubt about algorithm agents, the lack of moral standing, and the annihilation of human uniqueness. These correspond to three psychological motivations (trust, responsibility, and control) and, in turn, to three feasible ways of mitigating algorithm aversion: increasing human trust in algorithms, strengthening algorithms' agent responsibility, and exploring personalized algorithm design to highlight human control over algorithmic decisions. Future research could further explore the boundary conditions and other possible motivations of algorithm aversion from a more social perspective.

Keywords: algorithmic decision-making, algorithm aversion, psychological motivation, human-robot interaction

Abstract:

In recent years, algorithmic decision-making has rapidly penetrated human social life by virtue of its speed, accuracy, objectivity, and broad applicability. Yet even though algorithms often perform better, people remain reluctant to rely on algorithmic decisions instead of human ones, a phenomenon known as algorithm aversion. The three-dimensional motivation model of algorithm aversion summarizes its three main causes: doubt about algorithm agents, the lack of moral standing, and the annihilation of human uniqueness. The model reconstructs the intuitive reasoning people follow when confronted with algorithmic decisions, i.e., a series of progressive questions they are likely to ask. First, are algorithms capable of making decisions? The answer is often negative: humans usually doubt and distrust algorithms' abilities, which produces algorithm aversion. This is the trust/doubt motivation. Second, even if algorithms are capable of making decisions, do such decisions benefit individuals? Here, too, the answer is usually negative. Algorithms fail to benefit individuals because people tend to shift responsibility when making decisions, and an algorithm's lack of moral standing and of the capacity to take responsibility makes it useless as a target for such shifting. The second motivation of algorithm aversion is therefore responsibility-taking/shifting. Third, even if algorithms can be trusted to make decisions and to bear moral responsibility, do algorithmic decisions affect human beings positively? The answer is again negative, because algorithmic decision-making makes people feel a loss of control. This produces a perception of dehumanization through the annihilation of human identity and ultimately leads people to reject algorithms, which is the control/loss-of-control motivation.
Given these motivations of algorithm aversion, increasing human trust in algorithms, strengthening algorithm agents' responsibility, and exploring personalized algorithms that highlight human control over algorithmic decisions are three feasible ways of weakening algorithm aversion. Future research could further explore the boundary conditions and other possible motivations of algorithm aversion from a more social perspective, such as the need for cognitive closure and psychological connection.

Key words: algorithmic decision-making, algorithm aversion, psychological motivation, human-robot interaction