ISSN 0439-755X
CN 11-1911/B
Sponsored by: Chinese Psychological Society
   Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Acta Psychologica Sinica ›› 2025, Vol. 57 ›› Issue (11): 1885-1900. doi: 10.3724/SP.J.1041.2025.1885 cstr: 32110.14.2025.1885

• Special Issue: AI Psychology and Governance •

Human-AI cooperation makes individuals more risk seeking: The mediating role of perceived agentic responsibility

GENG Xiaowei1,2(), LIU Chao3, SU Li2, HAN Bingxue2, ZHANG Qiaoming4, WU Mingzheng5()

  1. Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou 311121, China
    2 Department of Psychology, Hangzhou Normal University, Hangzhou 311121, China
    3 School of Marxism, Binzhou Polytechnic, Binzhou 256600, China
    4 College of Education, Ludong University, Yantai 264025, China
    5 Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, China
  • Received: 2024-02-08 Online: 2025-09-24 Published: 2025-11-25
  • Corresponding authors: GENG Xiaowei, E-mail: xwgeng@hznu.edu.cn;
    WU Mingzheng, E-mail: psywu@zju.edu.cn
  • Funding:
    National Natural Science Foundation of China (71971104); Key Project of the Major Humanities and Social Sciences Research Program of Zhejiang Provincial Universities (2024GH005)

Human-AI cooperation makes individuals more risk seeking: The mediating role of perceived agentic responsibility

GENG Xiaowei1,2(), LIU Chao3, SU Li2, HAN Bingxue2, ZHANG Qiaoming4, WU Mingzheng5()   

1. Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou 311121, China
    2 Department of Psychology, Hangzhou Normal University, Hangzhou 311121, China
    3 School of Marxism, Binzhou Polytechnic, Binzhou 256600, China
    4 College of Education, Ludong University, Yantai 264025, China
    5 Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, China
  • Received:2024-02-08 Online:2025-09-24 Published:2025-11-25

Abstract:

With the rapid development of artificial intelligence (AI), AI increasingly acts as an "assist" for humans. In human-AI cooperative risk decision-making, whether AI encourages human risk-taking, and how individuals' perceived agentic responsibility operates, remain open questions. To examine the effect of human-AI cooperation on individual risk decision-making and its mechanism, four experiments were conducted. The results showed that: (1) Whether cooperating with a human or with AI, individuals were more conservative than when deciding alone, and individuals took greater risks in human-AI cooperation than in human-human cooperation. (2) Perceived agentic responsibility in cooperation partially mediated the effect of human-AI cooperation on risk decision-making: in human-AI cooperation, individuals perceived greater agentic responsibility and therefore took greater risks. (3) Under success feedback, individuals in human-AI cooperation attributed more responsibility to themselves, and perceived agentic responsibility mediated the effect of human-AI cooperation on risk decision-making; under failure feedback, the difference in perceived agentic responsibility between human-AI and human-human cooperation was not significant, and the mediation effect did not hold.

Key words: human-AI cooperation, human-human cooperation, risk decision-making, perceived agentic responsibility, outcome feedback

Abstract:

Risk decision-making involves choices made by individuals when they are uncertain about future outcomes. With advances in artificial intelligence (AI), AI can now assist humans in making decisions: for instance, human drivers and AI systems can jointly carry out driving tasks, and human doctors can collaborate with AI on medical decisions. It is currently unclear how AI affects individuals' risk decision-making during such collaborations, which is crucial for improving the quality of human-AI decision-making. Therefore, studying the impact of human-AI cooperation on individuals' risk decision-making is essential.

In Experiment 1a, 100 participants were recruited from one university. A within-subjects design was employed: the independent variable was partner type (i.e., human-human cooperation, human-AI cooperation, or no partner), and the dependent variable was individuals' risk decision-making measured with the Balloon Analogue Risk Task (BART). In Experiment 1b, 151 participants were recruited from another university and randomly assigned to two conditions, human-human cooperation and human-AI cooperation; the dependent variable was the same as in Experiment 1a. To investigate the mediating role of perceived agentic responsibility, Experiment 2 recruited 199 participants from a university and used a between-subjects design, with partner type (i.e., human-human cooperation or human-AI cooperation) as the independent variable. Perceived agentic responsibility was assessed by measuring the extent to which participants assumed responsibility for the task, and the dependent variable was individuals' risk level on the BART. Experiment 3 further explored the moderating effect of outcome feedback: participants received success or failure feedback based on their BART performance as in Experiment 2, then rated their perceived agentic responsibility before completing the BART again.
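The BART used as the dependent measure trades accumulated reward against a growing chance of losing it: each pump adds money to a temporary bank, but if the balloon bursts before the participant cashes out, that balloon's money is lost. Risk-taking is conventionally scored as the mean number of pumps on balloons that did not explode (the "adjusted" score). A minimal simulation sketch, where all parameter values (balloon count, burst range, payoff per pump) are illustrative assumptions rather than the paper's actual settings:

```python
import random

def simulate_bart(target_pumps, n_balloons=30, max_pumps=128,
                  reward_per_pump=0.05, seed=0):
    """Simulate BART with a fixed pumping policy.

    Each balloon's explosion point is drawn uniformly from 1..max_pumps,
    so every additional pump carries more cumulative risk. The simulated
    participant always pumps `target_pumps` times; if the explosion point
    is reached first, that balloon's earnings are lost.

    Returns (total_earnings, adjusted_score), where the adjusted score is
    the mean pump count on unexploded balloons -- the standard BART index.
    """
    rng = random.Random(seed)
    earnings = 0.0
    unexploded_pumps = []
    for _ in range(n_balloons):
        explosion_point = rng.randint(1, max_pumps)
        if target_pumps >= explosion_point:  # balloon bursts
            continue                         # this balloon's money is lost
        earnings += target_pumps * reward_per_pump
        unexploded_pumps.append(target_pumps)
    adjusted = (sum(unexploded_pumps) / len(unexploded_pumps)
                if unexploded_pumps else 0.0)
    return earnings, adjusted
```

With this fixed policy, a cautious player (few pumps) rarely loses balloons but earns little per balloon, while a risky player earns more per surviving balloon at the cost of more bursts, which is exactly the tension the task exploits.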

The results of Experiments 1a and 1b showed that participants in the control condition (i.e., without cooperation) exhibited the highest risk-taking, while those in human-AI cooperation took greater risks than those in human-human cooperation. Experiment 2 demonstrated that perceived agentic responsibility partially mediated the effect of human-AI cooperation on individuals' risk decision-making: participants reported a greater sense of agentic responsibility in human-AI cooperation than in human-human cooperation, which in turn increased risk-taking. Experiment 3 revealed that outcome feedback significantly moderated the mediating role of perceived agentic responsibility in the influence of human-AI cooperation (versus human-human cooperation) on risk decision-making. Under success conditions, participants attributed greater responsibility to themselves in human-AI collaboration than in human-human collaboration; under failure conditions, there was no significant difference in responsibility attribution between the two types of collaboration.
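The mediation pattern reported above (partner type → perceived agentic responsibility → risk-taking) can be illustrated with the standard product-of-coefficients approach: estimate path a (condition to mediator), path b (mediator to outcome, controlling for condition), and take a × b as the indirect effect. The sketch below runs this on simulated toy data; it illustrates the general technique only and is not the paper's actual analysis pipeline, and all effect sizes in the toy data are invented for the demonstration:

```python
import numpy as np

def ols_coefs(y, X):
    """Least-squares coefficients for y ~ X (X must include an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation estimate for x -> m -> y.

    a: effect of x on the mediator m
    b: effect of m on y, controlling for x
    Returns (indirect = a * b, direct = effect of x on y given m).
    """
    ones = np.ones(len(x))
    a = ols_coefs(m, np.column_stack([ones, x]))[1]
    _, direct, b = ols_coefs(y, np.column_stack([ones, x, m]))
    return a * b, direct

# Toy data mimicking the design: x = 1 for human-AI, 0 for human-human;
# m = perceived agentic responsibility; y = BART risk score.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=200).astype(float)
m = 0.8 * x + rng.normal(size=200)             # AI partner -> more responsibility
y = 0.5 * m + 0.2 * x + rng.normal(size=200)   # responsibility -> more risk
ind, direct = indirect_effect(x, m, y)
```

Because the toy data build in positive a and b paths, the estimated indirect effect comes out positive, mirroring the direction of the reported mediation; in practice the indirect effect would be tested with a bootstrap confidence interval rather than read off the point estimate.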

This research demonstrates that collaboration with AI can enhance an individual's propensity for risk-taking. Moreover, the influence of human-AI cooperation, compared to human-human cooperation, on individuals’ risk decision-making is mediated by a sense of individual agentic responsibility and moderated by outcome feedback. These findings offer significant theoretical insights. Furthermore, this study holds substantial practical implications by aiding individuals in understanding how collaboration with AI impacts their risk-taking behaviors.

Key words: human-AI cooperation, human-human cooperation, risk decision-making, perceived agentic responsibility, outcome feedback
