ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Trust Formation Through Experience Transfer Across Different Trust Agents: A Comparison Between Humans and Artificial Intelligence

Qi Yue, Xie Ran, You Shanshan, Li Tong

  1. Department of Psychology, Renmin University of China, Beijing 100872, China
  • Received: 2025-12-06  Revised: 2026-01-08  Accepted: 2026-01-16
  • Supported by:
    National Natural Science Foundation of China (32471130, 32000771); the People's Psychology Innovation Research Fund of the Department of Psychology, Renmin University of China; the Fundamental Research Funds for the Central Universities; and the Research Funds of Renmin University of China (21XNLG13)

Abstract: As interactions between humans and artificial intelligence (AI) evolve from instrumental tool use toward novel forms of social relationships, the scope of trust agents has expanded beyond humans to include AI. This shift broadens traditional human–machine trust into a bidirectional construct of human–AI mutual trust, while also introducing trust from AI toward humans and trust among AI agents. However, existing research rarely integrates theoretical models of interpersonal trust and human–machine trust, and the mechanisms underlying trust formation remain unclear, in part because the role of prior knowledge has received little attention; this has led to contradictory conclusions in the literature. Integrating perspectives from social psychology and engineering psychology, and building on a dynamic human–AI mutual trust framework, the present study proposes experience transfer as a core mechanism of trust formation. We examine three questions: (1) How do different trust agents (humans and AI) use learned experiences to shape initial and ongoing trust? (2) Can these experiences be transferred to new trust targets and novel contexts? (3) How are experience learning and transfer moderated by individual characteristics and features of the interaction process? Using a new experimental paradigm that incorporates AI agents as active participants, the study systematically investigates the fundamental processes of trust formation and updating. The findings contribute a dual-agent model of human–AI mutual trust and offer theoretical and empirical foundations for designing trustworthy AI and enhancing collaboration among multiple intelligent agents.

Key words: human–machine trust, experience transfer, human–AI mutual trust, interpersonal trust