ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press


Human-AI mutual trust in the era of artificial general intelligence

Qi Yue, Chen Junting, Qin Shaotian, Du Feng

  • Received: 2024-01-30  Revised: 2024-05-23  Accepted: 2024-07-07
  • Corresponding author: Qi Yue
  • Supported by:
    National Natural Science Foundation of China (32000771); the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China (21XNLG13)

Abstract: With the development of technology, artificial general intelligence has begun to take shape, ushering in a new era for human-machine interaction and human-machine relationships. Trust between humans and artificial intelligence (AI) is on the brink of a transformative shift from unidirectional trust, in which humans trust AI, to mutual trust between humans and AI. Based on a review of interpersonal trust models in social psychology and human-machine trust models in engineering psychology, this study proposes a dynamic mutual trust model of the human-AI relationship from the perspective of interpersonal trust. The model regards humans and AI as equal parties in trust-building, highlighting “mutual trust” in the relational dimension and “dynamics” in the temporal dimension of human-AI trust. It constructs a basic theoretical framework for dynamic mutual trust between humans and AI, incorporating the influencing factors of both the trustor and the trustee, outcome feedback, and behavior adjustment as essential components. The model is the first to bring AI’s trust toward humans, and the dynamic interactive process of mutual trust, into the analysis, offering a new theoretical perspective for research on human-AI trust. Future research should focus on how AI’s trust toward humans is established and maintained, on quantitative models of human-AI mutual trust, and on mutual trust within multi-agent interactions.

Key words: trust, human-machine mutual trust, trust calibration, human-machine relationship, human-AI
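
To make the feedback loop described in the abstract concrete, the sketch below simulates one possible quantitative reading of dynamic mutual trust: a human and an AI each hold a trust level toward the other, adjust their behavior (whether to delegate to the partner) according to that trust, and recalibrate trust from outcome feedback after each interaction. This is a minimal illustration only; the agent parameters and the exponential-smoothing update rule are assumptions for demonstration, not the model proposed in the article.

```python
"""Toy simulation of a dynamic human-AI mutual trust loop.

Hypothetical illustration: the parameters and the update rule
(exponential smoothing toward the latest outcome) are assumptions,
not the framework specified in the paper.
"""

from dataclasses import dataclass
import random


@dataclass
class Agent:
    name: str
    trust_in_partner: float  # current trust in the other party, in [0, 1]
    reliability: float       # probability that this agent performs its task well
    learning_rate: float     # how strongly a single outcome shifts trust

    def delegates(self) -> bool:
        """Behavior adjustment: rely on the partner more often when trust is high."""
        return random.random() < self.trust_in_partner

    def update_trust(self, partner_succeeded: bool) -> None:
        """Outcome feedback: move trust toward 1 after success, toward 0 after failure."""
        target = 1.0 if partner_succeeded else 0.0
        self.trust_in_partner += self.learning_rate * (target - self.trust_in_partner)


def simulate(rounds: int = 20, seed: int = 1) -> None:
    random.seed(seed)
    human = Agent("human", trust_in_partner=0.5, reliability=0.75, learning_rate=0.2)
    ai = Agent("AI", trust_in_partner=0.5, reliability=0.90, learning_rate=0.2)

    for t in range(1, rounds + 1):
        # Each party decides whether to rely on the other this round;
        # an interaction produces an outcome that feeds back into trust.
        if human.delegates():
            human.update_trust(partner_succeeded=random.random() < ai.reliability)
        if ai.delegates():
            ai.update_trust(partner_succeeded=random.random() < human.reliability)

        print(f"round {t:2d}: human->AI trust = {human.trust_in_partner:.2f}, "
              f"AI->human trust = {ai.trust_in_partner:.2f}")


if __name__ == "__main__":
    simulate()
```

In this toy setting, the two trust levels evolve asymmetrically over rounds, which is one way to read the abstract's distinction between the relational dimension (mutual trust) and the temporal dimension (dynamics).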