ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Advances in Psychological Science ›› 2024, Vol. 32 ›› Issue (12): 2124-2136. doi: 10.3724/SP.J.1042.2024.02124

• Regular Articles •

Human-AI mutual trust in the era of artificial general intelligence

QI Yue1,2, CHEN Junting1,2, QIN Shaotian1,2, DU Feng3,4

  1. Department of Psychology, Renmin University of China, Beijing 100872, China
    2. Laboratory of the Department of Psychology, Renmin University of China, Beijing 100872, China
    3. CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
    4. Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
  • Received: 2024-01-29    Online: 2024-12-15    Published: 2024-09-24
  • Contact: QI Yue, DU Feng  E-mail: qiy@ruc.edu.cn; duf@psych.ac.cn

Abstract:

With the advancement of technology, the dawn of artificial general intelligence is approaching, heralding a new era for human-machine interaction and human-machine relationships. Trust is the linchpin of human-AI interaction: maintaining an appropriate level of trust directly affects the success of the interaction and the user experience. The trust relationship between humans and AI is currently undergoing a transformation, yet existing research has not accurately captured this new type of relationship. The understanding of human-AI trust remains limited, partly because human-AI trust has not been clearly defined, and partly because research has focused solely on human trust in AI, neglecting the trust that AI places in humans and lacking the bidirectional perspective found in models of interpersonal trust.

To address these deficiencies, this study first reviews earlier definitions of human-machine trust and trust in automation and summarizes two characteristics of present-day human-AI trust: on one hand, AI is often embedded invisibly in technology, so users may be unaware of its involvement; on the other hand, human-AI trust should now include AI's trust in humans. In response to these characteristics, this study proposes a new definition of human-AI trust: regardless of whether the presence of AI algorithms is recognized, human-AI trust is the attitude and confidence held between humans and AI systems that the other party can help achieve specific goals, together with the willingness to accept each other's uncertainty and vulnerability and to bear the corresponding risks during interaction. The new definition extends the scope of human-AI trust to situations where users are unaware of AI's involvement and, for the first time, proposes a mutual trust relationship between humans and AI, which also implies that human-AI trust is a dynamic process.

Second, to overcome the limitations of previous trust models in explaining the dynamic, bidirectional trust relationship between humans and AI, this study reviews existing trust models (including the interpersonal trust model, the four-factor model of human-machine trust, the three-factor model of human-automation trust, and the general model of trust decisions) and proposes a new model for bidirectional trust interaction in the era of artificial general intelligence: the Human-AI Dynamic Mutual Trust Model. The model, for the first time, treats humans and AI as equal parties in establishing trust, constructing a dynamic mutual trust framework with three phases (initial, perception, and behavior) and two subjects (human and AI). The framework encompasses the trust-related experience and trust propensity of the trustor and trustee in the initial phase, perceived factors such as the perceived individual state and perceived system state in the perception phase, and result feedback and situational factors in the behavior phase. It thereby emphasizes two key characteristics of human-AI trust: "mutual trust" in the relational dimension and "dynamics" in the temporal dimension.
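The model is presented here only qualitatively, without formal equations. As a purely illustrative aid, the following minimal Python sketch shows one hypothetical way its structure could be expressed: the three phases become three update steps, and the two subjects are symmetric instances of the same trust state. All class names, fields, and the simple averaging rules are assumptions of this sketch, not the authors' formalization.

```python
# Illustrative sketch only: the paper describes the Human-AI Dynamic Mutual
# Trust Model conceptually. The fields and update rules below are hypothetical,
# chosen to mirror its three phases and two symmetric subjects.
from dataclasses import dataclass, field


@dataclass
class TrustState:
    """One party's trust toward the other (human -> AI or AI -> human)."""
    experience: float = 0.5   # initial phase: trust-related experience
    propensity: float = 0.5   # initial phase: trust propensity
    level: float = field(init=False, default=0.0)  # current trust in [0, 1]

    def __post_init__(self) -> None:
        # Initial trust is derived from the initial-phase factors alone.
        self.level = (self.experience + self.propensity) / 2

    def perceive(self, partner_state: float, system_state: float) -> None:
        # Perception phase: fold in the perceived individual state of the
        # partner and the perceived system state.
        self.level = (self.level + partner_state + system_state) / 3

    def act_and_update(self, outcome: float, situation: float) -> None:
        # Behavior phase: result feedback, weighted by situational factors,
        # adjusts the trust level; clamp to [0, 1].
        self.level = min(1.0, max(0.0, (self.level + outcome * situation) / 2))


# One interaction cycle of mutual trust: both subjects update symmetrically.
human_to_ai = TrustState(experience=0.6, propensity=0.7)
ai_to_human = TrustState(experience=0.5, propensity=0.5)

human_to_ai.perceive(partner_state=0.8, system_state=0.7)
ai_to_human.perceive(partner_state=0.6, system_state=0.7)

human_to_ai.act_and_update(outcome=0.9, situation=1.0)  # a successful interaction
ai_to_human.act_and_update(outcome=0.9, situation=1.0)

print(f"human->AI trust: {human_to_ai.level:.2f}, "
      f"AI->human trust: {ai_to_human.level:.2f}")
```

Under this reading, repeated cycles let both trust levels track the other party's actual reliability, which is one way to think about the trust calibration the framework emphasizes.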

This study not only provides a clear definition for the new type of trust relationship between humans and AI in the era of artificial general intelligence but also proposes a new theoretical model, the Human-AI Dynamic Mutual Trust Model, which offers an in-depth theoretical account of the dynamic process of human-AI trust. Future research within this framework can explore how AI's trust in humans is established and maintained, how a quantitative model of human-AI mutual trust can be built, and how human-AI mutual trust unfolds in multi-agent interactions.

Key words: trust, human-machine mutual trust, trust calibration, human-machine relationship, human-AI
