ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Advances in Psychological Science ›› 2025, Vol. 33 ›› Issue (6): 916-932. doi: 10.3724/SP.J.1042.2025.0916

• Academic Papers of the 27th Annual Meeting of the China Association for Science and Technology •

Bidirectional trust in the context of new human-machine relationships

XIE Yubin1,2, ZHOU Ronggang1,3,4   

  1. School of Economics and Management, Beihang University, Beijing 100191, China;
  2. Department of Systems Engineering, City University of Hong Kong, Hong Kong 999077, China;
  3. Key Laboratory of Data Intelligence and Management, Beihang University, Beijing 100191, China;
  4. Laboratory for Low-carbon Intelligent Governance, Beihang University, Beijing 100191, China
  • Received: 2024-10-12  Online: 2025-06-15  Published: 2025-04-09

Abstract: In the context of the rapid development of artificial intelligence, the relationship between humans and machines is shifting from the traditional “auxiliary-subordinate” model toward “equal collaboration” or even “symbiotic coevolution.” Most research has focused on unidirectional trust from humans to machines, but as intelligent agents gain greater autonomy and decision-making capability, mutual trust is becoming a central issue in human-machine collaboration. This paper examines the mechanisms for building mutual trust between humans and machines, the methods for measuring it, and the practical challenges involved, with the goal of providing theoretical support for the design and optimization of future intelligent systems.
Building on existing human-machine trust frameworks, this paper proposes a dynamic model of mutual trust: a three-stage model of “dispositional trust - perceived trust - behavioral trust” that covers both human-to-machine and machine-to-human trust. The model positions perceived trust as the key bridge between dispositional trust and behavioral trust, highlighting its role in the transfer of trust between humans and intelligent agents. Dispositional trust is the initial stage of trust; grounded in an individual’s inherent traits and independent of specific contexts, it lays the foundation for the subsequent development of trust. Perceived trust forms gradually during interaction and reflects a dynamic perception of the other party’s behavior, attitude, and trustworthiness; it is the core of emotional trust transfer and dynamic adjustment. Behavioral trust is the final manifestation of trust, expressed through concrete behaviors such as reliance, cooperation, and action; it is post-action trust based on behavioral feedback and reflects the ultimate outcome of the trust relationship.
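To make the structure of the model concrete, the following minimal sketch encodes the three stages for one direction of the human-machine dyad as a state that is updated through interaction; two such states, one per direction, capture the bidirectional case. The class, the function names, and the update rules (dispositional trust seeding perceived trust; perceived trust tracking observed trustworthiness; behavioral trust following perceived trust) are illustrative assumptions, not the authors' formal model.

# Illustrative sketch only: a hypothetical encoding of the
# "dispositional -> perceived -> behavioral" progression for one
# direction of the human-machine dyad.
from dataclasses import dataclass

@dataclass
class TrustState:
    dispositional: float  # context-independent baseline, set by traits
    perceived: float      # updated dynamically during interaction
    behavioral: float     # expressed as reliance/cooperation after action

def initialize(dispositional: float) -> TrustState:
    """Dispositional trust seeds perceived trust before any interaction."""
    return TrustState(dispositional, perceived=dispositional, behavioral=0.0)

def observe_interaction(state: TrustState, outcome: float, rate: float = 0.3) -> TrustState:
    """Perceived trust drifts toward the observed trustworthiness of the
    other party (outcome in [0, 1]); behavioral trust follows perceived
    trust once it is expressed in action."""
    perceived = state.perceived + rate * (outcome - state.perceived)
    return TrustState(state.dispositional, perceived, behavioral=perceived)

# One state per direction models bidirectional (mutual) trust.
human_to_machine = initialize(dispositional=0.6)
machine_to_human = initialize(dispositional=0.5)
human_to_machine = observe_interaction(human_to_machine, outcome=0.9)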
The advantages of this model are reflected in several key aspects. First, its dynamic evolutionary characteristics allow the model to fully capture the development of trust from dispositional trust to perceived trust and finally to behavioral trust, effectively accommodating the complexity and variability of trust relationships in human-machine interactions. Second, the model emphasizes bidirectional trust transfer, focusing on the interaction between humans and intelligent agents. It highlights the role of perceived trust as the crucial bridge between dispositional trust and behavioral trust, offering in-depth insights into its significance in emotional trust transfer and dynamic adjustment, thus providing unique guidance for optimizing human-machine interaction. Third, the model introduces an expanded perspective on dispositional trust by incorporating algorithmic trust, exploring the sources of initial trust in algorithms and the impact of individual algorithm aversion, thereby offering a new theoretical foundation for algorithmic trust research. Lastly, the model provides an in-depth analysis of behavioral trust, emphasizing the impact of machine behavior on human-machine trust, such as the negative effect on “perceived trustworthiness” when a machine denies a human request, and revealing the emotional and behavioral consequences of trust misalignment.
Building on this theoretical model, the paper develops methods for measuring and modeling human-machine mutual trust that are tailored to the characteristics of different scenarios. Drawing on a review of existing measurement methods and on experience with measuring interpersonal trust, it introduces a framework and methods for quantifying mutual trust between humans and machines. The work focuses on several key areas: developing stage-specific measurement tools for dispositional trust, perceived trust, and behavioral trust; exploring multidimensional, multilevel methods that combine subjective reports, physiological signals, and behavioral data into a dynamic monitoring and calibration system; and adapting interpersonal trust quantification methods to design trust modeling tools suited to human-machine interaction. Ultimately, this research aims to provide a systematic, operational framework for measuring and modeling mutual trust, laying the foundation for dynamic evaluation and intelligent adjustment.
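As an illustration of how the three data channels named above might be combined into a dynamic estimate, the sketch below fuses subjective reports, physiological signals, and behavioral data with fixed weights and smooths the result over time. The weights, the smoothing rule, and all names are assumptions for illustration, not the measurement or calibration method proposed in the paper.

# Illustrative sketch only: a hypothetical fusion of subjective reports,
# physiological signals, and behavioral data into one running trust estimate.
from typing import NamedTuple

class TrustSample(NamedTuple):
    self_report: float    # e.g., normalized questionnaire score in [0, 1]
    physiological: float  # e.g., normalized arousal/stress index in [0, 1]
    behavioral: float     # e.g., observed reliance rate on the agent in [0, 1]

def fuse(sample: TrustSample, weights=(0.5, 0.2, 0.3)) -> float:
    """Weighted combination of the three channels for one time window."""
    return sum(w * x for w, x in zip(weights, sample))

def update_estimate(previous: float, sample: TrustSample, smoothing: float = 0.7) -> float:
    """Exponential smoothing gives the 'dynamic monitoring' flavor:
    new evidence adjusts, but does not overwrite, the running estimate."""
    return smoothing * previous + (1.0 - smoothing) * fuse(sample)

estimate = 0.5  # neutral prior before any measurement
estimate = update_estimate(estimate, TrustSample(0.8, 0.6, 0.7))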
In terms of application, this paper examines the practical value of mutual trust through case studies in autonomous driving and aviation. It also discusses current challenges, such as individual differences that hinder trust development, the lack of standardized tools for measuring machine trust, and the unclear long-term psychological effects of mutual trust on users. The paper calls for further research to refine trust measurement tools, to address “over-trust” and “mistrust” in human-machine trust alignment, and to define the boundaries of machine trust behavior within ethical and legal frameworks. By integrating theoretical and methodological innovations, this paper offers new directions for research on trust mechanisms in human-machine collaboration and provides valuable guidance for the development of efficient and safe intelligent systems.

Key words: artificial intelligence, human-machine mutual trust, trust, trust measurement, human-machine teams
