ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Advances in Psychological Science (心理科学进展) ›› 2025, Vol. 33 ›› Issue (6): 916-932. doi: 10.3724/SP.J.1042.2025.0916; CSTR: 32111.14.2025.0916

• Academic Papers of the 27th Annual Meeting of the China Association for Science and Technology •


Bidirectional trust in the context of new human-machine relationships (新型人机关系下的人机双向信任)

XIE Yubin1,2, ZHOU Ronggang1,3,4

  1. School of Economics and Management, Beihang University, Beijing 100191, China
    2. Department of Systems Engineering, City University of Hong Kong, Hong Kong 999077, China
    3. Key Laboratory of Data Intelligence and Management (Ministry of Industry and Information Technology), Beihang University, Beijing 100191, China
    4. Laboratory for Low-Carbon Intelligent Governance, Beihang University, Beijing 100191, China
  • Received: 2024-10-12; Online: 2025-06-15; Published: 2025-04-09
  • Corresponding author: ZHOU Ronggang, E-mail: zhrg@buaa.edu.cn
  • Funding: Aeronautical Science Foundation of China (2024Z074051003); National Natural Science Foundation of China (72171015, 72021001); Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing


Abstract:

In the context of the rapid development of artificial intelligence, the relationship between humans and machines is shifting from the traditional “auxiliary-subordinate” model to an “equal collaboration” or even “symbiotic coevolution” model. Most research to date has focused on one-directional trust from humans to machines, but as intelligent agents gain greater autonomy and decision-making capabilities, mutual trust is becoming a central issue in human-machine collaboration. This paper examines the mechanisms for building mutual trust between humans and machines, the methods for measuring it, and the practical challenges involved. The goal is to provide theoretical support for the design and optimization of future intelligent systems.

Building on existing human-machine trust frameworks, this paper proposes a dynamic model of mutual trust. It introduces a three-stage structure, “dispositional trust-perceived trust-behavioral trust,” which covers both human-to-machine and machine-to-human trust. The model emphasizes perceived trust as the key bridge between dispositional trust and behavioral trust, highlighting its role in the transfer of trust between humans and intelligent agents. Dispositional trust, the initial stage, is rooted in an individual’s inherent traits, operates independently of specific contexts, and lays the foundation for the subsequent development of trust. Perceived trust forms gradually during interaction, reflecting the dynamic perception of the other party’s behavior, attitude, and trustworthiness; it is the core of emotional trust transfer and dynamic adjustment. Behavioral trust, the final manifestation, is expressed through concrete behaviors such as reliance, cooperation, and action; it is post-action trust grounded in behavioral feedback and reflects the ultimate outcome of the trust relationship.
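
To make the three-stage structure concrete, the following minimal Python sketch (not from the paper; the class, field, and parameter names are illustrative assumptions) shows how the dispositional, perceived, and behavioral stages might be represented for each direction of trust:

```python
from dataclasses import dataclass

@dataclass
class TrustState:
    """Illustrative three-stage trust state for one truster-trustee direction."""
    dispositional: float = 0.5  # context-independent baseline trait
    perceived: float = 0.5      # updated continuously during interaction
    behavioral: float = 0.0     # expressed only once an action is taken

    def start_interaction(self) -> None:
        # Dispositional trust seeds perceived trust before any evidence arrives.
        self.perceived = self.dispositional

    def observe(self, cue: float, learning_rate: float = 0.2) -> None:
        # Perceived trust drifts toward each new cue about the other party's
        # behavior, attitude, or trustworthiness (the interactive channel).
        self.perceived += learning_rate * (cue - self.perceived)

    def act(self) -> None:
        # Behavioral trust is the post-action readout of perceived trust.
        self.behavioral = self.perceived

# Bidirectional: each party keeps its own state about the other.
human_to_machine = TrustState(dispositional=0.6)
machine_to_human = TrustState(dispositional=0.5)
for state in (human_to_machine, machine_to_human):
    state.start_interaction()
human_to_machine.observe(cue=0.9)   # the machine performed well
machine_to_human.observe(cue=0.3)   # the human overrode a correct action
human_to_machine.act()              # reliance expresses behavioral trust
print(human_to_machine)
```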

The advantages of this model are reflected in several key aspects. First, its dynamic evolutionary character allows the model to capture the full development of trust from dispositional trust through perceived trust to behavioral trust, accommodating the complexity and variability of trust relationships in human-machine interaction. Second, the model emphasizes bidirectional trust transfer, focusing on the interaction between humans and intelligent agents; it highlights perceived trust as the crucial bridge between dispositional trust and behavioral trust and clarifies its significance in emotional trust transfer and dynamic adjustment, providing distinctive guidance for optimizing human-machine interaction. Third, the model expands the perspective on dispositional trust by incorporating algorithmic trust, exploring the sources of initial trust in algorithms and the impact of individual algorithm aversion, thereby offering a new theoretical foundation for research on algorithmic trust. Lastly, the model analyzes behavioral trust in depth, emphasizing the impact of machine behavior on human-machine trust, such as the negative effect on “perceived trustworthiness” when a machine denies a human request, and revealing the emotional and behavioral consequences of trust misalignment.
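
As a toy illustration of the denial effect described above, a minimal event-driven update might look as follows (the event labels and penalty magnitudes are assumptions for illustration, not values from the paper):

```python
# Hypothetical event-based update: a machine refusing a human request
# lowers the human's perceived trustworthiness of the machine, while a
# justified refusal accompanied by an explanation is penalized less.
EVENT_EFFECTS = {
    "request_fulfilled": +0.10,
    "request_denied": -0.25,
    "request_denied_with_explanation": -0.10,
    "error_admitted": +0.05,  # transparency can partially offset failures
}

def update_perceived_trust(perceived: float, event: str) -> float:
    """Apply the event's effect and clamp the trust level to [0, 1]."""
    delta = EVENT_EFFECTS.get(event, 0.0)
    return max(0.0, min(1.0, perceived + delta))

trust = 0.7
for event in ["request_denied", "request_denied_with_explanation",
              "request_fulfilled"]:
    trust = update_perceived_trust(trust, event)
    print(f"{event}: perceived trust -> {trust:.2f}")
```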

Building on this theoretical model, the paper develops methods for measuring and modeling human-machine mutual trust that fit the characteristics of different scenarios. Drawing on a review of existing measurement methods and on experience with interpersonal trust measurement, it introduces a framework and methods for measuring mutual trust between humans and machines. The study focuses on several key areas: developing stage-specific measurement tools for dispositional trust, perceived trust, and behavioral trust; exploring multidimensional, multilevel methods that combine subjective reports, physiological signals, and behavioral data into a dynamic monitoring and calibration system; and adapting interpersonal trust quantification methods to design trust modeling tools suited to human-machine interaction. Ultimately, this research aims to provide a systematic, operational framework for measuring and modeling mutual trust, laying the foundation for dynamic evaluation and intelligent adjustment.
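
A minimal sketch of such multichannel fusion, assuming a simple weighted average with exponential smoothing (the channel names, weights, and smoothing constant are illustrative choices, not the paper's method):

```python
# Illustrative fusion of subjective, physiological, and behavioral trust
# indicators (each pre-scaled to [0, 1]) into one smoothed estimate
# suitable for dynamic monitoring.
WEIGHTS = {"self_report": 0.5, "physiological": 0.2, "behavioral": 0.3}

def fuse_trust_signals(sample: dict[str, float]) -> float:
    """Weighted average of the per-channel trust indicators."""
    return sum(WEIGHTS[channel] * sample[channel] for channel in WEIGHTS)

def monitor(samples: list[dict[str, float]], smoothing: float = 0.3) -> list[float]:
    """Exponentially smoothed trust trajectory across interaction samples."""
    estimate, trajectory = None, []
    for sample in samples:
        fused = fuse_trust_signals(sample)
        estimate = fused if estimate is None else (
            smoothing * fused + (1 - smoothing) * estimate)
        trajectory.append(round(estimate, 3))
    return trajectory

samples = [
    {"self_report": 0.8, "physiological": 0.6, "behavioral": 0.7},  # calm, compliant
    {"self_report": 0.8, "physiological": 0.3, "behavioral": 0.4},  # stress, overrides
    {"self_report": 0.5, "physiological": 0.4, "behavioral": 0.5},
]
print(monitor(samples))
```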

In terms of application, this paper examines the practical value of mutual trust through case studies in autonomous driving and aviation. It also discusses current challenges, such as individual differences that hinder trust development, the lack of standardized tools for measuring machine trust in humans, and the unclear long-term psychological effects of mutual trust on users. The paper calls for further research to refine trust measurement tools, to address “over-trust” and “mistrust” in human-machine trust alignment, and to define the boundaries of machine trust behavior within ethical and legal frameworks. By integrating theoretical and methodological innovations, this paper offers new directions for research on trust mechanisms in human-machine collaboration and provides practical guidance for developing efficient and safe intelligent systems.
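
One way to operationalize the “over-trust”/“mistrust” alignment problem is to compare a measured trust estimate against the machine's demonstrated reliability; in the following hypothetical sketch, the tolerance threshold and variable names are assumptions:

```python
def calibration_status(trust: float, reliability: float,
                       tolerance: float = 0.15) -> str:
    """Compare a trust estimate with the machine's observed success rate.

    Over-trust (trust far above reliability) risks misuse of automation;
    under-trust (trust far below reliability) risks disuse. Both count as
    miscalibration and call for interface or behavior adjustments.
    """
    gap = trust - reliability
    if gap > tolerance:
        return "over-trust"
    if gap < -tolerance:
        return "under-trust (mistrust)"
    return "calibrated"

# Example: an operator trusts an automated driving function at 0.9
# although its observed reliability in this scenario is only 0.7.
print(calibration_status(trust=0.9, reliability=0.7))   # -> over-trust
print(calibration_status(trust=0.55, reliability=0.7))  # -> calibrated
```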

Key words: artificial intelligence, human-machine mutual trust, trust, trust measurement, human-machine teams
