ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Advances in Psychological Science ›› 2024, Vol. 32 ›› Issue (3): 527-542. doi: 10.3724/SP.J.1042.2024.00527


Trust dampening and trust promoting: A dual-pathway of trust calibration in human-robot interaction

HUANG Xinyu, LI Ye

  1. School of Psychology, Central China Normal University & Key Laboratory of Adolescent Cyberpsychology and Behavior, Ministry of Education, Wuhan 430079, China
  • Received: 2023-06-23  Online: 2024-03-15  Published: 2024-01-19
  • Corresponding author: LI Ye, E-mail: liye@ccnu.edu.cn
  • Funding: General Program of the National Natural Science Foundation of China (72371113); General Program of the National Natural Science Foundation of China (71771102)

Abstract:

Trust is the foundation of successful human-robot cooperation. However, individuals do not always hold an appropriate level of trust during human-robot interaction; trust bias may arise, in the form of over-trust or under-trust. Because trust bias hinders human-robot cooperation, trust needs to be calibrated. Trust calibration is typically achieved through two pathways: trust dampening and trust promoting. Trust dampening focuses on how to lower an individual's excessively high trust in a robot, whereas trust promoting focuses on how to raise an individual's low trust in a robot. Future research could further optimize methods for measuring calibration effectiveness, reveal individuals' cognitive change mechanisms during and after trust calibration, and explore the boundary conditions of trust calibration as well as personalized and fine-grained calibration strategies, so as to facilitate human-robot collaboration.

Abstract:

Trust is the foundation of human-robot cooperation. Because trust is dynamic, over-trust and under-trust may occur during human-robot interaction and ultimately jeopardize human-robot trust (HRT). Maintaining an appropriate level of trust requires accurate calibration between an individual's perceived reliability of the robot and its actual reliability. Previous research has investigated the causes of over-trust and under-trust in HRT and proposed corresponding trust calibration strategies. However, these studies are relatively scattered, and the effectiveness of trust calibration strategies remains controversial. Moreover, most previous studies focus only on over-trust or under-trust, ignoring the necessity and importance of integrating over-trust, under-trust, and trust calibration from an overall perspective. In this paper, we first use the term "trust bias" to denote an inappropriate trust level during human-robot interaction, meaning that the individual's trust in the robot deviates from the calibrated value owing to a false estimate of the robot's reliability. Trust bias encompasses both over-trust and under-trust. Second, we call the strategy for raising a low trust level "trust promoting" rather than "trust repair", because "trust repair" focuses on restoring trust after a trust violation rather than on raising an initially low trust level.
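Read formally, this definition of trust bias can be sketched as a signed gap between perceived and actual reliability. This is a minimal formalization added here for illustration; the symbols $T_p$ and $R_a$ are our own notation, not taken from the paper:

$$\text{bias} = T_p - R_a, \qquad \text{bias} > 0 \ (\text{over-trust}), \quad \text{bias} = 0 \ (\text{calibrated}), \quad \text{bias} < 0 \ (\text{under-trust}),$$

where $T_p$ is the individual's perceived reliability of the robot and $R_a$ the robot's actual reliability.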

On this basis, we start with the causes of over-trust and under-trust in HRT and show how robot-related, human-related, and environmental factors affect HRT. Specifically, we identify two main robot-related sources of trust bias: reliability and embodiment. We therefore suggest that designers improve robot transparency to calibrate people's trust; in addition, after the trust level drops, the robot itself can deploy trust repair strategies such as apology, denial, commitment, and blame. Among human-related factors, motivation, self-confidence, attitudes toward algorithms (algorithm appreciation and algorithm aversion), and mental models are the main contributors to trust bias. Correspondingly, calibration requires people to have more contact with robots in order to improve their algorithm literacy, as well as to lower their expectations. We also argue that people are prone to trust bias in particular situations, such as under risk or time pressure, so cognitive forcing training may be critical.

We then discuss the boundary conditions of trust calibration strategies in HRT and set out a research agenda. First, regarding measurement, we suggest that researchers attend not only to people's explicit trust attitudes but also to their implicit trust attitudes, so as to better test the effectiveness and practicability of calibration strategies. Taking trust dampening as an example, future work could test whether a dampening strategy is effective not only with trust scales but also by examining whether people's implicit trust level decreases after dampening. In addition, future studies should further optimize measurement methods and develop highly reliable scales to assess HRT.

Second, a full trust calibration cycle typically passes through three phases: trust building, trust growth or impairment, and trust calibration. Previous cognitive-neuroscience research on HRT has focused on the first two phases. In the future, researchers can use physiological indicators to monitor, in real time, the neural activity underlying an individual's trust from initial trust establishment through trust calibration, further revealing the dynamic development of trust at the physiological level.

Third, HRT research has concentrated on humanoid and mechanical robots, while less attention has been paid to the role of animal-like robots in trust calibration, especially "cute" animal robots. Cute robots may counteract people's biases and raise initial trust levels; after a trust violation, trust in cute animal robots may also decline more slowly and be easier to repair. Future studies can examine the relationship between animal robots and trust.

Fourth, some researchers have begun to examine how individuals' trust develops within groups, rather than when a person interacts with a robot alone. Cross-cultural methods can be used to compare human-robot trust levels between Chinese and Western participants and to investigate how to calibrate trust within groups. In addition, the differences and commonalities between individual trust bias and group trust bias can be compared, and appropriate strategies for calibrating group trust bias can be explored.

Finally, the success of trust calibration also depends on individual factors, and the effectiveness of calibration strategies may vary across individuals. In line with the now-popular computational approach, researchers are encouraged to model trust-related behaviors so as to calibrate trust in a personalized way.
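As a concrete illustration of what such personalized trust modeling could look like, the sketch below maintains a Beta-Bernoulli estimate of a robot's actual reliability from observed task outcomes and compares it with a user's self-reported trust to decide whether a dampening or promoting intervention is warranted. This is a minimal sketch under our own assumptions; the TrustCalibrator class, the Beta-Bernoulli update, and the bias tolerance threshold are illustrative choices, not a method proposed in the paper:

```python
# Minimal sketch: Beta-Bernoulli estimate of robot reliability vs. self-reported trust.

class TrustCalibrator:
    """Tracks a robot's actual reliability and flags over-/under-trust."""

    def __init__(self, tolerance: float = 0.1):
        # Beta(1, 1) prior over the robot's success probability.
        self.successes = 1
        self.failures = 1
        self.tolerance = tolerance  # acceptable |perceived - actual| gap

    def observe(self, robot_succeeded: bool) -> None:
        # Bayesian update after each observed robot task outcome.
        if robot_succeeded:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def estimated_reliability(self) -> float:
        # Posterior mean of the Beta distribution.
        return self.successes / (self.successes + self.failures)

    def recommend(self, reported_trust: float) -> str:
        """reported_trust: the user's trust rating rescaled to [0, 1]."""
        bias = reported_trust - self.estimated_reliability
        if bias > self.tolerance:
            return "dampen"   # over-trust: lower the user's trust
        if bias < -self.tolerance:
            return "promote"  # under-trust: raise the user's trust
        return "calibrated"


calibrator = TrustCalibrator()
for outcome in [True, True, False, True, False, False, True, False]:
    calibrator.observe(outcome)
print(calibrator.estimated_reliability)  # ~0.5 after mixed outcomes
print(calibrator.recommend(0.9))         # "dampen": trust exceeds reliability
```

A personalized variant could fit the tolerance or the prior to each user's reliance history, in the spirit of the behavioral modeling the authors call for.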

Key words: trust calibration, trust bias, trust dampening, trust promoting, human-robot interaction
