ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Advances in Psychological Science ›› 2024, Vol. 32 ›› Issue (3): 527-542. doi: 10.3724/SP.J.1042.2024.00527

• Regular Articles •

Trust dampening and trust promoting: A dual-pathway of trust calibration in human-robot interaction

HUANG Xinyu, LI Ye

School of Psychology, Central China Normal University & Key Laboratory of Adolescent Cyberpsychology and Behavior, Ministry of Education, Wuhan 430079, China
Received: 2023-06-23  Online: 2024-03-15  Published: 2024-01-19

Abstract:

Trust is the foundation of human-robot cooperation. Because trust is dynamic, over-trust and under-trust may arise during human-robot interaction and eventually jeopardize human-robot trust (HRT). Maintaining an appropriate level of trust requires accurate calibration between the reliability an individual perceives in the robot and the robot's actual reliability. Previous research has investigated the causes of over-trust and under-trust in HRT and proposed corresponding trust calibration strategies. However, these studies are relatively scattered, and the effectiveness of the proposed strategies remains controversial. Moreover, most previous studies focus on either over-trust or under-trust alone, overlooking the necessity and importance of integrating over-trust, under-trust, and trust calibration into a single overall perspective. In this paper, we use the term "trust bias" to denote an inappropriate trust level during human-robot interaction: the individual's trust in the robot deviates from the calibrated value because of a misestimation of the robot's reliability. Trust bias thus covers both over-trust and under-trust. Second, we call the strategy for raising a low trust level "trust promoting" rather than "trust repair," because "trust repair" concerns restoring trust after a trust violation rather than raising an initially low trust level.
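To make this definition concrete, trust bias can be written as the signed gap between perceived and actual reliability. This formalization and its notation are ours, added for illustration, and are not taken from the paper:

\[
b = \hat{r} - r, \qquad
\begin{cases}
b > 0 & \text{over-trust (calls for trust dampening)}\\
b < 0 & \text{under-trust (calls for trust promoting)}\\
b = 0 & \text{calibrated trust}
\end{cases}
\]

where \(\hat{r}\) is the reliability the individual attributes to the robot and \(r\) is the robot's actual reliability.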

Building on this, we start with the causes of over-trust and under-trust in HRT and show how robot-related, human-related, and environmental factors affect HRT. Specifically, we identify two main robot-related sources of trust bias: reliability and embodiment. We therefore suggest that designers improve robot transparency to calibrate users' trust; in addition, after trust has declined, the robot itself can apply trust repair strategies such as apology, denial, commitment, and blame. Among human-related factors, motivation, self-confidence, attitudes toward algorithms (algorithm appreciation and algorithm aversion), and mental models are the main contributors. Correspondingly, calibration requires that people gain more contact with robots to improve their algorithm literacy, as well as lower their expectations. We also argue that people are especially prone to trust bias in certain situations, such as under risk or time pressure, so cognitive forcing training may be critical.

We discuss the boundary conditions of trust calibration strategies in HRT and set out a research agenda. Regarding measurement, we suggest that researchers attend not only to people's explicit trust attitudes but also to their implicit trust attitudes, in order to better test the effectiveness and practicability of calibration strategies. Taking trust dampening as an example, future work can not only test whether a dampening strategy is effective using trust scales, but also examine whether people's implicit trust level decreases after the dampening. In addition, future studies should further optimize measurement methods and develop highly reliable scales for assessing HRT.

Second, a full trust calibration cycle typically passes through three phases: trust building, trust growth or impairment, and trust calibration. Previous cognitive-neuroscience research on HRT has focused on the first two stages. In the future, researchers can use physiological indicators to monitor, in real time, the neural activity underlying an individual's trust from the onset of trust establishment to the onset of trust calibration, further revealing the dynamic development of trust at the physiological level.

Third, HRT research has concentrated on humanoid and mechanical robots, paying less attention to the role of animal robots in trust calibration, especially "cute" animal robots. Cute robots may counteract people's biases and raise initial trust levels; after a trust violation, trust in cute animal robots may also decline more slowly and be easier to repair. Future studies can examine the relationship between animal robots and trust.

Fourth, some researchers have begun to examine how individuals' trust develops within groups, rather than when interacting with a robot alone. Cross-cultural methods can be used to compare HRT levels between Chinese and Western participants and to investigate further how trust calibration should be conducted within a group. In addition, the similarities and differences between individual and group trust bias can be compared, and appropriate strategies for calibrating group trust bias can be explored.

Finally, the success of trust calibration also depends on individual factors, and the effectiveness of calibration strategies may differ across individuals. In line with the increasingly popular computational approach, researchers are encouraged to model trust-related behaviors so that trust can be calibrated in a personalized way; one possible form of such a model is sketched below.
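As an illustration only, the following minimal sketch shows one common way such modeling could be done: a Beta-Bernoulli estimator of the robot's actual reliability, compared against the user's reported (perceived) reliability to choose between dampening and promoting. This is a hypothetical example of ours, not the authors' method; the class, function, and parameter names are invented for illustration.

# Minimal sketch (hypothetical): estimate a robot's reliability from observed
# task outcomes with a Beta-Bernoulli model, then compare the user's perceived
# reliability against the estimate to decide between dampening and promoting.

class TrustCalibrator:
    def __init__(self, prior_success: float = 1.0, prior_failure: float = 1.0):
        # Beta(a, b) prior over the robot's true reliability.
        self.a = prior_success
        self.b = prior_failure

    def observe(self, success: bool) -> None:
        # Bayesian update after each observed robot task outcome.
        if success:
            self.a += 1
        else:
            self.b += 1

    def estimated_reliability(self) -> float:
        # Posterior mean of the Beta distribution.
        return self.a / (self.a + self.b)

    def recommend(self, perceived_reliability: float, tolerance: float = 0.1) -> str:
        # Trust bias = perceived reliability minus estimated actual reliability.
        bias = perceived_reliability - self.estimated_reliability()
        if bias > tolerance:
            return "dampen"    # over-trust: apply trust dampening
        if bias < -tolerance:
            return "promote"   # under-trust: apply trust promoting
        return "calibrated"

# Usage: the robot succeeds on 6 of 10 tasks, but the user reports 0.9 trust.
cal = TrustCalibrator()
for outcome in [True, True, False, True, False, True, False, True, True, False]:
    cal.observe(outcome)
print(cal.estimated_reliability())               # ~0.58
print(cal.recommend(perceived_reliability=0.9))  # "dampen"

Here the tolerance parameter controls how large a trust bias is accepted before a calibration strategy is triggered; adjusting it per user is one simple way such a model could personalize calibration.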

Key words: trust calibration, trust bias, trust dampening, trust promoting, human-robot interaction
