ISSN 0439-755X
CN 11-1911/B

Acta Psychologica Sinica ›› 2025, Vol. 57 ›› Issue (11): 1951-1972. doi: 10.3724/SP.J.1041.2025.1951

• Reports of Empirical Studies •

Safety trust in intelligent domestic robots: Human and AI perspectives on trust and relevant influencing factors

YOU Shanshan1,2, QI Yue1,2(), CHEN JunTing1,2, LUO Lei1,2, ZHANG Kan3,4   

  1. Department of Psychology, Renmin University of China, Beijing 100872, China
  2. Laboratory of the Department of Psychology, Renmin University of China, Beijing 100872, China
  3. State Key Laboratory of Cognitive Science and Mental Health, Chinese Academy of Sciences, Beijing 100101, China
  4. Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
  • Published: 2025-11-25 Online: 2025-09-25
  • Contact: Yue Qi, Department of Psychology, Renmin University of China, No. 59 Zhongguancun Street, Haidian District, Beijing, 100872, China. Email address: qiy@ruc.edu.cn.
  • Supported by:
    National Natural Science Foundation of China (32471130); National Natural Science Foundation of China (32000771); Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China (21XNLG13); Fund for Building World-Class Universities (Disciplines) of Renmin University of China, Project No. 2018(RUCPSY0007)

Abstract:

With the rapid development of intelligent domestic robot technology, safety concerns have emerged as a new challenge in human–robot trust dynamics. This study explores and validates novel critical dimensions of trust that influence human and AI users’ perceptions of intelligent domestic robots, with a particular focus on safety trust. The research comprises three studies, each addressing different aspects of these dimensions.

In Study 1, we developed a safety trust scale specific to intelligent domestic robots. This scale was rigorously tested to confirm the stability and validity of its three-dimensional structure, comprising performance, relational, and safety trust. The scale’s psychometric properties were evaluated via factor analysis and reliability testing, ensuring that it could accurately measure trust across different contexts and populations.

Study 2 explored the static characteristics of robots, such as their anthropomorphism, their height, and the visibility of their embedded cameras. We found that human participants exhibited higher levels of safety trust toward robots that were shorter and had less conspicuous cameras. Interestingly, the degree of anthropomorphism significantly shaped participants’ sensitivity to these static features.

Study 3 expanded the investigation to the dynamic characteristics of robots, such as movement speed, interaction scenario, and camera operation (i.e., turning the camera off). The results indicated that slower-moving robots were generally perceived as safer and were attributed higher levels of safety trust. Moreover, turning off a robot’s camera during interactions significantly enhanced safety trust among human users. The study also showed that the influence of these dynamic features varied across interaction scenarios, suggesting that situational factors play a crucial role in shaping trust perceptions.

Furthermore, a comparative analysis between human and AI users revealed a certain degree of consistency in safety trust judgments: both groups were generally aligned in their trust assessments based on static and dynamic robot features. However, the AI’s sensitivity to the visibility of robot cameras was notably lower than that of humans, suggesting that AI may prioritize different factors when assessing safety trust.

Overall, the findings provide valuable insights for the design and manufacturing of intelligent domestic robots, emphasizing the importance of considering both static and dynamic features when seeking to enhance safety trust. The results also offer theoretical and practical guidance for developing trust models applicable to various intelligent home environments, ultimately contributing to the advancement of human–robot interaction.

Key words: human–robot trust, safety trust, intelligent domestic robots, user intention, LLM