ISSN 0439-755X
CN 11-1911/B
Sponsored by: Chinese Psychological Society
   Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press


The influence of privacy risk perception on initial trust in autonomous vehicles: differentiated responses of professionals and non-professionals (Special Issue: Psychology and the Development of Artificial Intelligence)

Sun Yifei, Li Xiulan, Du Feng, Qi Yue   

  1. Department of Psychology, Renmin University of China;
     Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
  • Received: 2025-05-11; Revised: 2025-10-25; Accepted: 2025-11-07
  • Supported by:
    National Natural Science Foundation of China (32471130, 32000771); Research Funds of Renmin University of China (the Fundamental Research Funds for the Central Universities) (21XNLG13)

The dynamic relationship between privacy risk perception and the formation of initial trust in autonomous vehicles: differentiated responses of professionals and non-professionals

Sun Yifei, Li Xiulan, Du Feng, Qi Yue   

  1. Department of Psychology, Renmin University of China;
     Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
  • Received: 2025-05-11; Revised: 2025-10-25; Accepted: 2025-11-07

Abstract: Previous research has focused mainly on people's insufficient trust in autonomous driving systems; however, overtrust among ordinary consumers may lead to misuse of these systems and thus greater risk in use. This paper focuses on how to bring ordinary consumers' trust closer to the level held by industry professionals, systematically exploring the influence of professional background on initial trust through three studies. Study 1 found that non-professionals showed a tendency toward overtrust, and that privacy risk perception and professional background interacted in their effect on initial trust. Study 2, which manipulated the level of privacy risk, found that raising the privacy risk level significantly heightened non-professionals' privacy risk perception and lowered their initial trust, whereas professionals' initial trust was less affected by changes in privacy risk level. Study 3 further revealed non-professionals' asymmetric response to privacy risk information: in the low-risk condition, initial trust did not change significantly even though privacy risk perception rose significantly, whereas in the high-risk condition privacy risk perception increased significantly and initial trust dropped significantly. These results reveal the interaction between professional background and privacy risk perception in shaping initial trust in autonomous driving, highlight the difference between professionals and non-professionals in trusting autonomous vehicles, and underscore that developers of autonomous vehicles should consider more targeted trust-calibration strategies to address the differentiated responses of professionals and non-professionals.

Key words: autonomous driving, human-machine trust, human-AI trust

Abstract: This study investigates the role of professional background in shaping initial trust in autonomous vehicles (AVs), with a particular focus on how privacy risk perception influences trust differences between professionals and non-professionals. Previous research has primarily concentrated on insufficient trust in AVs and sought to enhance trust through improved design and communication. However, excessive trust among ordinary consumers may equally lead to greater risks and hazards. How to calibrate consumers' trust to be closer to that of industry professionals, who are also the system designers, has therefore become a pressing issue. Although existing studies have examined various perceptual factors affecting trust in AVs, there remains a lack of systematic evidence on how prior experiential differences, represented by professional background, shape initial trust. Drawing on three empirical studies with a total of 1,027 participants, this research systematically examines the mechanisms through which privacy risk perception influences trust formation. Study 1 employed an online survey to compare professionals and non-professionals. Results showed that non-professionals reported significantly higher initial trust in AVs: they were more likely to overestimate system performance and underestimate potential risks, and were more susceptible to social influence. Regression analyses further revealed an interaction between privacy risk perception and professional background: privacy risk perception significantly predicted professionals' trust levels but had no significant effect for non-professionals. In addition, social influence, perceived usefulness, and safety risk perception jointly predicted initial trust, indicating that trust formation is a complex process shaped by multiple interacting factors. Study 2 experimentally manipulated privacy risk levels to further explore the interaction between privacy risk perception and professional background.
Results demonstrated that increased privacy risk significantly heightened non-professionals' privacy risk perception and reduced their trust, whereas professionals' trust remained comparatively stable. A moderated mediation analysis showed that privacy risk level predicted non-professionals' trust through privacy risk perception, but the effect was nonsignificant for professionals. This suggests that professionals' trust is relatively stable, while non-professionals, lacking accurate recognition of privacy risks, are more sensitive to contextual changes. Study 3 examined the impact of enhancing privacy risk perception. Results indicated that increasing non-professionals' privacy risk awareness heightened their sensitivity to risks, significantly reducing their initial trust in the high-risk condition, whereas trust in the low-risk condition did not change significantly despite increased risk perception. This finding suggests that enhancing privacy risk perception among non-professionals can effectively mitigate excessive trust and narrow the trust gap with professionals. Taken together, this research shows that professionals and non-professionals rely on different cognitive pathways in forming initial trust in AVs, and it clarifies the interactive mechanism between privacy risk perception and professional background. The findings carry practical implications for calibrating consumer trust in AVs: enhancing non-professionals' privacy risk perception helps temper excessive trust and narrow the trust gap, while social influence and perceived usefulness also play critical roles in shaping trust. Leveraging social influence and emphasizing the usefulness of AVs may therefore be effective approaches to promoting rational trust. Overall, this research deepens the understanding of trust-formation mechanisms in human–machine interaction and offers practical insights for fostering more rational and well-calibrated public trust in autonomous vehicles.

Key words: Autonomous vehicle, human-machine trust, human-AI trust