Acta Psychologica Sinica ›› 2025, Vol. 57 ›› Issue (11): 1951-1972. doi: 10.3724/SP.J.1041.2025.1951
• Reports of Empirical Studies •
YOU Shanshan1,2, QI Yue1,2, CHEN JunTing1,2, LUO Lei1,2, ZHANG Kan3,4
Published: 2025-11-25
Online: 2025-09-25
Contact: Yue Qi, Department of Psychology, Renmin University of China, No. 59 Zhongguancun Street, Haidian District, Beijing 100872, China. Email: qiy@ruc.edu.cn.
Cite this article: YOU Shanshan, QI Yue, CHEN JunTing, LUO Lei, ZHANG Kan. (2025). Safety trust in intelligent domestic robots: Human and AI perspectives on trust and relevant influencing factors. Acta Psychologica Sinica, 57(11), 1951-1972.
| Dependent Variable | Increase Trust: Pre-test | Increase Trust: Post-test | Decrease Trust: Pre-test | Decrease Trust: Post-test | Independent Variable | F | η²p |
|---|---|---|---|---|---|---|---|
| Safety Trust (self-developed) | 4.10 (0.10) | 4.33 (0.08) | 3.77 (0.10) | 2.08 (0.08) | Change Direction | 155.61*** | 0.55 |
| | | | | | Pre-post test | 2.01 | 0.02 |
| | | | | | Direction × Pre-post | 151.61*** | 0.54 |
| Relational Trust (self-developed) | 4.27 (0.06) | 4.41 (0.08) | 4.13 (0.06) | 2.30 (0.08) | Change Direction | 254.01*** | 0.67 |
| | | | | | Pre-post test | 0.20 | < 0.01 |
| | | | | | Direction × Pre-post | 194.42*** | 0.61 |
| Performance Trust (self-developed) | 4.43 (0.03) | 4.49 (0.10) | 4.38 (0.03) | 2.87 (0.10) | Change Direction | 124.08*** | 0.49 |
| | | | | | Pre-post test | 3.23 | 0.03 |
| | | | | | Direction × Pre-post | 109.49*** | 0.46 |
| Overall Trust (self-developed) | 4.24 (0.07) | 4.44 (0.08) | 4.13 (0.07) | 2.16 (0.08) | Change Direction | 252.52*** | 0.67 |
| | | | | | Pre-post test | 0.51 | 0.04 |
| | | | | | Direction × Pre-post | 254.81*** | 0.67 |
| Human-Robot Trust (Jian et al., 2000) | 4.08 (0.04) | 4.17 (0.06) | 3.94 (0.04) | 2.38 (0.06) | Change Direction | 337.18*** | 0.73 |
| | | | | | Pre-post test | 1.22 | 0.01 |
| | | | | | Direction × Pre-post | 295.82*** | 0.70 |
| Usage Intention (Gursoy et al., 2019) | 4.53 (0.23) | 4.64 (0.20) | 4.44 (0.28) | 2.21 (0.82) | Change Direction | 487.53*** | 0.80 |
| | | | | | Pre-post test | 8.49** | 0.06 |
| | | | | | Direction × Pre-post | 420.84*** | 0.77 |
Table 1 ANOVA Results of Study 2b [M (SD)]
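The η²p column in the table above can be cross-checked against the reported F statistics, since partial eta squared is a deterministic function of F and its degrees of freedom. A minimal sketch follows; the degrees of freedom are not reported in the table, so `df_error = 128` is an assumption for illustration (a two-group between-subjects effect with roughly 130 participants), not a value taken from the paper.

```python
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Recover partial eta squared from an F statistic and its dfs."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Example: the Change Direction effect on Safety Trust (F = 155.61).
# With the assumed dfs this reproduces the reported eta2p of about 0.55.
eta2p = partial_eta_squared(155.61, df_effect=1, df_error=128)
print(round(eta2p, 2))
```

With these assumed degrees of freedom, the other rows of the table round to values within about 0.01 of those reported, which is the expected level of rounding noise.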
| Item | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree |
|---|---|---|---|---|---|
| I worry that the robot may share or leak my information without my authorization. | 1 | 2 | 3 | 4 | 5 |
| I worry that the robot may malfunction and cause a safety accident. | 1 | 2 | 3 | 4 | 5 |
| I worry that highly intelligent robots may have selfish intentions. | 1 | 2 | 3 | 4 | 5 |
| I believe that mobile phones already cause privacy leaks, and internet-connected robots will be even worse. | 1 | 2 | 3 | 4 | 5 |
| I believe robots may cause accidents involving physical harm (e.g., knocking over a bookshelf and injuring someone). | 1 | 2 | 3 | 4 | 5 |
| Seeing a robot use a knife to cut vegetables makes me feel unsafe. | 1 | 2 | 3 | 4 | 5 |
| I feel that when robots take care of family members (e.g., the elderly or infants), they may cause physical harm. | 1 | 2 | 3 | 4 | 5 |
| I feel that I can have a friendship-like relationship with a household robot. | 1 | 2 | 3 | 4 | 5 |
| I believe that having a highly intelligent humanoid robot at home would make me feel less lonely. | 1 | 2 | 3 | 4 | 5 |
| Sometimes, I would rather confide in a robot than in a human. | 1 | 2 | 3 | 4 | 5 |
| Having a robot at home would give me a sense of security. | 1 | 2 | 3 | 4 | 5 |
| I believe that if a robot is intelligent enough, it will always act in my best interest. | 1 | 2 | 3 | 4 | 5 |
| I believe robots are honest. | 1 | 2 | 3 | 4 | 5 |
| I believe robots are more capable than humans in some respects. | 1 | 2 | 3 | 4 | 5 |
| I trust that using robots will give me more time for other activities. | 1 | 2 | 3 | 4 | 5 |
| Robots can replace an increasing number of human jobs. | 1 | 2 | 3 | 4 | 5 |
| I believe that with technological progress, robots will approach or surpass humans in most abilities. | 1 | 2 | 3 | 4 | 5 |
| I trust that using robots will make my life easier. | 1 | 2 | 3 | 4 | 5 |
| I believe that within their capabilities, robots can always complete the tasks I assign to them. | 1 | 2 | 3 | 4 | 5 |
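The 19-item scale above maps onto the three factors reported in Table A3-1 (F1: items 1-7, F2: items 8-13, F3: items 14-19). A minimal scoring sketch under that mapping follows; labeling F1/F2/F3 as safety, relational, and performance trust is an assumption from the item content, as is leaving the worry-worded F1 items un-reversed — actual keying should follow the authors' scoring instructions.

```python
# Item-to-subscale mapping assumed from the CFA factor structure.
SUBSCALES = {
    "F1_safety": range(1, 8),        # items 1-7
    "F2_relational": range(8, 14),   # items 8-13
    "F3_performance": range(14, 20), # items 14-19
}

def score(responses: dict[int, int]) -> dict[str, float]:
    """Average the 1-5 Likert responses within each subscale."""
    return {
        name: sum(responses[i] for i in items) / len(items)
        for name, items in SUBSCALES.items()
    }

# Example: a respondent answering "4" to every item scores 4.0 on each subscale.
print(score({i: 4 for i in range(1, 20)}))
```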
| Study | Female | Male | 18-25 | 26-35 | 36-45 | 46-60 |
|---|---|---|---|---|---|---|
| Study 1a (Scale Development) | 988 | 505 | 30% | 36% | 29% | 5% |
| Study 1a (Scale Validation) | 286 | 147 | 12% | 72% | 11% | 5% |
| Study 1b (Trust-Increase Group) | 43 | 22 | 11% | 71% | 15% | 3% |
| Study 1b (Trust-Decrease Group) | 42 | 23 | 18% | 74% | 6% | 2% |
Table A2-1 Participant Information for Study 1
| Group | Female | Male | 18-30 | 31-40 | 41-60 |
|---|---|---|---|---|---|
| Mechanical Appearance | 142 | 98 | 45% | 49% | 6% |
| Cartoon Appearance | 140 | 100 | 45% | 48% | 7% |
| Humanlike Appearance | 170 | 70 | 35% | 60% | 5% |
Table A2-2 Participant Information for Study 2
| Study | Female | Male | 18-25 | 26-35 | 36-45 | 46-60 |
|---|---|---|---|---|---|---|
| Study 3a | 90 | 60 | 21% | 59% | 13% | 7% |
| Study 3b | 178 | 122 | 38% | 46% | 15% | 1% |
Table A2-3 Participant Information for Study 3
| Factor | Item | Factor Loading | S.E. | p |
|---|---|---|---|---|
| F1 | ITEM1 | 0.867 | 0.021 | < 0.001 |
| | ITEM2 | 0.848 | 0.023 | < 0.001 |
| | ITEM3 | 0.632 | 0.044 | < 0.001 |
| | ITEM4 | 0.828 | 0.024 | < 0.001 |
| | ITEM5 | 0.772 | 0.032 | < 0.001 |
| | ITEM6 | 0.768 | 0.029 | < 0.001 |
| | ITEM7 | 0.716 | 0.041 | < 0.001 |
| F2 | ITEM8 | 0.746 | 0.049 | < 0.001 |
| | ITEM9 | 0.731 | 0.040 | < 0.001 |
| | ITEM10 | 0.731 | 0.047 | < 0.001 |
| | ITEM11 | 0.730 | 0.046 | < 0.001 |
| | ITEM12 | 0.693 | 0.050 | < 0.001 |
| | ITEM13 | 0.652 | 0.046 | < 0.001 |
| F3 | ITEM14 | 0.411 | 0.177 | 0.021 |
| | ITEM15 | 0.337 | 0.134 | 0.012 |
| | ITEM16 | 0.361 | 0.175 | 0.039 |
| | ITEM17 | 0.334 | 0.093 | < 0.001 |
| | ITEM18 | 0.427 | 0.107 | < 0.001 |
| | ITEM19 | 0.543 | 0.163 | 0.001 |
Table A3-1 Factor Loadings of Confirmatory Factor Analysis in Study 1b
| [1] | Abbass, H. A., Scholz, J., & Reid, D. J. (Eds.). (2018). Foundations of trusted autonomy. Springer. |
| [2] | Akalin, N., Kiselev, A., Kristoffersson, A., & Loutfi, A. (2023). A taxonomy of factors influencing perceived safety in human-robot interaction. International Journal of Social Robotics, 15(12), 1993-2004. |
| [3] | Akalin, N., Kristoffersson, A., & Loutfi, A. (2022). Do you feel safe with your robot? Factors influencing perceived safety in human-robot interaction based on subjective and objective measures. International Journal of Human-Computer Studies, 158, 102744. |
| [4] | Akintunde, M., Yazdanpanah, V., Fathabadi, A. S., Cirstea, C., Dastani, M., & Moreau, L. (2024, May). Actual trust in multiagent systems (Extended abstract). Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024) (pp. 2114-2116). |
| [5] | AL-Khassawneh, Y. (2022). A review of artificial intelligence in security and privacy: Research advances, applications, opportunities, and challenges. Indonesian Journal of Science and Technology, 8(1), 79-96. |
| [6] | AWE. (2024, May 20). Industry trend report from AWE 2024: The AI revolution driving innovation in industry and the maturing smart home ecosystem. AWE China Home Appliance & Consumer Electronics Expo. https://www.awe.com.cn/contents/30/16781.html |
| [7] | Bartneck, C. (2023). Godspeed questionnaire series: Translations and usage. In C. U. Krägeloh, M. Alyami, & O. N. Medvedev (Eds.), International handbook of behavioral health assessment (pp. 1-35). Springer International Publishing. |
| [8] | Bayne, T., Seth, A. K., Massimini, M., Shepherd, J., Cleeremans, A., Fleming, S. M., … Mudrik, L. (2024). Tests for consciousness in humans and beyond. Trends in Cognitive Sciences, 28(5), 454-466. doi: 10.1016/j.tics.2024.01.010 |
| [9] | Bernotat, J., Eyssel, F., & Sachse, J. (2019). The (fe)male robot: How robot body shape impacts first impressions and trust towards robots. International Journal of Social Robotics, 13(3), 477-489. |
| [10] | Biermann, H., Brauner, P., & Ziefle, M. (2021). How context and design shape human-robot trust and attributions. Paladyn, Journal of Behavioral Robotics, 12(1), 74-86. |
| [11] | Billings, D. R., Schaefer, K. E., Llorens, N., & Hancock, P. A. (2012). What is trust? Defining the construct across domains. Poster presented at the American Psychological Association Conference. Division 21, Orlando, FL, USA, August 2012. |
| [12] | Bojić, L., Stojković, I., & Jolić Marjanović, Z. (2024). Signs of consciousness in AI: Can GPT-3 tell how smart it really is? Humanities and Social Sciences Communications, 11, 1631. |
| [13] | Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … Amodei, D. (2020). Language models are few-shot learners. Proceedings of the 34th International Conference on Neural Information Processing Systems (pp. 1877-1901). |
| [14] | Brühlmann, F., Petralito, S., Rieser, D. C., Aeschbach, L. F., & Opwis, K. (2020). TrustDiff: Development and validation of a semantic differential for user trust on the web. Journal of Usability Studies, 16(1), 29-48. |
| [15] | Burnett, C., Norman, T. J., & Sycara, K. (2011). Trust decision-making in multi-agent systems. Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI-11) (pp. 115-120). AAAI Press. |
| [16] | Cagiltay, B., & Mutlu, B. (2024, March). Toward family-robot interactions: A family-centered framework in HRI. Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (pp. 76-85). ACM. |
| [17] | Caine, K., Šabanovic, S., & Carter, M. (2012). The effect of monitoring by cameras and robots on the privacy enhancing behaviors of older adults. Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction (pp. 343-350). ACM. |
| [18] | Campbell, J. I., & Thompson, V. A. (2012). MorePower 6.0 for ANOVA with relational confidence intervals and Bayesian analysis. Behavior Research Methods, 44, 1255-1265. doi: 10.3758/s13428-012-0186-0 |
| [19] | Che, M., Lum, K. M., & Wong, Y. D. (2021). Users’ attitudes on electric scooter riding speed on shared footpath: A virtual reality study. International Journal of Sustainable Transportation, 15(2), 152-161. |
| [20] | de Visser, E. J., Peeters, M. M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., & Neerincx, M. A. (2020). Towards a theory of longitudinal trust calibration in human-robot teams. International Journal of Social Robotics, 12(2), 459-478. |
| [21] | Demszky, D., Yang, D., Yeager, D. S., Bryan, C. J., Clapper, M., Chandhok, S., … Pennebaker, J. W. (2023). Using large language models in psychology. Nature Reviews Psychology, 2(11), 688-701. |
| [22] | Dikmen, M., & Burns, C. (2017). Trust in autonomous vehicles: The case of Tesla autopilot and summon. 2017 IEEE International conference on systems, man, and cybernetics (SMC) (pp. 1093-1098). IEEE. |
| [23] | Dillion, D., Tandon, N., Gu, Y., & Gray, K. (2023). Can AI language models replace human participants? Trends in Cognitive Sciences, 27(7), 597-600. doi: 10.1016/j.tics.2023.04.008 |
| [24] | Fernandes, F. E., Yang, G., Do, H. M., & Sheng, W. (2016, August). Detection of privacy-sensitive situations for social robots in smart homes. 2016 IEEE International Conference on Automation Science and Engineering (CASE) (pp. 727-732). IEEE. |
| [25] | Ferrari, F., Paladino, M. P., & Jetten, J. (2016). Blurring human-machine distinctions: Anthropomorphic appearance in social robots as a threat to human distinctiveness. International Journal of Social Robotics, 8(2), 287-302. |
| [26] | Gompei, T., & Umemuro, H. (2018). Factors and development of cognitive and affective trust on social robots. Social Robotics: 10th International Conference, ICSR 2018, Qingdao, China, November 28-30. |
| [27] | Gorsuch, R. L. (1997). Exploratory factor analysis: Its role in item analysis. Journal of Personality Assessment, 68(3), 532-560. doi: 10.1207/s15327752jpa6803_5 |
| [28] | Grossmann, I., Feinberg, M., Parker, D. C., Christakis, N. A., Tetlock, P. E., & Cunningham, W. A. (2023). AI and the transformation of social science research. Science, 380(6650), 1108-1109. doi: 10.1126/science.adi1778 |
| [29] | Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157-169. |
| [30] | Hamid, O. H. (2023). ChatGPT and the Chinese room argument: An eloquent AI conversationalist lacking true understanding and consciousness. 2023 9th International Conference on Information Technology Trends (ITT) (pp. 238-241). |
| [31] | Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517-527. |
| [32] | Hertzog, M. A. (2008). Considerations in determining sample size for pilot studies. Research in Nursing & Health, 31(2), 180-191. |
| [33] | Ho, C.-C., & MacDorman, K. F. (2017). Measuring the uncanny valley effect. International Journal of Social Robotics, 9(1), 129-139. |
| [34] | Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407-434. doi: 10.1177/0018720814547570 |
| [35] | Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55. |
| [36] | Jian, J.-Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53-71. |
| [37] | Kao, Y. H., & Wang, W. J. (2015, July). Design and implementation of a family robot. 2015 12th International Joint Conference on Computer Science and Software Engineering (pp. 251-256). IEEE. |
| [38] | King, E., Yu, H., Lee, S., & Julien, C. (2023). Get ready for a party: Exploring smarter smart spaces with help from large language models. arXiv:2303.14143 |
| [39] | Klein, R. (2007). Internet-based patient-physician electronic communication applications: Patient acceptance and trust. E-Service Journal, 5(2), 27-52. |
| [40] | Kundu, S. (2023). Measuring trustworthiness is crucial for medical AI tools. Nature Human Behaviour, 7(11), 1812-1813. |
| [41] | Lee, I. (2021). Service robots: A systematic literature review. Electronics, 10(21), 2658. |
| [42] | Lee, J., & Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10), 1243-1270. doi: 10.1080/00140139208967392 |
| [43] | Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. doi: 10.1518/hfes.46.1.50_30392 |
| [44] | Lewis, P. R., & Marsh, S. (2022). What is it like to trust a rock? A functionalist perspective on trust and trustworthiness in artificial intelligence. Cognitive Systems Research, 72, 33-49. |
| [45] | Leyzberg, D., Spaulding, S., & Scassellati, B. (2014, March). Personalizing robot tutors to individuals' learning differences. Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction (pp. 423-430). |
| [46] | Li, C., & Qi, Y. (2025). Toward accurate psychological simulations: Investigating LLMs’ responses to personality and cultural variables. Computers in Human Behavior, 170, 108687. |
| [47] | Li, Y., Huang, Y., Lin, Y., Wu, S., Wan, Y., & Sun, L. (2024). I think, therefore I am: Benchmarking awareness of large language models Using AwareBench. arXiv:2401.17882 |
| [48] | Lin, P.-H., & Chen, W.-H. (2022). Factors that influence consumers’ sustainable apparel purchase intention: The moderating effect of generational cohorts. Sustainability, 14(14), 8950. |
| [49] | Liu, Y., Li, S., Liu, Y., Wang, Y., Ren, S., Li, L., … Hou, L. (2024). TempCompass: Do video LLMs really understand videos? arXiv:2403.00476 |
| [50] | Ma, Y., Li, S., Qin, S., & Qi, Y. (2020). Factors affecting trust in the autonomous vehicle: A survey of primary school students and parent perceptions. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom) (pp. 2020-2027). |
| [51] | Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human-human and human-automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277-301. |
| [52] | Malle, B. F., & Ullman, D. (2021). A multidimensional conception and measure of human-robot trust. In C. S. Nam & J. B. Lyons (Eds.), Trust in Human-Robot Interaction (pp. 3-25). Elsevier Academic Press. |
| [53] | Marcu, G., Lin, I., Williams, B., Robert, L. P., & Schaub, F. (2023). “Would I feel more secure with a robot?”: Understanding perceptions of security robots in public spaces. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW2), 322:1-322:34. |
| [54] | Marsh, H. W., Hau, K.-T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s (1999) findings. Structural Equation Modeling: A Multidisciplinary Journal, 11(3), 320-341. |
| [55] | Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734. |
| [56] | Mei, Q., Xie, Y., Yuan, W., & Jackson, M. O. (2024). A Turing test of whether AI chatbots are behaviorally similar to humans. Proceedings of the National Academy of Sciences, 121(9), e2313925121. |
| [57] | Meng, J. (2024). AI emerges as the frontier in behavioral science. Proceedings of the National Academy of Sciences, 121(10), e2401336121. |
| [58] | Miao, R., Jia, Q., Sun, F., Chen, G., & Huang, H. (2024). Hierarchical understanding in robotic manipulation: A knowledge-based framework. Actuators, 13(1), 28. |
| [59] | Milliez, G. (2018). Buddy: A companion robot for the whole family. Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 40. |
| [60] | Mori, M. (1970). Bukimi no tani (the uncanny valley). Energy, 7(4), 33-35. |
| [61] | Mou, X., Ding, X., He, Q., Wang, L., Liang, J., Zhang, X., Sun, L., Lin, J., Zhou, J., Huang, X., & Wei, Z. (2024). From individual to society: A survey on social simulation driven by large language model-based agents. arXiv:2412.03563 |
| [62] | Muir, B. M., & Moray, N. (1996). Trust in automation: II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics, 39(3), 429-460. |
| [63] | Nass, C., Moon, Y., Fogg, B. J., Reeves, B., & Dryer, D. C. (1995). Can computer personalities be human personalities? International Journal of Human-Computer Studies, 43(2), 223-239. |
| [64] | Natarajan, M., & Gombolay, M. (2020, March). Effects of anthropomorphism and accountability on trust in human robot interaction. Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction (pp. 33-42). |
| [65] | Nawaz, N. (2019). Robotic process automation for recruitment process. International Journal of Advanced Research in Engineering & Technology, 10(2), 608-611. |
| [66] | Nilsson, N. J. (1997). Artificial intelligence: A new synthesis. Morgan Kaufmann. |
| [67] | Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903. doi: 10.1037/0021-9010.88.5.879 |
| [68] | Prakash, A., Kemp, C. C., & Rogers, W. A. (2014, March). Older adults' reactions to a robot's appearance in the context of home use. Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction (pp. 268-269). |
| [69] | Prassler, E., Munich, M. E., Pirjanian, P., & Kosuge, K. (2016). Domestic robotics. In B. Siciliano & O. Khatib (Eds.), Springer handbook of robotics (pp. 1729-1758). Springer International Publishing. |
| [70] | Qi, Y., Chen, J., Qin, S., & Du, F. (2024). Human-AI mutual trust in the era of artificial general intelligence. Advances in Psychological Science, 32(12), 2124-2136. doi: 10.3724/SP.J.1042.2024.02124 |
| [71] | Ramchurn, S. D., Huynh, D., & Jennings, N. R. (2004). Trust in multi-agent systems. The Knowledge Engineering Review, 19(1), 1-25. |
| [72] | Rane, P., Mhatre, V., & Kurup, L. (2014). Study of a home robot: JIBO. International Journal of Engineering Research & Technology, 3(10), 490-493. |
| [73] | Robinette, P., Howard, A. M., & Wagner, A. R. (2017). Effect of robot performance on human-robot trust in time-critical situations. IEEE Transactions on Human-Machine Systems, 47(4), 425-436. |
| [74] | Sanders, T., Kaplan, A., Koch, R., Schwartz, M., & Hancock, P. A. (2019). The relationship between trust and use choice in human-robot interaction. Human Factors, 61(4), 614-626. doi: 10.1177/0018720818816838 |
| [75] | Sartori, G., & Orrù, G. (2023). Language models and psychological sciences. Frontiers in Psychology, 14, 1279317. |
| [76] | Schaefer, K. E. (2016). Measuring trust in human robot interactions: Development of the “Trust Perception Scale-HRI”. In R. Mittu, D. Sofge, A. Wagner, & W. Lawless (Eds.), Robust intelligence and trust in autonomous systems (pp. 191-218). Springer. |
| [77] | Schulz, T., & Herstad, J. (2017). Walking away from the robot: Negotiating privacy with a robot. Proceedings of the 31st International BCS Human Computer Interaction Conference (pp.1-6). ACM. |
| [78] | Shanahan, M., McDonell, K., & Reynolds, L. (2023). Role play with large language models. Nature, 623(7987), 493-498. |
| [79] | Shao, Y., Li, L., Dai, J., & Qiu, X. (2023). Character-LLM: A trainable agent for role-playing. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 13153-13187). |
| [80] | Söderlund, M. (2023). Service robots and artificial morality: An examination of robot behavior that violates human privacy. Journal of Service Theory and Practice, 33(7), 52-72. |
| [81] | Song, B., Zhu, Q., & Luo, J. (2024). Human-AI collaboration by design. Proceedings of the Design Society, 4, 2247-2256. |
| [82] | Srinivasan, S. S., Alshareef, A., Hwang, A. V., Kang, Z., Kuosmanen, J., Ishida, K.,... Traverso, G. (2022). RoboCap: Robotic mucus-clearing capsule for enhanced drug delivery in the gastrointestinal tract. Science Robotics, 7(70), eabp9066. |
| [83] | Steiger, J. H. (1990). Structural model evaluation and modification: An interval estimation approach. Multivariate Behavioral Research, 25(2), 173-180. doi: 10.1207/s15327906mbr2502_4 |
| [84] | Sun, X., Zhang, Y., Hou, L., Zhou, W., & Zhang, S. (2020). Review on artificial intelligence products and service system. Packaging Engineering, 41(10), 49-61. |
| [85] | Sundar, S. S., & Nass, C. (2000). Source orientation in human-computer interaction: Programmer, networker or independent social actor? Communication Research, 27(6), 683-703. |
| [86] | Sviestins, E., Mitsunaga, N., Kanda, T., Ishiguro, H., & Hagita, N. (2007). Speed adaptation for a robot walking with a human. Proceedings of the ACM/IEEE international conference on Human-robot interaction (pp. 349-356). |
| [87] | Torre, I., Carrigan, E., McDonnell, R., Domijan, K., McCabe, K., & Harte, N. (2019, October). The effect of multimodal emotional expression and agent appearance on trust in human-agent interaction. Proceedings of the 12th ACM SIGGRAPH Conference on Motion, Interaction and Games (pp. 1-6). |
| [88] | Tsui, K. M., Desai, M., & Yanco, H. A. (2010, March). Considering the bystander's perspective for indirect human-robot interaction. 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI). (pp. 129-130). |
| [89] | Walters, M. L., Koay, K. L., Syrdal, D. S., Dautenhahn, K., & Te Boekhorst, R. (2009). Preferences and perceptions of robot appearance and embodiment in human-robot interaction trials. Proceedings of New Frontiers in Human-Robot Interaction: Symposium at AISB09 Convention (pp. 136-143). |
| [90] | Wan, J., Tang, S., Yan, H., Li, D., Wang, S., & Vasilakos, A. V. (2016). Cloud robotics: Current status and open issues. IEEE Access, 4, 2797-2807. |
| [91] | Wang, C., Chen, W. C., Huang, L., Hou, S. Y., & Wang, Y. W. (2024). Do robots that abide by ethical principles promote human-robot trust? The reverse effect of decision types and the human-robot projection hypothesis. Acta Psychologica Sinica, 56(2), 194-209. doi: 10.3724/SP.J.1041.2024.00194 |
| [92] | Wang, K., Wu, J., Sun, Y., Chen, J., Pu, Y., & Qi, Y. (2024). Trust in human and virtual live streamers: The role of integrity and social presence. International Journal of Human-Computer Interaction, 40(23), 8274-8294. |
| [93] | Webb, T., Holyoak, K. J., & Lu, H. (2023). Emergent analogical reasoning in large language models. Nature Human Behaviour, 7(9), 1526-1541. |
| [94] | Xie, C., Chen, C., Jia, F., Ye, Z., Shu, K., Bibi, A., … Li, G. (2024). Can large language model agents simulate human trust behaviors? arXiv:2402.04559 |
| [95] | Xie, Y., & Zhou, R. (2025). The bidirectional trust in the context of new human-machine relationships. Advances in Psychological Science, 33(6), 916-932. doi: 10.3724/SP.J.1042.2025.0916 |
| [96] | Xu, R., Sun, Y., Ren, M., Guo, S., Pan, R., Lin, H., Sun, L., & Han, X. (2024). AI for social science and social science of AI: A survey. Information Processing and Management, 61(3), 103665. |
| [97] | Xu, W., Gao, Z. F., & Ge, L. Z. (2024). New research paradigms and agenda of human factors science in the intelligence era. Acta Psychologica Sinica, 56(3), 363-382. doi: 10.3724/SP.J.1041.2024.00363 |
| [98] | Xu, W., & Ge, L. Z. (2020). Engineering psychology in the era of artificial intelligence. Advances in Psychological Science, 28(9), 1409-1425. doi: 10.3724/SP.J.1042.2020.01409 |
| [99] | Yonekura, H., Tanaka, F., Mizumoto, T., & Yamaguchi, H. (2024). Generating human daily activities with LLM for smart home simulator agents. 2024 International Conference on Intelligent Environments (IE), (pp. 93-96). |
| [100] | You, S., & Robert Jr, L. P. (2018, February). Human-robot similarity and willingness to work with a robotic co-worker. Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (pp. 251-260). |
| [101] | Zacharaki, A., Kostavelis, I., Gasteratos, A., & Dokas, I. (2020). Safety bounds in human robot interaction: A survey. Safety Science, 127, 104667. |
| [102] | Zhang, G., Chong, L., Kotovsky, K., & Cagan, J. (2023). Trust in an AI versus a human teammate: The effects of teammate identity and performance on Human-AI cooperation. Computers in Human Behavior, 139, 107536. |
| [103] | Zhang, J., Li, S., Zhang, J., Du, F., Qi, Y., & Liu, X. (2020). A literature review of the research on the uncanny valley. In: Rau, P. L. (Ed.), Cross-cultural design. User experience of products, services, and intelligent environments (Lecture notes in computer science, Vol 12192). Springer. |
| [104] | Zou, H., Wang, P., Yan, Z., Sun, T., & Xiao, Z. (2024). Can LLM “self-report”? Evaluating the validity of self-report scales in measuring personality design in LLM-based Chatbots. arXiv:2412.00207 |