心理学报 ›› 2025, Vol. 57 ›› Issue (11): 1951-1972. doi: 10.3724/SP.J.1041.2025.1951. cstr: 32110.14.2025.1951
由姗姗1,2, 齐玥1,2, 陈俊廷1,2, 骆磊1,2, 张侃3,4
收稿日期:2024-08-26
发布日期:2025-09-24
出版日期:2025-11-25
通讯作者: 齐玥, E-mail: qiy@ruc.edu.cn
YOU Shanshan1,2, QI Yue1,2, CHEN JunTing1,2, LUO Lei1,2, ZHANG Kan3,4
Received:2024-08-26
Online:2025-09-24
Published:2025-11-25
Abstract:
As intelligent domestic robot technology advances, safety risks have become a new challenge for human-robot trust. This research proposes and validates a new dimension of trust in intelligent domestic robots: safety trust. Study 1 developed a safety trust scale for intelligent domestic robots and confirmed the stability, reliability, and validity of a three-factor structure of human-robot trust. Studies 2 and 3 examined how robots' static and dynamic features influence the safety trust of human and artificial intelligence (AI) users. For static features, people reported higher safety trust toward shorter robots and toward robots with less conspicuous cameras, and the robots' degree of anthropomorphism affected how sensitive humans were to these static features. For dynamic features, slower robot movement and the action of turning the camera off increased human safety trust, and the effects of these dynamic features varied across scenarios. In addition, AI showed some consistency with humans in safety trust, but was overall less sensitive to robot cameras than humans were. These findings provide theoretical support and practical guidance for the design and manufacture of domestic robots.
由姗姗, 齐玥, 陈俊廷, 骆磊, 张侃. (2025). 人与AI对智能家居机器人的安全信任及其影响因素. 心理学报, 57(11), 1951-1972.
YOU Shanshan, QI Yue, CHEN JunTing, LUO Lei, ZHANG Kan. (2025). Safety trust in intelligent domestic robots: Human and AI perspectives on trust and relevant influencing factors. Acta Psychologica Sinica, 57(11), 1951-1972.
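The abstract notes that Studies 2 and 3 compared safety-trust ratings from human participants with ratings given by AI (large language model) users. Purely as an illustration, and not the authors' materials, the sketch below shows one way an LLM could be prompted to rate a safety-trust item on a 5-point scale; the OpenAI-style client, model name, robot description, and item wording are all assumptions.

```python
# Illustrative sketch only: querying an LLM as an "AI user" on one 5-point
# safety-trust item. Client, model name, robot description, and item text are
# assumptions, not materials from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ROBOT = "A short domestic robot with a small, recessed camera that moves slowly."
ITEM = "I believe this robot will not record my home without my permission."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a household user evaluating a domestic robot. "
                    "Reply with a single integer from 1 (strongly disagree) to 5 (strongly agree)."},
        {"role": "user", "content": f"Robot: {ROBOT}\nItem: {ITEM}\nYour rating:"},
    ],
)

print(response.choices[0].message.content.strip())
```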
Table 1. Descriptive statistics and ANOVA results for each variable in Study 2b [M (SD)]
| Dependent variable | Trust-increase group: pretest | Trust-increase group: posttest | Trust-decrease group: pretest | Trust-decrease group: posttest | Effect | F | η²p |
|---|---|---|---|---|---|---|---|
| Safety trust (self-developed) | 4.10 (0.10) | 4.33 (0.08) | 3.77 (0.10) | 2.08 (0.08) | Change direction | 155.61*** | 0.55 |
| | | | | | Pre/post | 2.01 | 0.02 |
| | | | | | Change direction × Pre/post | 151.61*** | 0.54 |
| Relational trust (self-developed) | 4.27 (0.06) | 4.41 (0.08) | 4.13 (0.06) | 2.30 (0.08) | Change direction | 254.01*** | 0.67 |
| | | | | | Pre/post | 0.20 | <0.01 |
| | | | | | Change direction × Pre/post | 194.42*** | 0.61 |
| Performance trust (self-developed) | 4.43 (0.03) | 4.49 (0.10) | 4.38 (0.03) | 2.87 (0.10) | Change direction | 124.08*** | 0.49 |
| | | | | | Pre/post | 3.23 | 0.03 |
| | | | | | Change direction × Pre/post | 109.49*** | 0.46 |
| Overall trust (self-developed) | 4.24 (0.07) | 4.44 (0.08) | 4.13 (0.07) | 2.16 (0.08) | Change direction | 252.52*** | 0.67 |
| | | | | | Pre/post | 0.51 | 0.04 |
| | | | | | Change direction × Pre/post | 254.81*** | 0.67 |
| Human-automation trust (Jian et al., 2000) | 4.08 (0.04) | 4.17 (0.06) | 3.94 (0.04) | 2.38 (0.06) | Change direction | 337.18*** | 0.73 |
| | | | | | Pre/post | 1.22 | 0.01 |
| | | | | | Change direction × Pre/post | 295.82*** | 0.70 |
| Use intention (Gursoy et al., 2019) | 4.53 (0.23) | 4.64 (0.20) | 4.44 (0.28) | 2.21 (0.82) | Change direction | 487.53*** | 0.80 |
| | | | | | Pre/post | 8.49** | 0.06 |
| | | | | | Change direction × Pre/post | 420.84*** | 0.77 |
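Table 1 reports, for each dependent variable, a 2 (change direction: trust-increase vs. trust-decrease, between-subjects) × 2 (pre/post, within-subjects) mixed-design ANOVA with partial eta squared (η²p) as the effect size. A minimal sketch of how such an analysis could be run is given below, assuming long-format data and the pingouin package; the file name and column names are illustrative, not the authors' analysis script.

```python
# Minimal sketch of the 2 (change direction, between) x 2 (pre/post, within)
# mixed-design ANOVA summarized in Table 1. Assumptions (not from the paper):
# the pingouin package, a long-format CSV, and these column names.
import pandas as pd
import pingouin as pg

# One row per participant per measurement occasion:
#   subject       participant ID
#   direction     "increase" or "decrease" (between-subjects manipulation)
#   time          "pre" or "post" (within-subjects factor)
#   safety_trust  mean score on the safety-trust items
df = pd.read_csv("study_long_format.csv")

aov = pg.mixed_anova(
    data=df,
    dv="safety_trust",
    within="time",
    between="direction",
    subject="subject",
)

# pingouin reports F, uncorrected p ("p-unc"), and partial eta squared ("np2")
# for the two main effects and the direction x time interaction.
print(aov[["Source", "F", "p-unc", "np2"]])
```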
Appendix Table 3-1. Participant information for Study 1
| | Female (n) | Male (n) | 18-25 yrs | 26-35 yrs | 36-45 yrs | 46-60 yrs |
|---|---|---|---|---|---|---|
| Study 1a, scale development | 988 | 505 | 30% | 36% | 29% | 5% |
| Study 1a, validation | 286 | 147 | 12% | 72% | 11% | 5% |
| Study 1b, trust-increase group | 43 | 22 | 11% | 71% | 15% | 3% |
| Study 1b, trust-decrease group | 42 | 23 | 18% | 74% | 6% | 2% |
Appendix Table 3-2. Participant information for Study 2
| | Group | Female (n) | Male (n) | 18-30 yrs | 31-40 yrs | 41-60 yrs |
|---|---|---|---|---|---|---|
| Study 2a | Mechanical appearance | 142 | 98 | 45% | 49% | 6% |
| | Cartoon appearance | 140 | 100 | 45% | 48% | 7% |
| | Human-like appearance | 170 | 70 | 35% | 60% | 5% |
Appendix Table 3-3. Participant information for Study 3
| | Female (n) | Male (n) | 18-25 yrs | 26-35 yrs | 36-45 yrs | 46-60 yrs |
|---|---|---|---|---|---|---|
| Study 3a | 90 | 60 | 21% | 59% | 13% | 7% |
| Study 3b | 178 | 122 | 38% | 46% | 15% | 1% |
Appendix Table 4-1. Confirmatory factor analysis factor loadings in Study 1b
| Factor | Item | Factor loading | SE | p |
|---|---|---|---|---|
| F1 | ITEM1 | 0.867 | 0.021 | < 0.001 |
| | ITEM2 | 0.848 | 0.023 | < 0.001 |
| | ITEM3 | 0.632 | 0.044 | < 0.001 |
| | ITEM4 | 0.828 | 0.024 | < 0.001 |
| | ITEM5 | 0.772 | 0.032 | < 0.001 |
| | ITEM6 | 0.768 | 0.029 | < 0.001 |
| | ITEM7 | 0.716 | 0.041 | < 0.001 |
| F2 | ITEM8 | 0.746 | 0.049 | < 0.001 |
| | ITEM9 | 0.731 | 0.040 | < 0.001 |
| | ITEM10 | 0.731 | 0.047 | < 0.001 |
| | ITEM11 | 0.730 | 0.046 | < 0.001 |
| | ITEM12 | 0.693 | 0.050 | < 0.001 |
| | ITEM13 | 0.652 | 0.046 | < 0.001 |
| F3 | ITEM14 | 0.411 | 0.177 | 0.021 |
| | ITEM15 | 0.337 | 0.134 | 0.012 |
| | ITEM16 | 0.361 | 0.175 | 0.039 |
| | ITEM17 | 0.334 | 0.093 | < 0.001 |
| | ITEM18 | 0.427 | 0.107 | < 0.001 |
| | ITEM19 | 0.543 | 0.163 | 0.001 |
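Appendix Table 4-1 lists loadings from a three-factor confirmatory factor analysis, with items 1-7 on F1, items 8-13 on F2, and items 14-19 on F3. The sketch below shows one way such a model could be specified and estimated, assuming the semopy package and item columns named item1 through item19; it is illustrative only and not the authors' code.

```python
# Minimal sketch of a three-factor CFA matching the structure in Appendix Table 4-1.
# Assumptions (not from the paper): the semopy package and item columns item1..item19.
import pandas as pd
from semopy import Model, calc_stats

# lavaan-style model description: three correlated trust factors.
DESC = """
F1 =~ item1 + item2 + item3 + item4 + item5 + item6 + item7
F2 =~ item8 + item9 + item10 + item11 + item12 + item13
F3 =~ item14 + item15 + item16 + item17 + item18 + item19
"""

df = pd.read_csv("scale_items.csv")  # one row per participant, one column per item

model = Model(DESC)
model.fit(df)

# Standardized loadings with standard errors and p-values (cf. Appendix Table 4-1)
print(model.inspect(std_est=True))
# Global fit indices such as CFI, TLI, and RMSEA
print(calc_stats(model))
```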
| [1] | Abbass, H. A., Scholz, J., & Reid, D. J. (Eds.). (2018). Foundations of trusted autonomy. Springer. |
| [2] | Akalin, N., Kiselev, A., Kristoffersson, A., & Loutfi, A. (2023). A taxonomy of factors influencing perceived safety in human-robot interaction. International Journal of Social Robotics, 15(12), 1993-2004. |
| [3] | Akalin, N., Kristoffersson, A., & Loutfi, A. (2022). Do you feel safe with your robot? Factors influencing perceived safety in human-robot interaction based on subjective and objective measures. International Journal of Human-Computer Studies, 158, 102744. |
| [4] | Akintunde, M., Yazdanpanah, V., Fathabadi, A. S., Cirstea, C., Dastani, M., & Moreau, L. (2024, May). Actual trust in multiagent systems (Extended abstract). Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024) (pp.2114-2116). |
| [5] | AL-Khassawneh, Y. (2022). A review of artificial intelligence in security and privacy: Research advances, applications, opportunities, and challenges. Indonesian Journal of Science and Technology, 8(1), 79-96. |
| [6] | AWE. (2024, May 20). Industry trend report from AWE 2024: The AI revolution driving innovation in industry and the maturing smart home ecosystem. AWE China Home Appliance & Consumer Electronics Expo. https://www.awe.com.cn/contents/30/16781.html |
| [AWE. (2024, May 20). AWE 2024行业趋势报告之一:AI革命下的产业创新. 智能家居生态日趋成熟. https://www.awe.com.cn/contents/30/16781.html] | |
| [7] | Bartneck, C. (2023). Godspeed questionnaire series: Translations and usage. In C. U. Krägeloh, M. Alyami, & O. N. Medvedev (Eds.), International handbook of behavioral health assessment (pp. 1-35). Springer International Publishing. |
| [8] | Bayne, T., Seth, A. K., Massimini, M., Shepherd, J., Cleeremans, A., Fleming, S. M., … Mudrik, L. (2024). Tests for consciousness in humans and beyond. Trends in Cognitive Sciences, 28(5), 454-466. |
| [9] | Bernotat, J., Eyssel, F., & Sachse, J. (2019). The (fe)male robot: How robot body shape impacts first impressions and trust towards robots. International Journal of Social Robotics, 13(3), 477-489. |
| [10] | Biermann, H., Brauner, P., & Ziefle, M. (2021). How context and design shape human-robot trust and attributions. Paladyn, Journal of Behavioral Robotics, 12(1), 74-86. |
| [11] | Billings, D. R., Schaefer, K. E., Llorens, N., & Hancock, P. A. (2012). What is trust? Defining the construct across domains. Poster presented at the American Psychological Association Conference. Division 21, Orlando, FL, USA, August 2012. |
| [12] | Bojić, L., Stojković, I., & Jolić Marjanović, Z. (2024). Signs of consciousness in AI: Can GPT-3 tell how smart it really is? Humanities and Social Sciences Communications, 11, 1631. |
| [13] | Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … Amodei, D. (2020). Language models are few-shot learners. Proceedings of the 34th International Conference on Neural Information Processing Systems (pp. 1877-1901). |
| [14] | Brühlmann, F., Petralito, S., Rieser, D. C., Aeschbach, L. F., & Opwis, K. (2020). TrustDiff: Development and validation of a semantic differential for user trust on the web. Journal of Usability Studies, 16(1), 29-48. |
| [15] | Burnett, C., Norman, T. J., & Sycara, K. (2011). Trust decision-making in multi-agent systems. Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI-11)(pp.115-120). AAAI Press. |
| [16] | Cagiltay, B., & Mutlu, B. (2024, March). Toward family-robot interactions: A family-centered framework in HRI. Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (pp. 76-85). ACM. |
| [17] | Caine, K., Šabanovic, S., & Carter, M. (2012). The effect of monitoring by cameras and robots on the privacy enhancing behaviors of older adults. Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction (pp. 343-350). ACM. |
| [18] | Campbell, J. I., & Thompson, V. A. (2012). MorePower 6.0 for ANOVA with relational confidence intervals and Bayesian analysis. Behavior Research Methods, 44, 1255-1265. |
| [19] | Che, M., Lum, K. M., & Wong, Y. D. (2021). Users’ attitudes on electric scooter riding speed on shared footpath: A virtual reality study. International Journal of Sustainable Transportation, 15(2), 152-161. |
| [20] | de Visser, E. J., Peeters, M. M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., & Neerincx, M. A. (2020). Towards a theory of longitudinal trust calibration in human-robot teams. International Journal of Social Robotics, 12(2), 459-478. |
| [21] | Demszky, D., Yang, D., Yeager, D. S., Bryan, C. J., Clapper, M., Chandhok, S., … Pennebaker, J. W. (2023). Using large language models in psychology. Nature Reviews Psychology, 2(11), 688-701. |
| [22] | Dikmen, M., & Burns, C. (2017). Trust in autonomous vehicles: The case of Tesla autopilot and summon. 2017 IEEE International conference on systems, man, and cybernetics (SMC) (pp. 1093-1098). IEEE. |
| [23] | Dillion, D., Tandon, N., Gu, Y., & Gray, K. (2023). Can AI language models replace human participants? Trends in Cognitive Sciences, 27(7), 597-600. |
| [24] | Fernandes, F. E., Yang, G., Do, H. M., & Sheng, W. (2016, August). Detection of privacy-sensitive situations for social robots in smart homes. 2016 IEEE International Conference on Automation Science and Engineering (CASE) (pp. 727-732). IEEE. |
| [25] | Ferrari, F., Paladino, M. P., & Jetten, J. (2016). Blurring human-machine distinctions: Anthropomorphic appearance in social robots as a threat to human distinctiveness. International Journal of Social Robotics, 8(2), 287-302. |
| [26] | Gompei, T., & Umemuro, H. (2018). Factors and development of cognitive and affective trust on social robots. Social Robotics: 10th International Conference, ICSR 2018, Qingdao, China, November 28-30. |
| [27] | Gorsuch, R. L. (1997). Exploratory factor analysis: Its role in item analysis. Journal of Personality Assessment, 68(3), 532-560. |
| [28] | Grossmann, I., Feinberg, M., Parker, D. C., Christakis, N. A., Tetlock, P. E., & Cunningham, W. A. (2023). AI and the transformation of social science research. Science, 380(6650), 1108-1109. |
| [29] | Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157-169. |
| [30] | Hamid, O. H. (2023). ChatGPT and the Chinese room argument: An eloquent AI conversationalist lacking true understanding and consciousness. 2023 9th International Conference on Information Technology Trends (ITT) (pp. 238-241). |
| [31] | Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517-527. |
| [32] | Hertzog, M. A. (2008). Considerations in determining sample size for pilot studies. Research in Nursing & Health, 31(2), 180-191. |
| [33] | Ho, C.-C., & MacDorman, K. F. (2017). Measuring the uncanny valley effect. International Journal of Social Robotics, 9(1), 129-139. |
| [34] | Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407-434. |
| [35] | Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55. |
| [36] | Jian, J.-Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53-71. |
| [37] | Kao, Y. H., & Wang, W. J. (2015, July). Design and implementation of a family robot. 2015 12th International Joint Conference on Computer Science and Software Engineering (pp. 251-256). IEEE. |
| [38] | King, E., Yu, H., Lee, S., & Julien, C. (2023). Get ready for a party: Exploring smarter smart spaces with help from large language models. arXiv:2303.14143 |
| [39] | Klein, R. (2007). Internet-based patient-physician electronic communication applications: Patient acceptance and trust. E-Service Journal, 5(2), 27-52. |
| [40] | Kundu, S. (2023). Measuring trustworthiness is crucial for medical AI tools. Nature Human Behaviour, 7(11), 1812-1813. |
| [41] | Lee, I. (2021). Service robots: A systematic literature review. Electronics, 10(21), 2658. |
| [42] | Lee, J., & Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10), 1243-1270. |
| [43] | Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. |
| [44] | Lewis, P. R., & Marsh, S. (2022). What is it like to trust a rock? A functionalist perspective on trust and trustworthiness in artificial intelligence. Cognitive Systems Research, 72, 33-49. |
| [45] | Leyzberg, D., Spaulding, S., & Scassellati, B. (2014, March). Personalizing robot tutors to individuals' learning differences. Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction (pp. 423-430). |
| [46] | Li, C., & Qi, Y. (2025). Toward accurate psychological simulations: Investigating LLMs’ responses to personality and cultural variables. Computers in Human Behavior, 170, 108687. |
| [47] | Li, Y., Huang, Y., Lin, Y., Wu, S., Wan, Y., & Sun, L. (2024). I think, therefore I am: Benchmarking awareness of large language models using AwareBench. arXiv:2401.17882 |
| [48] | Lin, P.-H., & Chen, W.-H. (2022). Factors That influence consumers’ sustainable apparel purchase intention: The moderating effect of generational cohorts. Sustainability, 14(14), 8950. |
| [49] | Liu, Y., Li, S., Liu, Y., Wang, Y., Ren, S., Li, L., … Hou, L. (2024). TempCompass: Do video LLMs really understand videos? arXiv:2403.00476 |
| [50] | Ma, Y., Li, S., Qin, S., & Qi, Y. (2020). Factors affecting trust in the autonomous vehicle: A survey of primary school students and parent perceptions. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), 1, (pp.2020-2027). |
| [51] | Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human-human and human-automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277-301. |
| [52] | Malle, B. F., & Ullman, D. (2021). A multidimensional conception and measure of human-robot trust. In C. S. Nam & J. B. Lyons (Eds.), Trust in Human-Robot Interaction (pp. 3-25). Elsevier Academic Press. |
| [53] | Marcu, G., Lin, I., Williams, B., Robert, L. P., & Schaub, F. (2023). “Would I feel more secure with a robot?”: Understanding perceptions of security robots in public spaces. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW2), 322:1-322:34. |
| [54] | Marsh, H. W., Hau, K.-T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s (1999) findings. Structural Equation Modeling: A Multidisciplinary Journal, 11(3), 320-341. |
| [55] | Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734. |
| [56] | Mei, Q., Xie, Y., Yuan, W., & Jackson, M. O. (2024). A Turing test of whether AI chatbots are behaviorally similar to humans. Proceedings of the National Academy of Sciences, 121(9), e2313925121. |
| [57] | Meng, J. (2024). AI emerges as the frontier in behavioral science. Proceedings of the National Academy of Sciences, 121(10), e2401336121. |
| [58] | Miao, R., Jia, Q., Sun, F., Chen, G., & Huang, H. (2024). Hierarchical understanding in robotic manipulation: A knowledge-based framework. Actuators, 13(1), 28. |
| [59] | Milliez, G. (2018). Buddy: A companion robot for the whole family. Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 40. |
| [60] | Mori, M. (1970). Bukimi no tani (the uncanny valley). Energy, 7(4), 33-35. |
| [61] | Mou, X., Ding, X., He, Q., Wang, L., Liang, J., Zhang, X., Sun, L., Lin, J., Zhou, J., Huang, X., & Wei, Z. (2024). From individual to society: A survey on social simulation driven by large language model-based agents. arXiv:2412.03563 |
| [62] | Muir, B. M., & Moray, N. (1996). Trust in automation: II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics, 39(3), 429-460. |
| [63] | Nass, C., Moon, Y., Fogg, B. J., Reeves, B., & Dryer, D. C. (1995). Can computer personalities be human personalities? International Journal of Human-Computer Studies, 43(2), 223-239. |
| [64] | Natarajan, M., & Gombolay, M. (2020, March). Effects of anthropomorphism and accountability on trust in human robot interaction. Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction (pp. 33-42). |
| [65] | Nawaz, N. (2019). Robotic process automation for recruitment process. International Journal of Advanced Research in Engineering & Technology, 10(2), 608-611. |
| [66] | Nilsson, N. J. (2003). Artificial intelligence: A new synthesis. Morgan Kaufmann. |
| [67] | Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903. |
| [68] | Prakash, A., Kemp, C. C., & Rogers, W. A. (2014, March). Older adults' reactions to a robot's appearance in the context of home use. Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction (pp. 268-269). |
| [69] | Prassler, E., Munich, M. E., Pirjanian, P., & Kosuge, K. (2016). Domestic robotics. In B. Siciliano & O. Khatib (Eds.), Springer handbook of robotics (pp. 1729-1758). Springer International Publishing. |
| [70] | Qi, Y., Chen, J., Qin, S., & Du, F. (2024). Human-AI mutual trust in the era of artificial general intelligence. Advances in Psychological Science, 32(12), 2124-2136. |
| [齐玥, 陈俊廷, 秦邵天, 杜峰. (2024). 通用人工智能时代的人与AI信任. 心理科学进展, 32(12), 2124-2136.] | |
| [71] | Ramchurn, S. D., Huynh, D., & Jennings, N. R. (2004). Trust in multi-agent systems. The Knowledge Engineering Review, 19(1), 1-25. |
| [72] | Rane, P., Mhatre, V., & Kurup, L. (2014). Study of a home robot: JIBO. International Journal of Engineering Research & Technology, 3(10), 490-493. |
| [73] | Robinette, P., Howard, A. M., & Wagner, A. R. (2017). Effect of robot performance on human-robot trust in time-critical situations. IEEE Transactions on Human-Machine Systems, 47(4), 425-436. |
| [74] | Sanders, T., Kaplan, A., Koch, R., Schwartz, M., & Hancock, P. A. (2019). The relationship between trust and use choice in human-robot interaction. Human Factors, 61(4), 614-626. |
| [75] | Sartori, G., & Orrù, G. (2023). Language models and psychological sciences. Frontiers in Psychology, 14, 1279317. |
| [76] | Schaefer, K. E. (2016). Measuring trust in human robot interactions: Development of the “Trust Perception Scale-HRI”. In R. Mittu, D. Sofge, A. Wagner, & W. Lawless (Eds.), Robust intelligence and trust in autonomous systems (pp. 191-218). Springer, Boston, MA. |
| [77] | Schulz, T., & Herstad, J. (2017). Walking away from the robot: Negotiating privacy with a robot. Proceedings of the 31st International BCS Human Computer Interaction Conference (pp.1-6). ACM. |
| [78] | Shanahan, M., McDonell, K., & Reynolds, L. (2023). Role play with large language models. Nature, 623(7987), 493-498. |
| [79] | Shao, Y., Li, L., Dai, J., & Qiu, X. (2023). Character-LLM: A trainable agent for role-playing. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 13153-13187). |
| [80] | Söderlund, M. (2023). Service robots and artificial morality: An examination of robot behavior that violates human privacy. Journal of Service Theory and Practice, 33(7), 52-72. |
| [81] | Srinivasan, S. S., Alshareef, A., Hwang, A. V., Kang, Z., Kuosmanen, J., Ishida, K.,... Traverso, G. (2022). RoboCap: Robotic mucus-clearing capsule for enhanced drug delivery in the gastrointestinal tract. Science Robotics, 7(70), eabp9066. |
| [82] | Steiger, J. H. (1990). Structural model evaluation and modification: An interval estimation approach. Multivariate Behavioral Research, 25(2), 173-180. |
| [83] | Sun, X., Zhang, Y., Hou, L., Zhou, W., & Zhang, S. (2020). Review on artificial intelligence products and service system. Packaging Engineering, 41(10), 49-61. |
| [孙效华, 张义文, 侯璐, 周雯洁, 张绳宸. (2020). 人工智能产品与服务体系研究综述. 包装工程, 41(10), 49-61.] | |
| [84] | Sundar, S. S., & Nass, C. (2000). Source orientation in human-computer interaction: Programmer, networker or independent social actor? Communication Research, 27(6), 683-703. |
| [85] | Sviestins, E., Mitsunaga, N., Kanda, T., Ishiguro, H., & Hagita, N. (2007). Speed adaptation for a robot walking with a human. Proceedings of the ACM/IEEE international conference on Human-robot interaction (pp. 349-356). |
| [86] | Torre, I., Carrigan, E., McDonnell, R., Domijan, K., McCabe, K., & Harte, N. (2019, October). The effect of multimodal emotional expression and agent appearance on trust in human-agent interaction. Proceedings of the 12th ACM SIGGRAPH Conference on Motion, Interaction and Games (pp. 1-6). |
| [87] | Tsui, K. M., Desai, M., & Yanco, H. A. (2010, March). Considering the bystander's perspective for indirect human- robot interaction. 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 129-130). |
| [88] | Walters, M. L., Koay, K. L., Syrdal, D. S., Dautenhahn, K., & Te Boekhorst, R. (2009). Preferences and perceptions of robot appearance and embodiment in human-robot interaction trials. Procs of New Frontiers in Human-Robot Interaction: Symposium at AISB09 Convention (pp. 136-143) |
| [89] | Wan, J., Tang, S., Yan, H., Li, D., Wang, S., & Vasilakos, A. V. (2016). Cloud robotics: Current status and open issues. IEEE Access, 4, 2797-2807. |
| [90] | Wang, C., Chen, W. C., Huang, L., Hou, S. Y., & Wang, Y. W. (2024). Robots abide by ethical principles promote human-robot trust? The reverse effect of decision types and the human-robot projection hypothesis. Acta Psychologica Sinica, 56(2), 194-209. |
| [王晨, 陈为聪, 黄亮, 侯苏豫, 王益文. (2024). 机器人遵从伦理促进人机信任?决策类型反转效应与人机投射假说. 心理学报, 56(2), 194-209.] | |
| [91] | Wang, K., Wu, J., Sun, Y., Chen, J., Pu, Y., & Qi, Y. (2024). Trust in human and virtual live streamers: The role of integrity and social presence. International Journal of Human-Computer Interaction, 40(23), 8274-8294. |
| [92] | Webb, T., Holyoak, K. J., & Lu, H. (2023). Emergent analogical reasoning in large language models. Nature Human Behaviour, 7(9), 1526-1541. |
| [93] | Xie, C., Chen, C., Jia, F., Ye, Z., Shu, K., Bibi, A., … Li, G. (2024). Can large language model agents simulate human trust behaviors? arXiv:2402.04559 |
| [94] | Xie, Y., & Zhou, R. (2025). The bidirectional trust in the context of new human-machine relationships. Advances in Psychological Science, 33(6), 916-932. |
| [解煜彬, 周荣刚. (2025). 新型人机关系下的人机双向信任. 心理科学进展, 33(6), 916-932.] | |
| [95] | Xu, R., Sun, Y., Ren, M., Guo, S., Pan, R., Lin, H., Sun, L., & Han, X. (2024). AI for social science and social science of AI: A survey. Information Processing and Management, 61(3), 103665. |
| [96] | Xu, W., Gao, Z. F., & Ge, L. Z. (2024). New research paradigms and agenda of human factors science in the intelligence era. Acta Psychologica Sinica, 56(3), 363-382. |
| [许为, 高在峰, 葛列众. (2024). 智能时代人因科学研究的新范式取向及重点. 心理学报, 56(3), 363-382.] | |
| [97] | Xu, W., & Ge, L. Z. (2020). Engineering psychology in the era of artificial intelligence. Advances in Psychological Science, 28(9), 1409-1425. |
| [许为, 葛列众. (2020). 智能时代的工程心理学. 心理科学进展, 28(9), 1409-1425.] | |
| [98] | Yonekura, H., Tanaka, F., Mizumoto, T., & Yamaguchi, H. (2024). Generating human daily activities with LLM for smart home simulator agents. 2024 International Conference on Intelligent Environments (IE), (pp. 93-96). |
| [99] | You, S., & Robert Jr, L. P. (2018, February). Human-robot similarity and willingness to work with a robotic co-worker. Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (pp. 251-260). |
| [100] | Zacharaki, A., Kostavelis, I., Gasteratos, A., & Dokas, I. (2020). Safety bounds in human robot interaction: A survey. Safety Science, 127, 104667. |
| [101] | Zhang, G., Chong, L., Kotovsky, K., & Cagan, J. (2023). Trust in an AI versus a human teammate: The effects of teammate identity and performance on Human-AI cooperation. Computers in Human Behavior, 139, 107536. |
| [102] | Zhang, J., Li, S., Zhang, J., Du, F., Qi, Y., & Liu, X. (2020). A literature review of the research on the uncanny valley. In P. L. Rau (Ed.), Cross-cultural design. User experience of products, services, and intelligent environments (Lecture notes in computer science, Vol. 12192). Springer. |
| [103] | Zou, H., Wang, P., Yan, Z., Sun, T., & Xiao, Z. (2024). Can LLM “self-report”? Evaluating the validity of self-report scales in measuring personality design in LLM-based Chatbots. arXiv:2412.00207 |