Advances in Psychological Science ›› 2024, Vol. 32 ›› Issue (12): 2124-2136. doi: 10.3724/SP.J.1042.2024.02124
• Research Frontiers •
QI Yue1,2, CHEN Junting1,2, QIN Shaotian1,2, DU Feng3,4
Received: 2024-01-29
Online: 2024-12-15
Published: 2024-09-24
Corresponding authors: QI Yue, E-mail: qiy@ruc.edu.cn; DU Feng, E-mail: duf@psych.ac.cn
Abstract: As technology advances, artificial general intelligence is beginning to take shape, and human-computer interaction and human-machine relationships are entering a new era. The trust relationship between humans and artificial intelligence (AI) is likewise shifting from one-way human trust in AI toward mutual trust between humans and AI. Building on interpersonal trust models from social psychology and human-machine trust models from engineering psychology, this study proposes a dynamic model of human-AI mutual trust from the perspective of interpersonal trust. The model treats humans and AI as equal parties in trust building, and combines the factors influencing the trustor and the trustee with outcome feedback and behavioral adjustment to form a basic theoretical framework of dynamic human-AI mutual trust, emphasizing two key features of human-AI trust: mutuality in the relational dimension and dynamics in the temporal dimension. For the first time, the model brings AI's trust in humans, and the dynamic interactive process of their mutual trust, into the analysis, offering a new theoretical perspective for research on human-AI trust. Future research should pay closer attention to how AI's trust in humans is established and maintained, to quantitative models of human-AI mutual trust, and to human-AI mutual trust in multi-agent interaction.
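The abstract presents this framework conceptually rather than as equations. Purely as an illustration of what a quantitative model of dynamic mutual trust might look like, the sketch below simulates two parties, a human and an AI, that each adjust their behavior based on their current trust, receive shared outcome feedback, and update their trust accordingly. All names, update rules, and parameter values here (update, LEARNING_RATE, NOISE, the 0.4 reliance threshold) are hypothetical assumptions for illustration and are not taken from the paper.

```python
import random

# Toy simulation of dynamic human-AI mutual trust.
# Hypothetical parameters; the paper proposes a conceptual
# framework, not these equations.
LEARNING_RATE = 0.2   # how strongly one interaction shifts trust
NOISE = 0.1           # chance an interaction fails despite cooperation

def update(trust, outcome, lr=LEARNING_RATE):
    """Move trust toward the observed outcome (1 = positive, 0 = negative)."""
    return trust + lr * (outcome - trust)

human_trust_in_ai = 0.5   # human's trust in the AI, in [0, 1]
ai_trust_in_human = 0.5   # AI's trust in the human, in [0, 1]

for step in range(20):
    # Behavioral adjustment: each side relies on the other
    # only if its current trust exceeds a (hypothetical) threshold.
    human_cooperates = human_trust_in_ai > 0.4
    ai_cooperates = ai_trust_in_human > 0.4

    # Outcome feedback: the joint interaction succeeds when both
    # cooperate, apart from occasional random failures.
    success = human_cooperates and ai_cooperates and random.random() > NOISE
    outcome = 1.0 if success else 0.0

    # Both parties update trust from the same outcome, making trust
    # mutual and dynamic rather than one-directional and static.
    human_trust_in_ai = update(human_trust_in_ai, outcome)
    ai_trust_in_human = update(ai_trust_in_human, outcome)

print(f"human->AI trust: {human_trust_in_ai:.2f}, "
      f"AI->human trust: {ai_trust_in_human:.2f}")
```

Under these toy assumptions, a few early failures can push both trust levels below the reliance threshold, after which neither side cooperates and mutual trust keeps decaying, a simple illustration of why the temporal ("dynamic") dimension the model emphasizes matters alongside the relational ("mutual") one.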
QI Yue, CHEN Junting, QIN Shaotian, DU Feng. (2024). Human-AI mutual trust in the era of artificial general intelligence. Advances in Psychological Science, 32(12), 2124-2136.