Advances in Psychological Science ›› 2024, Vol. 32 ›› Issue (12): 2124-2136. doi: 10.3724/SP.J.1042.2024.02124
Human-AI mutual trust in the era of artificial general intelligence
QI Yue1,2, CHEN Junting1,2, QIN Shaotian1,2, DU Feng3,4
Received: 2024-01-29
Online: 2024-12-15
Published: 2024-09-24
QI Yue, CHEN Junting, QIN Shaotian, DU Feng. Human-AI mutual trust in the era of artificial general intelligence[J]. Advances in Psychological Science, 2024, 32(12): 2124-2136.