Advances in Psychological Science ›› 2024, Vol. 32 ›› Issue (12): 2124-2136. doi: 10.3724/SP.J.1042.2024.02124
• Regular Articles •
QI Yue1,2, CHEN Junting1,2, QIN Shaotian1,2, DU Feng3,4
Received: 2024-01-29
Online: 2024-12-15
Published: 2024-09-24
Contact: QI Yue, DU Feng
E-mail: qiy@ruc.edu.cn; duf@psych.ac.cn
QI Yue, CHEN Junting, QIN Shaotian, DU Feng. Human-AI mutual trust in the era of artificial general intelligence[J]. Advances in Psychological Science, 2024, 32(12): 2124-2136.
[1] | Gao Z. F., Li W. M., Liang J. W., Pan H. X., Xu W., & Shen M. W. (2021). Trust in automated vehicles. Advances in Psychological Science, 29(12), 2172-2183. (in Chinese) doi: 10.3724/SP.J.1042.2021.02172 |
[2] | He J. F. (2019). Safe and trustworthy artificial intelligence. Information Security and Communications Privacy, 10, 5-8. (in Chinese) |
[3] | Xu W., Gao Z. F., & Ge L. Z. (2024). New paradigm orientations and priorities of human factors science research in the intelligent era. Acta Psychologica Sinica, 56(3), 363-382. (in Chinese) doi: 10.3724/SP.J.1041.2024.00363 |
[4] | Xu W., & Ge L. Z. (2020). Engineering psychology in the intelligent era. Advances in Psychological Science, 28(9), 1409-1425. (in Chinese) doi: 10.3724/SP.J.1042.2020.01409 |
[5] | Yan H. X. (2019). Decoding the ethics of artificial intelligence with trust. 人工智能 [Artificial Intelligence], (4), 7. (in Chinese) |
[6] | Zhao J., Sun X. J., Zhou Z. K., Wei H., & Niu G. F. (2013). Interpersonal trust in online interaction. Advances in Psychological Science, 21(8), 1493-1501. (in Chinese) doi: 10.3724/SP.J.1042.2013.01493 |
[7] | Ajenaghughrure I. B., da Costa Sousa S. C., & Lamas D. (2020, June). Risk and trust in artificial intelligence technologies: A case study of autonomous vehicles. In 2020 13th International Conference on Human System Interaction (pp. 118-123), Tokyo, Japan. doi: 10.1109/HSI49210.2020.9142686 |
[8] | Akash K., Hu W.-L., Jain N., & Reid T. (2018). A classification model for sensing human trust in machines using EEG and GSR. ACM Transactions on Interactive Intelligent Systems, 8(4), 1-20. doi: 10.1145/3132743 |
[9] | Aly A., & Tapus A. (2016). Towards an intelligent system for generating an adapted verbal and nonverbal combined behavior in human-robot interaction. Autonomous Robots, 40(2), 193-209. doi: 10.1007/s10514-015-9444-1 |
[10] | Atoyan H., Duquet J.-R., & Robert J.-M. (2006, April). Trust in new decision aid systems. In Proceedings of the 18th Conference on l'Interaction Homme-Machine (pp. 115-122), Montreal, Canada. doi: 10.1145/1132736.1132751 |
[11] | Bartneck C., & Forlizzi J. (2004, September). A design-centered framework for social human-robot interaction. In IEEE International Workshop on Robot & Human Interactive Communication (pp. 591-594), Kurashiki, Japan. doi: 10.1109/ROMAN.2004.1374827 |
[12] | Biddle L., & Fallah S. (2021). A novel fault detection, identification and prediction approach for autonomous vehicle controllers using SVM. Automotive Innovation, 4(3), 301-314. doi: 10.1007/s42154-021-00138-0 |
[13] | Bigman Y. E., & Gray K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34. doi: 10.1016/j.cognition.2018.08.003; pmid: 30107256 |
[14] | Billings D. R., Schaefer K. E., Llorens N., & Hancock P. A. (2012, August). What is trust? Defining the construct across domains. Poster presented at the American Psychological Association Conference, Division 21, Orlando, FL, USA. |
[15] | Bindewald J. M., Rusnock C. F., & Miller M. E. (2018). Measuring human trust behavior in human-machine teams. In Advances in Human Factors in Simulation and Modeling (Vol. 591, pp. 47-58), Los Angeles, USA. Springer International Publishing. doi: 10.1007/978-3-319-60591-3_5 |
[16] | Binz M., & Schulz E. (2023). Using cognitive psychology to understand GPT-3. Proceedings of the National Academy of Sciences, 120(6), e2218523120. doi: 10.1073/pnas.2218523120 |
[17] | Bubeck S., Chandrasekaran V., Eldan R., Gehrke J., Horvitz E., Kamar E.,... Zhang Y. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712. |
[18] | Chen I.-R., Bastani F. B., & Tsao T.-W. (1995). On the reliability of AI planning software in real-time applications. IEEE Transactions on Knowledge and Data Engineering, 7(1), 4-13. doi: 10.1109/69.368522 |
[19] | Chen J. Y. C., Barnes M. J., & Harper-Sciarini M. (2011). Supervisory control of multiple robots: Human-performance issues and user-interface design. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 41(4), 435-454. doi: 10.1109/TSMCC.2010.2056682 |
[20] | Christoforakos L., Gallucci A., Surmava-Große T., Ullrich D., & Diefenbach S. (2021). Can robots earn our trust the same way humans do? A systematic exploration of competence, warmth, and anthropomorphism as determinants of trust development in HRI. Frontiers in Robotics and AI, 8, 640444. doi: 10.3389/frobt.2021.640444 |
[21] | Cofta P. (2007). Trust, complexity and control: Confidence in a convergent world. John Wiley & Sons, Ltd. doi: 10.1002/9780470517857 |
[22] | de Visser E. J., Monfort S. S., McKendrick R., Smith M. A. B., McKnight P. E., Krueger F., & Parasuraman R. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22(3), 331-349. doi: 10.1037/xap0000092 |
[23] | de Vries P., Midden C., & Bouwhuis D. (2003). The effects of errors on system trust, self-confidence, and the allocation of control in route planning. International Journal of Human-Computer Studies, 58(6), 719-735. doi: 10.1016/S1071-5819(03)00039-9 |
[24] | Deutsch M. (1962). Cooperation and trust: Some theoretical notes. In M. R. Jones (Ed.), Nebraska Symposium on Motivation (pp. 275-320). University of Nebraska Press. |
[25] | Dikmen M., & Burns C. (2017, October). Trust in autonomous vehicles: The case of Tesla Autopilot and Summon. In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 1093-1098), Banff, Canada. doi: 10.1109/SMC.2017.8122757 |
[26] | Eslami M., Rickman A., Vaccaro K., Aleyasen A., Vuong A., Karahalios K., … Sandvig C. (2015, April). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 153-162), Seoul, Republic of Korea. doi: 10.1145/2702123.2702556 |
[27] | Fiske S. T., Cuddy A. J. C., Glick P., & Xu J. (2002). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82(6), 878-902. doi: 10.1037/0022-3514.82.6.878; pmid: 12051578 |
[28] | Fiske S. T., Xu J., Cuddy A. C., & Glick P. (1999). (Dis)respecting versus (Dis)liking: Status and interdependence predict ambivalent stereotypes of competence and warmth. Journal of Social Issues, 55(3), 473-489. doi: 10.1111/0022-4537.00128 |
[29] | Fogg B. J., & Tseng H. (1999, May). The elements of computer credibility. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 80-87), Pittsburgh, USA. doi: 10.1145/302979.303001 |
[30] | Forcier M. B., Khoury L., & Vézina N. (2020). Liability issues for the use of artificial intelligence in health care in Canada: AI and medical decision-making. Dalhousie Medical Journal, 46(2), 7-11. doi: 10.15273/dmj.Vol46No2.10140 |
[31] | French B., Duenser A., & Heathcote A. (2018). Trust in automation - A literature review. Commonwealth Scientific and Industrial Research Organisation Report, EP184082. |
[32] | Frison A.-K., Wintersberger P., Riener A., Schartmüller C., Boyle L. N., Miller E., & Weigl K. (2019, May). In UX we trust: Investigation of aesthetics and usability of driver-vehicle interfaces and their impact on the perception of automated driving. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-13), Glasgow, UK. doi: 10.1145/3290605.3300374 |
[33] | Glikson E., & Woolley A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660. doi: 10.5465/annals.2018.0057 |
[34] | Gockley R., Simmons R., & Forlizzi J. (2006, September). Modeling affect in socially interactive robots. In ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication (pp. 558-563), Hatfield, UK. doi: 10.1109/ROMAN.2006.314448 |
[35] | Gremillion G. M., Metcalfe J. S., Marathe A. R., Paul V. J., Christensen J., Drnec K., … Atwater C. (2016). Analysis of trust in autonomy for convoy operations. In Micro and nanotechnology sensors, systems, and applications, 9836, 356-365. doi: 10.1117/12.2224009 |
[36] | Groom V., & Nass C. (2007). Can robots be teammates?: Benchmarks in human-robot teams. Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems, 8(3), 483-500. doi: 10.1075/is.8.3.10gro |
[37] | Hancock P. A., Billings D. R., Schaefer K. E., Chen J. Y. C., de Visser E. J., & Parasuraman R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 53(5), 517-527. doi: 10.1177/0018720811417254 |
[38] | Hancock P. A., Nourbakhsh I., & Stewart J. (2019). On the future of transportation in an era of automated and autonomous vehicles. Proceedings of the National Academy of Sciences, 116(16), 7684-7691. doi: 10.1073/pnas.1805770115 |
[39] | Hardin R. (2002). Trust and trustworthiness. Russell Sage Foundation. |
[40] | Hoff K. A., & Bashir M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors: The Journal of the Human Factors and Ergonomics Society, 57(3), 407-434. doi: 10.1177/0018720814547570 |
[41] | Ignatious H. A., & Khan M. (2022). An overview of sensors in autonomous vehicles. Procedia Computer Science, 198, 736-741. doi: 10.1016/j.procs.2021.12.315 |
[42] | Jian J. Y., Bisantz A. M., & Drury C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53-71. doi: 10.1207/S15327566IJCE0401_04 |
[43] | Khastgir S., Birrell S., Dhadyalla G., & Jennings P. (2017). Calibrating trust to increase the use of automated systems in a vehicle. In Advances in Human Aspects of Transportation: Proceedings of the AHFE 2016 International Conference on Human Factors in Transportation, 484, 535-546. Springer International Publishing. doi: 10.1007/978-3-319-41682-3_45 |
[44] | Kim M., Park B. K., & Young L. (2020). The psychology of motivated versus rational impression updating. Trends in Cognitive Sciences, 24(2), 101-111. doi: 10.1016/j.tics.2019.12.001; pmid: 31917061 |
[45] | Kulms P., & Kopp S. (2018). A social cognition perspective on human-computer trust: The effect of perceived warmth and competence on trust in decision-making with computers. Frontiers in Digital Humanities, 5, 14. doi: 10.3389/fdigh.2018.00014 |
[46] | Lee J. D., & See K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. doi: 10.1518/hfes.46.1.50_30392; pmid: 15151155 |
[47] | Lewis P. R., & Marsh S. (2022). What is it like to trust a rock? A functionalist perspective on trust and trustworthiness in artificial intelligence. Cognitive Systems Research, 72, 33-49. doi: 10.1016/j.cogsys.2021.11.001 |
[48] | Liao T., & MacDonald E. F. (2021). Manipulating users’ trust of autonomous products with affective priming. Journal of Mechanical Design, 143(5), 051402. doi: 10.1115/1.4048640 |
[49] | Longoni C., Bonezzi A., & Morewedge C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629-650. doi: 10.1093/jcr/ucz013 |
[50] | Luhmann N. (1990). Technology, environment and social risk: A systems perspective. Organization & Environment, 4(3), 223-231. doi: 10.1177/108602669000400305 |
[51] | Ma Y., Li S., Qin S., & Qi Y. (2020, December). Factors affecting trust in the autonomous vehicle: A survey of primary school students and parent perceptions. In 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (pp. 2020-2027), Guangzhou, China. doi: 10.1109/TrustCom50675.2020.00277 |
[52] | Madsen M., & Gregor S. (2000, December). Measuring human-computer trust. In 11th Australasian Conference on Information Systems (Vol. 53, pp. 6-8), Brisbane, Australia. |
[53] | Mayer R. C., Davis J. H., & Schoorman F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734. doi: 10.5465/amr.1995.9508080335 |
[54] | McKnight D. H., & Chervany N. L. (1996). The meaning of trust [Technical Report]. Management Information Systems Research Center, University of Minnesota. |
[55] | Mende-Siedlecki P., Cai Y., & Todorov A. (2013). The neural dynamics of updating person impressions. Social Cognitive and Affective Neuroscience, 8(6), 623-631. doi: 10.1093/scan/nss040; pmid: 22490923 |
[56] | Merritt S. M., & Ilgen D. R. (2008). Not all trust is created equal: Dispositional and history-based trust in human-automation interactions. Human Factors: The Journal of the Human Factors and Ergonomics Society, 50(2), 194-210. doi: 10.1518/001872008X288574 |
[57] | Mohanty S., & Vyas S. (2018). Putting it all together: Toward a human-machine collaborative ecosystem. In S. Mohanty & S. Vyas (Eds.), How to Compete in the Age of Artificial Intelligence: Implementing a collaborative human-machine strategy for your business (pp. 215-229), Apress, Berkeley, CA, USA. doi: 10.1007/978-1-4842-3808-0_11 |
[58] | Möhlmann M., & Zalmanson L. (2017, December). Hands on the wheel: Navigating algorithmic management and Uber drivers' autonomy. In Proceedings of the International Conference on Information Systems (pp. 10-13), Seoul, Republic of Korea. |
[59] | Molnar L. J., Ryan L. H., Pradhan A. K., Eby D. W., St. Louis R. M., & Zakrajsek J. S. (2018). Understanding trust and acceptance of automated vehicles: An exploratory simulator study of transfer of control between automated and manual driving. Transportation Research Part F: Traffic Psychology and Behaviour, 58, 319-328. doi: 10.1016/j.trf.2018.06.004 |
[60] | Noah B. E., Gable T. M., Schuett J. H., & Walker B. N. (2016, October). Forecasted affect towards automated and warning safety features. In Adjunct Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 123-128), Ann Arbor, USA. doi: 10.1145/3004323.3004337 |
[61] | Noah B. E., Wintersberger P., Mirnig A. G., Thakkar S., Yan F., Gable T. M., Kraus J., & McCall R. (2017, September). First workshop on trust in the age of automated driving. Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct (pp. 15-21), Oldenburg, Germany. doi: 10.1145/3131726.3131733 |
[62] | Oleson K. E., Billings D. R., Kocsis V., Chen J. Y. C., & Hancock P. A. (2011, February). Antecedents of trust in human-robot collaborations. In 2011 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA) (pp. 175-178), Miami Beach, USA. doi: 10.1109/COGSIMA.2011.5753439 |
[63] | Parasuraman R., & Riley V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society, 39(2), 230-253. doi: 10.1518/001872097778543886 |
[64] | Payre W., Cestac J., & Delhomme P. (2016). Fully automated driving: Impact of trust and practice on manual control recovery. Human Factors: The Journal of the Human Factors and Ergonomics Society, 58(2), 229-241. doi: 10.1177/0018720815612319 |
[65] | Perry M. (2003). Distributed cognition. In J. M. Carroll (Ed.), HCI models, theories, and frameworks: Toward a multidisciplinary science (pp. 193-223). Morgan Kaufmann. |
[66] | Rahwan I., Cebrian M., Obradovich N., Bongard J., Bonnefon J.-F., Breazeal C., … Wellman M. (2019). Machine behaviour. Nature, 568(7753), 477-486. doi: 10.1038/s41586-019-1138-y |
[67] | Raj M., & Seamans R. (2019). Primer on artificial intelligence and robotics. Journal of Organization Design, 8(1), 11. doi: 10.1186/s41469-019-0050-0 |
[68] | Robinette P., Li W., Allen R., Howard A. M., & Wagner A. R. (2016, March). Overtrust of robots in emergency evacuation scenarios. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (pp. 101-108), Christchurch, New Zealand. doi: 10.1109/HRI.2016.7451740 |
[69] | Rödel C., Stadler S., Meschtscherjakov A., & Tscheligi M. (2014, September). Towards autonomous cars: The effect of autonomy levels on acceptance and user experience. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 1-8), Seattle, USA. doi: 10.1145/2667317.2667330 |
[70] | Rossi A., Dautenhahn K., Koay K. L., & Walters M. L. (2018). The impact of peoples’ personal dispositions and personalities on their trust of robots in an emergency scenario. Paladyn, Journal of Behavioral Robotics, 9(1), 137-154. doi: 10.1515/pjbr-2018-0010 |
[71] | Sanders T., Oleson K. E., Billings D. R., Chen J. Y. C., & Hancock P. A. (2011). A model of human-robot trust: Theoretical model development. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 55(1), 1432-1436. doi: 10.1177/1071181311551298 |
[72] | Schaefer K. E., Billings D. R., Szalma J. L., Adams J. K., Sanders T. L., Chen J. Y., & Hancock P. A. (2014). A meta-analysis of factors influencing the development of trust in automation: Implications for human-robot interaction [Technical Report]. Army Research Lab, Aberdeen Proving Ground, Maryland, Human Research Engineering Directorate. doi: 10.21236/ADA607926 |
[73] | Scopelliti M., Giuliani M. V., & Fornara F. (2005). Robots in a domestic setting: A psychological approach. Universal Access in the Information Society, 4(2), 146-155. doi: 10.1007/s10209-005-0118-1 |
[74] | Shiffrin R., & Mitchell M. (2023). Probing the psychology of AI models. Proceedings of the National Academy of Sciences, 120(10), e2300963120. |
[75] | Siau K., & Wang W. (2020). Artificial Intelligence (AI) ethics: Ethics of AI and ethical AI. Journal of Database Management, 31(2), 74-87. doi: 10.4018/JDM.2020040105 |
[76] | Stephanidis C., Salvendy G., Antona M., Chen J. Y. C., Dong J., Duffy V. G., … Zhou J. (2019). Seven HCI grand challenges. International Journal of Human-Computer Interaction, 35(14), 1229-1269. doi: 10.1080/10447318.2019.1619259 |
[77] | Stokes C. K., Lyons J. B., Littlejohn K., Natarian J., Case E., & Speranza N. (2010, May). Accounting for the human in cyberspace: Effects of mood on trust in automation. In 2010 International Symposium on Collaborative Technologies and Systems (pp. 180-187), Chicago, USA. doi: 10.1109/CTS.2010.5478512 |
[78] | Sullins J. P. (2010). Love and sex with robots: The evolution of human-robot relationships [Book review]. Industrial Robot, 37(4), 401-402. doi: 10.1108/ir.2010.04937dae.001 |
[79] | Urban G. L., Amyx C., & Lorenzon A. (2009). Online trust: State of the art, new frontiers, and research potential. Journal of Interactive Marketing, 23(2), 179-190. doi: 10.1016/j.intmar.2009.03.001 |
[80] | van Pinxteren M. M. E., Wetzels R. W. H., Rüger J., Pluymaekers M., & Wetzels M. (2019). Trust in humanoid robots: Implications for services marketing. Journal of Services Marketing, 33(4), 507-518. doi: 10.1108/JSM-01-2018-0045 |
[81] | Walter S., Wendt C., Böhnke J., Crawcour S., Tan J.-W., Chan A., … Traue H. C. (2014). Similarities and differences of emotions in human-machine and human-human interactions: What kind of emotions are relevant for future companion systems? Ergonomics, 57(3), 374-386. doi: 10.1080/00140139.2013.822566; pmid: 23924061 |
[82] | Wang W., & Siau K. (2019). Artificial Intelligence, machine learning, automation, robotics, future of work and future of humanity: A review and research agenda. Journal of Database Management, 30(1), 61-79. doi: 10.4018/JDM.2019010104 |
[83] | Wikipedia contributors. (2024, January 13). Psycho-Pass. In Wikipedia, The Free Encyclopedia. Retrieved from https://en.wikipedia.org/w/index.php?title=Psycho-Pass&oldid=1195338833 |
[84] | Wintersberger P., Noah B. E., Kraus J., McCall R., Mirnig A. G., Kunze A., … Walker B. N. (2018, September). Second workshop on trust in the age of automated driving. Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 56-64), Toronto, Canada. doi: 10.1145/3239092.3239099 |
[85] | Wright P., McCarthy J., & Meekison L. (2003). Making sense of experience. In M. A. Blythe, K. Overbeeke, A. F. Monk, & P. C. Wright (Eds.), Funology: From usability to enjoyment (Vol. 3, pp. 43-53). Springer Netherlands. doi: 10.1007/1-4020-2967-5_5 |
[86] | Yagoda R. E., & Gillan D. J. (2012). You want me to trust a robot? The development of a human-robot interaction trust scale. International Journal of Social Robotics, 4(3), 235-248. doi: 10.1007/s12369-012-0144-0 |
• Related Articles •
[1] | HUANG Hanjing, RAU Pei-Luen Patrick. Exploration of multi-level human-machine integration theory between elderly users and intelligent systems [J]. Advances in Psychological Science, 2025, 33(2): 223-235. |
[2] | WENG Zhigang, CHEN Xiaoxiao, ZHANG Xiaomei, ZHANG Ju. Social presence oriented toward new human-machine relationships [J]. Advances in Psychological Science, 2025, 33(1): 146-162. |
[3] | HUANG Xinyu, LI Ye. Trust dampening and trust promoting: A dual-pathway of trust calibration in human-robot interaction [J]. Advances in Psychological Science, 2024, 32(3): 527-542. |
[4] | ZHENG Yuanxia, LIU Guoxiong, XIN Cong, CHENG Li. Judging a book by its cover: The influence of facial features on children’s trust judgments [J]. Advances in Psychological Science, 2024, 32(2): 300-317. |
[5] | LU Xiaowei, GUO Zhibin, CHENG Yu, SHEN Jie, GUI Wenjun, ZHANG Lin. Evaluation of facial trustworthiness in older adults: A positivity effect and its mechanism [J]. Advances in Psychological Science, 2023, 31(8): 1496-1503. |
[6] | ZHU Ningyi, JIANG Ning, LIU Yan. The development of employees’ feeling trusted by their supervisors [J]. Advances in Psychological Science, 2022, 30(7): 1448-1462. |
[7] | QI Yue, QIN Shaotian, WANG Kexin, CHEN Wenfeng. Regulation of facial trustworthiness evaluation: The proposal and empirical verification of the experience transfer hypothesis [J]. Advances in Psychological Science, 2022, 30(4): 715-722. |
[8] | GAO Zaifeng, LI Wenmin, LIANG Jiawen, PAN Hanxi, XU Wei, SHEN Mowei. Trust in automated vehicles [J]. Advances in Psychological Science, 2021, 29(12): 2172-2183. |
[9] | QU Jiachen, GONG Zhe. Are there sex differences in trust levels? [J]. Advances in Psychological Science, 2021, 29(12): 2236-2245. |
[10] | XU Yi, LIU Yixuan. The impact of trust in technology and trust in leadership on the adoption of new technology from employee's perspective [J]. Advances in Psychological Science, 2021, 29(10): 1711-1723. |
[11] | GONG Zhe, TANG Yujie, LIU Chang. Can trust game measure trust? [J]. Advances in Psychological Science, 2021, 29(1): 19-30. |
[12] | GAO Qinglin, ZHOU Yuan. Psychological and neural mechanisms of trust formation: A perspective from computational modeling based on the decision of investor in the trust game [J]. Advances in Psychological Science, 2021, 29(1): 178-189. |
[13] | HUANG Chongrong, HU Yu. The relationship between trust and creativity in organizations: Evidence from meta-analysis [J]. Advances in Psychological Science, 2020, 28(7): 1118-1132. |
[14] | YAN Aimin, LI Yali, XIE Julan, LI Ying. Differential responses of employees to corporate social responsibility: An interpretation based on attribution theory [J]. Advances in Psychological Science, 2020, 28(6): 1004-1014. |
[15] | LI Qinggong, WANG Zhenyan, SUN Jieyuan, SHI Yan. The influence of reputation and face trustworthiness on women’s trust judgment in car-hailing scene and the moderating effect of intuitive thinking [J]. Advances in Psychological Science, 2020, 28(5): 746-751. |