Advances in Psychological Science ›› 2025, Vol. 33 ›› Issue (6): 916-932.doi: 10.3724/SP.J.1042.2025.0916
• Academic Papers of the 27th Annual Meeting of the China Association for Science and Technology •
XIE Yubin 1,2, ZHOU Ronggang 1,3,4
Received: 2024-10-12
Online: 2025-06-15
Published: 2025-04-09
XIE Yubin, ZHOU Ronggang. The bidirectional trust in the context of new human-machine relationships[J]. Advances in Psychological Science, 2025, 33(6): 916-932.
| Item | Details |
|---|---|
| Search period | 1952-2024 |
| Data sources | (1) Web of Science Core Collection, SCI & SSCI databases; (2) CNKI (China National Knowledge Infrastructure) |
| Keywords | "Trust", "Interpersonal Trust", "Trust in Team", "Human-Machine Trust", "Human-Machine Mutual Trust", "AI Trust", "Robot Trust", "Trust in Algorithm", "Propensity to Trust", "Trust in Automation" / "Automation Trust", "Trust Modelling" |
| Screening criteria | (1) Research areas restricted to psychology, management, sociology, and computer science. (2) Research articles, conference papers, and review articles written in English or Chinese; books and book chapters excluded. (3) Subject areas limited; irrelevant and duplicate records removed. (4) Key journals in human-machine trust research further screened; titles and abstracts read one by one to collect and select the important cited studies. |
| Result | 134 relevant publications |
| Study | Method | Modeling data | Trust dynamics | Application context |
|---|---|---|---|---|
| Sadrfaridpour et al. (2016) | Machine learning | Initial trust, robot performance, human performance | Dynamic model | Human-robot collaboration |
| Bonneviot et al. (2021) | Machine learning | Pedestrian behavior, emotional state | Dynamic model | Pedestrian-vehicle interaction |
| Li & Lee (2022) | Utility model | Situation structure, strategic behavior, goals | Dynamic model | General human-machine interaction |
| Kamaraj et al. | Machine learning | Driving style, speed, throttle/brake control data | Static model | Automated driving |
| Kamaraj et al. | Machine learning | Personal traits: driving style, risk seeking; behavioral data: throttle and brake control | Static model | Automated driving |
| Hu et al. (2024) | Machine learning | Driver personality, prior experience, perception of the system | Static model | Automated driving |
| Li et al. | Utility model | Capability, trust beliefs | Dynamic model | General human-machine interaction |
| Li et al. (2024a) | Machine learning | Trust-related words, vocal tone, loudness | Static model | Voice dialogue |
| Yi et al. (2024) | Machine learning | Skin conductance, ECG, takeover behavior | Dynamic model | Automated driving |
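The "trust dynamics" column separates static trust estimates, fitted once from traits or behavior, from dynamic models that update trust as interaction unfolds. As a minimal, hypothetical sketch of the dynamic idea (not the implementation of any study in the table), a scalar trust state can be nudged toward each new observation of machine performance:

```python
def update_trust(trust, performance, alpha=0.2):
    """One step of a simple dynamic trust model: the trust estimate
    moves toward the observed performance score at a rate set by
    the learning weight alpha (all values assumed in [0, 1])."""
    return trust + alpha * (performance - trust)

def simulate(initial_trust, performances, alpha=0.2):
    """Trace the trust trajectory over a sequence of observed
    machine performance scores."""
    trajectory = [initial_trust]
    for p in performances:
        trajectory.append(update_trust(trajectory[-1], p, alpha))
    return trajectory

# Three reliable episodes (1.0) followed by one failure (0.0):
# trust rises gradually, then drops sharply after the error.
traj = simulate(0.5, [1.0, 1.0, 1.0, 0.0])
```

The machine-learning approaches in the table replace this hand-set update rule with parameters learned from behavioral or physiological data, but the recursive state-update structure is the same.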
| [1] | Chen, L. (2020). Labor order under "digital control": A study of the labor control of food-delivery riders. Sociological Studies, 35(6), 113-135+244. (in Chinese) |
| [2] | Gao, Z., Li, W., Liang, J., Pan, H., Xu, W., & Shen, M. (2021). Human-machine trust in automated vehicles. Advances in Psychological Science, 29(12), 2172-2183. https://doi.org/10.3724/SP.J.1042.2021.02172 (in Chinese) |
| [3] | Luo, Y., Zhu, G., Qian, W., Wu, Y., Huang, J., & Yang, Z. (2023). Algorithm aversion in the era of artificial intelligence: Research framework and future prospects. Management World, 39(10), 205-227. (in Chinese) |
| [4] | Qi, Y., Chen, J., Qin, S., & Du, F. (2024). Human-AI mutual trust in the era of artificial general intelligence. Advances in Psychological Science, 32(12), 2124-2136. https://doi.org/10.3724/SP.J.1042.2024.02124 (in Chinese) |
| [5] | Xu, W., Gao, Z., & Ge, L. (2024). New research paradigm orientations and priorities of human factors science in the intelligence era. Acta Psychologica Sinica, 56(3), 363-382. https://doi.org/10.3724/SP.J.1041.2024.00363 (in Chinese) |
| [6] | Xu, W., & Ge, L. (2020). Engineering psychology in the era of artificial intelligence. Advances in Psychological Science, 28(9), 1409-1425. https://doi.org/10.3724/SP.J.1042.2020.01409 (in Chinese) |
| [7] | Adnan N., Nordin S. M., bin Bahruddin M. A., & Ali M. (2018). How trust can drive forward the user acceptance to the technology? In-vehicle technology for autonomous vehicle. Transportation Research Part A: Policy and Practice, 118, 819-836. |
| [8] | Agreste S., De Meo P., Ferrara E., Piccolo S., & Provetti A. (2015). Trust networks: Topology, dynamics, and measurements. IEEE Internet Computing, 19(6), 26-35. |
| [9] | Alhaji B., Büttner S., Sanjay Kumar S., & Prilla M. (2024). Trust dynamics in human interaction with an industrial robot. Behaviour & Information Technology, 44(2), 266-288. https://doi.org/10.1080/0144929X.2024.2316284 |
| [10] | Allen R., & Choudhury P. (2022). Algorithm-augmented work and domain experience: The countervailing forces of ability and aversion. Organization Science, 33(1), 149-169. |
| [11] | Alsaid A., Li M., Chiou E. K., & Lee J. D. (2023). Measuring trust: A text analysis approach to compare, contrast, and select trust questionnaires. Frontiers in Psychology, 14, 1192020. |
| [12] | Ambady N., & Weisbuch M. (2010). Nonverbal behavior. In Fiske S. T., Gilbert D. T., & Lindzey G. (Eds.), Handbook of social psychology (pp. 464-497). Hoboken, NJ: John Wiley & Sons. https://doi.org/10.1002/9780470561119.socpsy001013 |
| [13] | Avetisyan L., Ayoub J., Yang X. J., & Zhou F. (2024). Building contextualized trust profiles in conditionally automated driving. IEEE Transactions on Human-Machine Systems, 54(6), 658-667. |
| [14] | Azevedo-Sa H., Yang X. J., Robert L. P., & Tilbury D. M. (2021). A unified bi-directional model for natural and artificial trust in human-robot collaboration. IEEE Robotics and Automation Letters, 6(3), 5913-5920. |
| [15] | Babashahi L., Barbosa C. E., Lima Y., Lyra A., Salazar H., Argôlo M.,... Souza J. M. D. (2024). AI in the workplace: A systematic review of skill transformation in the industry. Administrative Sciences, 14(6), 127. |
| [16] | Baer M. D., Dhensa-Kahlon R. K., Colquitt J. A., Rodell J. B., Outlaw R., & Long D. M. (2015). Uneasy lies the head that bears the trust: The effects of feeling trusted on emotional exhaustion. Academy of Management Journal, 58(6), 1637-1657. |
| [17] | Baer M. D., Frank E. L., Matta F. K., Luciano M. M., & Wellman N. (2021). Undertrusted, overtrusted, or just right? The fairness of (in) congruence between trust wanted and trust received. Academy of Management Journal, 64(1), 180-206. |
| [18] | Basu C., & Singhal M. (2016, March). Trust dynamics in human autonomous vehicle interaction: A review of trust models. In 2016 AAAI Spring Symposium Series-Technical Report (pp. 85-91). Palo Alto, CA: AAAI Press. |
| [19] | Berg J., Dickhaut J., & McCabe K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10(1), 122-142. |
| [20] | Bonneviot F., Coeugnet S., & Brangier E. (2021). Pedestrians-automated vehicles interaction: Toward a specific trust model. In Black, N. L., Neumann, W. P., & Noy, I. (Eds.), Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021) (Vol. 221, pp. 568-574). Springer, Cham. |
| [21] | Caldwell S., Sweetser P., O'donnell N., Knight M. J., Aitchison M., Gedeon T.,... Conroy D. (2022). An agile new research framework for hybrid human-AI teaming: Trust, transparency, and transferability. ACM Transactions on Interactive Intelligent System, 12(3), 1-36. |
| [22] | Castelo N., Bos M. W., & Lehmann D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809-825. https://doi.org/10.1177/0022243719851788 |
| [23] | Chancey E. T., Bliss J. P., Yamani Y., & Handley H. A. (2017). Trust and the compliance-reliance paradigm: The effects of risk, error bias, and reliability on trust and dependence. Human Factors, 59(3), 333-345. https://doi.org/10.1177/0018720816682648 |
| [24] | Choi J. K., & Ji Y. G. (2015). Investigating the importance of trust on adopting an autonomous vehicle. International Journal of Human-Computer Interaction, 31(10), 692-702. |
| [25] | Chung H., Holder T., Shah J., & Yang X. J. (2024). Developing a team classification scheme for human-agent teaming. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 68(1), 1394-1399. https://doi.org/10.1177/10711813241260387 |
| [26] | Cuzzolin F., Morelli A., Cirstea B., & Sahakian B. J. (2020). Knowing me, knowing you: Theory of mind in AI. Psychological Medicine, 50(7), 1057-1061. https://doi.org/10.1017/S0033291720000835 |
| [27] | de Visser E. J., Pak R., & Shaw T. H. (2018). From 'automation' to 'autonomy': The importance of trust repair in human-machine interaction. Ergonomics, 61(10), 1409-1427. https://doi.org/10.1080/00140139.2018.1457725 |
| [28] | Dietvorst B. J., Simmons J. P., & Massey C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126. |
| [29] | Dietvorst B. J., Simmons J. P., & Massey C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155-1170. |
| [30] | Ding Y., & Liang Z. (2018). Structural optimization and measurement of Chinese employees' perception of being trusted. In Striełkowski, W., Black, J. M., Butterfield, S. A., Chang, C. -C., Cheng, J., Dumanig, F. P., Al-Mabuk, R., Urban, M., & Webb, S. (Eds.), Proceedings of the 2018 2nd International Conference on Management, Education and Social Science (ICMESS 2018) (pp. 1392-1395). Atlantis Press. |
| [31] | Dong Y., Hu Z., Uchimura K., & Murayama N. (2010). Driver inattention monitoring system for intelligent vehicles: A review. IEEE Transactions on Intelligent Transportation Systems, 12(2), 596-614. |
| [32] | Earle T. C., & Siegrist M. (2006). Morality information, performance information, and the distinction between trust and confidence. Journal of Applied Social Psychology, 36(2), 383-416. |
| [33] | Ebnali M., Hulme K., Ebnali-Heidari A., & Mazloumi A. (2019). How does training effect users' attitudes and skills needed for highly automated driving. Transportation Research Part F: Traffic Psychology and Behaviour, 66, 184-195. https://doi.org/10.1016/j.trf.2019.09.001 |
| [34] | Feng F., Bao S., Sayer J., & LeBlanc D. (2016). Spectral power analysis of drivers' gas pedal control during steady-state car-following on freeways. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 60(1), 729-733. |
| [35] | Fernández A., Usamentiaga R., Carús J. L., & Casado R. (2016). Driver distraction using visual-based sensors and algorithms. Sensors, 16(11), 1805. |
| [36] | Garcia D., Kreutzer C., Badillo-Urquiola K., & Mouloua M. (2015). Measuring trust of autonomous vehicles: A development and validation study. In Stephanidis, C. (Ed.), HCI International 2015-Posters' Extended Abstracts (Vol. 529, pp. 610-615). Springer, Cham. |
| [37] | Gebru B., Zeleke L., Blankson D., Nabil M., Nateghi S., Homaifar A., & Tunstel E. (2022). A review on human- machine trust evaluation: Human-centric and machine- centric perspectives. IEEE Transactions on Human-Machine Systems, 52(5), 952-962. |
| [38] | Georganta E., & Ulfert A. S. (2024). Would you trust an AI team member? Team trust in human-AI teams. Journal of Occupational and Organizational Psychology, 97, 1212-1241. |
| [39] | Gillespie N. (2012). Measuring trust in organizational contexts: An overview of survey-based measures. In Lyon, F., Möllering, G., & Saunders, M. (Eds.), Handbook of research methods on trust (pp. 175-188). Edward Elgar Publishing. |
| [40] | Gunning D., Vorm E., Wang Y., & Turek M. (2021). DARPA's explainable AI (XAI) program: A retrospective. Applied AI Letters, 2(4), e61. https://doi.org/10.1002/ail2.61 |
| [41] | Hancock P. A., Billings D. R., Schaefer K. E., Chen J. Y., De Visser E. J., & Parasuraman R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517-527. |
| [42] | He X., Nie X., Zhou R., Yang J., & Wu R. (2023). The risk-taking behavioural intentions of pilots in adverse weather conditions: An application of the theory of planned behaviour. Ergonomics, 66(8), 1043-1056. |
| [43] | Hieronymi P. (2008). The reasons of trust. Australasian Journal of Philosophy, 86(2), 213-236. |
| [44] | Hoff K. A., & Bashir M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407-434. https://doi.org/10.1177/0018720814547570 |
| [45] | Hoffman R. R., Johnson M., Bradshaw J. M., & Underbrink A. (2013). Trust in automation. IEEE Intelligent Systems, 28(1), 84-88. |
| [46] | Holthausen B. E., Wintersberger P., Walker B. N., & Riener A. (2020). Situational Trust Scale for Automated Driving (STS-AD): Development and initial validation. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2020) (pp. 40-47). New York, NY: ACM. https://doi.org/10.1145/3409120.3410637 |
| [47] | Hu C., Huang S., Zhou Y., Ge S., Yi B., Zhang X., & Wu X. (2024). Dynamic and quantitative trust modeling and real-time estimation in human-machine co-driving process. Transportation Research Part F: Traffic Psychology and Behaviour, 106, 306-327. |
| [48] | Inga J., Ruess M., Robens J. H., Nelius T., Rothfuß S., Kille S.,... Kiesel A. (2023). Human-machine symbiosis: A multivariate perspective for physically coupled human- machine systems. International Journal of Human-Computer Studies, 170, 102926. |
| [49] | Jarrahi M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577-586. |
| [50] | Jian J. Y., Bisantz A. M., & Drury C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53-71. |
| [51] | Johnson T., & Obradovich N. (2022). Measuring an artificial intelligence agent's trust in humans using machine incentives. arXiv preprint. https://doi.org/10.48550/arXiv.2212.13371 |
| [52] | Jorge C. C., Jonker C. M., & Tielman M. L. (2024). How should an AI trust its human teammates? Exploring possible cues of artificial trust. ACM Transactions on Interactive Intelligent Systems, 14(1), 1-26. |
| [53] | Jorge C. C., Tielman M. L., & Jonker C. M. (2022a). Artificial trust as a tool in human-AI teams. In 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 1155-1157). IEEE. https://doi.org/10.1109/HRI53351.2022.9889652 |
| [54] | Jorge C. C., Tielman M. L., & Jonker C. M. (2022b). Assessing artificial trust in human-agent teams: A conceptual model. In Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents (IVA '22) (Article 24, pp. 1-3). Association for Computing Machinery. https://doi.org/10.1145/3514197.3549696 |
| [55] | Kamaraj A. V., Lee J., Domeyer J. E., Liu S. Y., & Lee J. D. (2024). Comparing subjective similarity of automated driving styles to objective distance-based similarity. Human Factors, 66(5), 1545-1563. |
| [56] | Kamaraj A. V., Lee J., Parker J. I., Domeyer J. E., Liu S. Y., & Lee J. D. (2023). Bimodal trust: High and low trust in vehicle automation influence response to automation errors. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 67(1), 1144-1149. https://doi.org/10.1177/21695067231196244 |
| [57] | Kaplan A. D., Kessler T. T., Brill J. C., & Hancock P. A. (2023). Trust in artificial intelligence: Meta-analytic findings. Human Factors, 65(2), 337-359. |
| [58] | Kaur D., Uslu S., Durresi A., Mohler G., & Carter J. G. (2020). Trust-based human-machine collaboration mechanism for predicting crimes. In Barolli, L., Amato, F., Moscato, F., Enokido, T., & Takizawa, M. (Eds.), Advanced information networking and applications. AINA 2020 (Vol. 1151). Springer, Cham. |
| [59] | Khastgir S., Birrell S., Dhadyalla G., & Jennings P. (2017). Calibrating trust to increase the use of automated systems in a vehicle. In Stanton, N., Landry, S., Di Bucchianico, G., & Vallicelli, A. (Eds.), Advances in human aspects of transportation (Vol. 484). Springer, Cham. |
| [60] | Kintz J. R., Banerjee N. T., Zhang J. Y., Anderson A. P., & Clark T. K. (2023). Estimation of subjectively reported trust, mental workload, and situation awareness using unobtrusive measures. Human Factors, 65(6), 1142-1160. |
| [61] | Kobayashi G., Quilici-Gonzalez M. E., Broens M. C., & Quilici-Gonzalez J. A. (2016). The ethical impact of the internet of things in social relationships: Technological mediation and mutual trust. IEEE Consumer Electronics Magazine, 5(3), 85-89. |
| [62] | Kohn S. C., De Visser E. J., Wiese E., Lee Y. C., & Shaw T. H. (2021). Measurement of trust in automation: A narrative review and reference guide. Frontiers in Psychology, 12, 604977. https://doi.org/10.3389/fpsyg.2021.604977 |
| [63] | Kramer R. M. (1999). Trust and distrust in organizations: Emerging perspectives, enduring questions. Annual Review of Psychology, 50(1), 569-598. |
| [64] | Lau D. C., & Lam L. W. (2008). Effects of trusting and being trusted on team citizenship behaviours in chain stores. Asian Journal of Social Psychology, 11(2), 141-149. |
| [65] | Lau D. C., Lam L. W., & Wen S. S. (2014). Examining the effects of feeling trusted by supervisors in the workplace: A self‐evaluative perspective. Journal of Organizational Behavior, 35(1), 112-127. |
| [66] | Lee J. D., Liu S. Y., Domeyer J., & DinparastDjadid A. (2021). Assessing drivers' trust of automated vehicle driving styles with a two-part mixed model of intervention tendency and magnitude. Human Factors, 63(2), 197-209. |
| [67] | Lee J. D., & See K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. https://doi.org/10.1518/hfes.46.1.50_30392 |
| [68] | Lee J. J., Knox B., & Breazeal C. (2013, March). Modeling the dynamics of nonverbal behavior on interpersonal trust for human-robot interactions. In Trust and autonomous systems: Papers from the 2013 AAAI Spring Symposium (pp. 46-47), AAAI, San Francisco, USA. |
| [69] | Li M., & Lee J. D. (2022). Modeling goal alignment in human-AI teaming: A dynamic game theory approach. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 66(1), 1538-1542. https://doi.org/10.1177/1071181322661047 |
| [70] | Li M., Erickson I. M., Cross E. V., & Lee J. D. (2024a). It's not only what you say, but also how you say it: Machine learning approach to estimate trust from conversation. Human Factors, 66(6), 1724-1741. |
| [71] | Li M., Kamaraj A. V., & Lee J. D. (2024b). Modeling trust dimensions and dynamics in human-agent conversation: A trajectory epistemic network analysis approach. International Journal of Human-Computer Interaction, 40(14), 3571-3582. |
| [72] | Lu Z., Happee R., Cabrall C. D., Kyriakidis M., & De Winter J. C. (2016). Human factors of transitions in automated driving: A general framework and literature survey. Transportation Research Part F: Traffic Psychology and Behaviour, 43, 183-198. |
| [73] | Lyons J. B., Wynne K. T., Mahoney S., & Roebke M. A. (2019). Trust and human-machine teaming: A qualitative study. In Lawless, W., Mittu, R., Sofge, D., Moskowitz, I. S., & Russell, S. (Eds.), Artificial intelligence for the Internet of everything (pp. 101-116). Academic Press. |
| [74] | Madsen M., & Gregor S. (2000, December). Measuring human-computer trust. In Proceedings of the 11th Australasian Conference on Information Systems (Vol. 53, pp. 6-8), Brisbane, Australia. |
| [75] | Mahmud H., Islam A. N., Ahmed S. I., & Smolander K. (2022). What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change, 175, 121390. |
| [76] | Mathavara K., & Ramachandran G. (2022). Role of human factors in preventing aviation accidents: An insight. In Z. A. Ali & D. Cvetković (Eds.), Aeronautics-New advances (pp. 1-26). IntechOpen. https://doi.org/10.5772/intechopen.106899 |
| [77] | Mayer R. C., Davis J. H., & Schoorman F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734. |
| [78] | McNeese N. J., Demir M., Cooke N. J., & Myers C. (2018). Teaming with a synthetic teammate: Insights into human-autonomy teaming. Human Factors, 60(2), 262-273. https://doi.org/10.1177/0018720817743223 |
| [79] | Merritt S. M. (2011). Affective processes in human- automation interactions. Human Factors, 53(4), 356-370. |
| [80] | Merritt S. M., Heimbaugh H., LaChapell J., & Lee D. (2013). I trust it, but I don't know why: Effects of implicit attitudes toward automation on trust in an automated system. Human Factors, 55(3), 520-534. |
| [81] | Merritt S. M., & Ilgen D. R. (2008). Not all trust is created equal: Dispositional and history-based trust in human- automation interactions. Human Factors, 50(2), 194-210. |
| [82] | Merritt S. M., Lee D., Unnerstall J. L., & Huber K. (2015). Are well-calibrated users effective users? Associations between calibration of trust and performance on an automation-aided task. Human Factors, 57(1), 34-47. |
| [83] | Michelaraki E., Katrakazas C., Kaiser S., Brijs T., & Yannis G. (2023). Real-time monitoring of driver distraction: State-of-the-art and future insights. Accident Analysis & Prevention, 192, 107241. |
| [84] | Möhlmann M., Zalmanson L., Henfridsson O., & Gregory R. W. (2021). Algorithmic management of work on online labor platforms: When matching meets control. MIS Quarterly, 45(4), 1999-2022. |
| [85] | Montag C., Kraus J., Baumann M., & Rozgonjuk D. (2023). The propensity to trust in (automated) technology mediates the links between technology self-efficacy and fear and acceptance of artificial intelligence. Computers in Human Behavior Reports, 11, 100315. |
| [86] | Mueller F. F., Lopes P., Strohmeier P., Ju W., Seim C., Weigel M.,... Maes P. (2020). Next steps for human-computer integration. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20) (pp. 1-15). Association for Computing Machinery. |
| [87] | Muir B. M. (1987). Trust between humans and machines, and the design of decision aids. International Journal of Man-Machine Studies, 27(5-6), 527-539. |
| [88] | Murphy R. R. (2024). What will robots think of us? Science Robotics, 9(86), eadn6096. https://doi.org/10.1126/scirobotics.adn6096 |
| [89] | Murphy R., & Woods D. D. (2009). Beyond Asimov: The three laws of responsible robotics. IEEE Intelligent Systems, 24(4), 14-20. |
| [90] | Nies H. (2009). Key elements in effective partnership working. In Glasby, J., & Dickinson, H. (Eds.), International perspectives on health and social care: Partnership working in action (pp. 56-67). Wiley- Blackwell. |
| [91] | Olson D. M., & Xu Y. (2021). Building trust over time in human-agent relationships. In Proceedings of the 9th International Conference on Human-Agent Interaction (pp. 193-201). Association for Computing Machinery. https://doi.org/10.1145/3472307.3484178 |
| [92] | Pakdamanian E., Sheng S., Baee S., Heo S., Kraus S., & Feng L. (2021). DeepTake: Prediction of driver takeover behavior using multimodal data. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21) (Article 103, pp. 1-14). Association for Computing Machinery. https://doi.org/10.1145/3411764.3445563 |
| [93] | Parnell K. J., Wynne R. A., Griffin T. G., Plant K. L., & Stanton N. A. (2021). Generating design requirements for flight deck applications: Applying the perceptual cycle model to engine failures on take-off. International Journal of Human-Computer Interaction, 37(7), 611-629. |
| [94] | Payre W., Cestac J., Dang N. T., Vienne F., & Delhomme P. (2017). Impact of training and in-vehicle task performance on manual control recovery in an automated car. Transportation Research Part F: Traffic Psychology and Behaviour, 46, 216-227. |
| [95] | Pitardi V., & Marriott H. R. (2021). Alexa, she's not human but… Unveiling the drivers of consumers' trust in voice‐based artificial intelligence. Psychology & Marketing, 38(4), 626-642. |
| [96] | Prahl A., Leung R. K. H., & Chua A. N. S. (2022). Fight for flight: The narratives of human versus machine following two aviation tragedies. Human-Machine Communication, 4, 27-44. |
| [97] | Qu Y., Hu H., Liu J., Zhang Z., Li Y., & Ge X. (2023). Driver state monitoring technology for conditionally automated vehicles: Review and future prospects. IEEE Transactions on Instrumentation and Measurement, 72, Article 3000920. |
| [98] | Regli C., & Annighoefer B. (2022). An anthropomorphic approach to establish an additional layer of trustworthiness of an AI pilot. In Software Engineering 2022 Workshops (pp.160-180). Gesellschaft für Informatik e.V. https://doi.org/10.18420/se2022-ws-17 |
| [99] | Reich T., Kaju A., & Maglio S. J. (2023). How to overcome algorithm aversion: Learning from mistakes. Journal of Consumer Psychology, 33(2), 285-302. |
| [100] | Rotter J. B. (1967). A new scale for the measurement of interpersonal trust. Journal of Personality, 35(4), 651-665. https://doi.org/10.1111/j.1467-6494.1967.tb01454.x |
| [101] | Sadrfaridpour B., Saeidi H., Burke J., Madathil K., & Wang Y. (2016). Modeling and control of trust in human-robot collaborative manufacturing. In Mittu, R., Sofge, D., Wagner, A., & Lawless, W. (Eds.), Robust intelligence and trust in autonomous systems (pp. 115-141). Springer. |
| [102] | Salamon S. D., & Robinson S. L. (2008). Trust that binds: The impact of collective felt trust on organizational performance. Journal of Applied Psychology, 93(3), 593-601. https://doi.org/10.1037/0021-9010.93.3.593 |
| [103] | Schaefer K. E., Chen J. Y., Szalma J. L., & Hancock P. A. (2016). A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors, 58(3), 377-400. https://doi.org/10.1177/0018720816634228 |
| [104] | Seet M., Harvy J., Bose R., Dragomir A., Bezerianos A., & Thakor N. (2020). Differential impact of autonomous vehicle malfunctions on human trust. IEEE Transactions on Intelligent Transportation Systems, 23(1), 548-557. |
| [105] | Shariff A., Bonnefon J. F., & Rahwan I. (2021). How safe is safe enough? Psychological mechanisms underlying extreme safety demands for self-driving cars. Transportation Research Part C: Emerging Technologies, 126, 103069. |
| [106] | Shi Z., O'Connell A., Li Z., Liu S., Ayissi J., Hoffman G.,... Matarić M. J. (2024). Build your own robot friend: An open-source learning module for accessible and engaging AI education. In Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, and Fourteenth Symposium on Educational Advances in Artificial Intelligence (AAAI'24/ IAAI'24/EAAI'24) (Article 2636, pp. 1-9). AAAI Press. https://doi.org/10.1609/aaai.v38i21.30359 |
| [107] | Simons T., Leroy H., & Nishii L. (2022). Revisiting behavioral integrity: Progress and new directions after 20 years. Annual Review of Organizational Psychology and Organizational Behavior, 9(1), 365-389. |
| [108] | Simpson J. A. (2007). Psychological foundations of trust. Current Directions in Psychological Science, 16(5), 264-268. |
| [109] | Strauch C., Mühl K., Patro K., Grabmaier C., Reithinger S., Baumann M., & Huckauf A. (2019). Real autonomous driving from a passenger's perspective: Two experimental investigations using gaze behaviour and trust ratings in field and simulator. Transportation Research Part F: Traffic Psychology and Behavior, 66, 15-28. |
| [110] | Sycara K., & Lewis M. (2004). Integrating intelligent agents into human teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 46, 413-417. |
| [111] | Techer F., Ojeda L., Barat D., Marteau J. Y., Rampillon F., Feron S., & Dogan E. (2019). Anger and highly automated driving in urban areas: The role of time pressure. Transportation Research Part F: Traffic Psychology and Behaviour, 64, 353-360. |
| [112] | Uggirala A., Gramopadhye A. K., Melloy B. J., & Toler J. E. (2004). Measurement of trust in complex and dynamic systems using a quantitative approach. International Journal of Industrial Ergonomics, 34(3), 175-186. |
| [113] | Ulfert A. S., Georganta E., Centeio Jorge C., Mehrotra S., & Tielman M. (2024). Shaping a multidisciplinary understanding of team trust in human-AI teams: A theoretical framework. European Journal of Work and Organizational Psychology, 33(2), 158-171. |
| [114] | Walliser J. C., de Visser E. J., Wiese E., & Shaw T. H. (2019). Team structure and team building improve human- machine teaming with autonomous agents. Journal of Cognitive Engineering and Decision Making, 13(4), 258-278. |
| [115] | Walmsley S., & Gilbey A. (2017). Debiasing visual pilots' weather-related decision making. Applied Ergonomics, 65, 200-208. |
| [116] | Wang J., & Moulden A. (2021). AI Trust Score: A user-centered approach to building, designing, and measuring the success of intelligent workplace features.In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (Article 54, pp.1-7). Association for Computing Machinery. https://doi.org/10.1145/3411763.3443452 |
| [117] | Waytz A., Heafner J., & Epley N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113-117. |
| [118] | Wiggins M. W., Azar D., Hawken J., Loveday T., & Newman D. (2014). Cue-utilisation typologies and pilots' pre-flight and in-flight weather decision-making. Safety Science, 65, 118-124. |
| [119] | Wong J. H., Chiou E. K., Gutzwiller R. S., Cook M. B., & Fallon C. K. (2024). Human-artificial intelligence teaming for the U.S. Navy: Developing a holistic research roadmap. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 68(1), 380-385. https://doi.org/10.1177/10711813241260352 |
| [120] | Xie Y., Liu Y., Zhou R., Zhi X., & Chan A. H. (2024). Wait or Pass? Promoting intersection's cooperation via identifying vehicle's social behavior. Accident Analysis & Prevention, 206, 107724. |
| [121] | Xie Y., Zhou R., Chan A. H. S., Xiong D.(2025). Do you trust me? Measuring people's perception of being trusted by AI in a human-agent team. International Journal of Human-Computer Interaction. https://doi.org/10.1080/10447318.2025.2468783 |
| [122] | Xie Y., Zhou R., Chan A. H. S., Jin M., & Qu M. (2023). Motivation to interaction media: The impact of automation trust and self-determination theory on intention to use the new interaction technology in autonomous vehicles. Frontiers in Psychology, 14, 1078438. |
| [123] | Yang C., Zhu Y., & Chen Y. (2021). A review of human- machine cooperation in the robotics domain. IEEE Transactions on Human-Machine Systems, 52(1), 12-25. |
| [124] | Yi B., Cao H., Song X., Wang J., Zhao S., Guo W., & Cao D. (2024). How can the trust-change direction be measured and identified during takeover transitions in conditionally automated driving? Using physiological responses and takeover-related factors. Human Factors, 66(4), 1276-1301. |
| [125] | Yu B., Bao S., Zhang Y., Sullivan J., & Flannagan M. (2021). Measurement and prediction of driver trust in automated vehicle technologies: An application of hand position transition probability matrix. Transportation Research Part C: Emerging Technologies, 124, 102957. |
| [126] | Yu K., Berkovsky S., Conway D., Taib R., Zhou J., & Chen F. (2018). Do I trust a machine? Differences in user trust based on system performance. In J. Zhou & F. Chen (Eds.), Human and machine learning (pp. 161-172). Springer. |
| [127] | Yuan L., Gao X., Zheng Z., Edmonds M., Wu Y. N., Rossano F.,... Zhu S. C. (2022). In situ bidirectional human-robot value alignment. Science Robotics, 7(68), eabm4183. https://doi.org/10.1126/scirobotics.abm4183 |
| [128] | Zhang T., Tao D., Qu X., Zhang X., Lin R., & Zhang W. (2019). The roles of initial trust and perceived risk in public's acceptance of automated vehicles. Transportation Research Part C: Emerging Technologies, 98, 207-220. |
| [129] | Zhang T., Yang J., Chen M., Li Z., Zang J., & Qu X. (2024). EEG-based assessment of driver trust in automated vehicles. Expert Systems with Applications, 246, 123196. |
| [130] | Zhou L., Paul S., Demirkan H., Yuan L., Spohrer J., Zhou M., & Basu J. (2021). Intelligence augmentation: Towards building human-machine symbiotic relationship. AIS Transactions on Human-Computer Interaction, 13(2), 243-264. |