心理科学进展 (Advances in Psychological Science), 2024, Vol. 32, Issue (3): 527-542. doi: 10.3724/SP.J.1042.2024.00527
Received: 2023-06-23
Online: 2024-03-15
Published: 2024-01-19
Corresponding author: LI Ye (李晔), E-mail: liye@ccnu.edu.cn
Abstract:
Trust is the foundation of successful human-robot cooperation. However, individuals do not always hold an appropriate level of trust in human-robot interaction; trust biases, namely overtrust and undertrust, can arise. Because trust biases hinder human-robot cooperation, trust needs to be calibrated. Trust calibration is typically achieved through two pathways: trust dampening and trust promoting. Trust dampening focuses on lowering an individual's excessively high trust in a robot, whereas trust promoting focuses on raising an individual's unduly low trust in a robot. Future research could further optimize the measurement methods used to evaluate calibration effects, uncover the cognitive mechanisms of individual change during and after trust calibration, and explore both the boundary conditions of trust calibration and personalized, fine-grained calibration strategies, so as to facilitate human-robot collaboration.
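To make the dual-pathway idea concrete, the sketch below is a minimal, hypothetical illustration (not a method from this article): the class name `TrustCalibrator`, its threshold, and the intervention labels are all assumptions. It compares a behavioral trust measure (the user's reliance rate) with the robot's actual reliability, flags overtrust or undertrust, and selects a dampening or promoting intervention of the kind summarized in Table 1 below.

```python
from dataclasses import dataclass


@dataclass
class TrustCalibrator:
    """Hypothetical sketch of dual-pathway trust calibration."""

    tolerance: float = 0.1  # acceptable gap between reliance and reliability

    def diagnose(self, reliance_rate: float, reliability: float) -> str:
        """Classify trust from behavior.

        reliance_rate: fraction of trials where the user followed the robot.
        reliability: fraction of trials where the robot was actually correct.
        """
        gap = reliance_rate - reliability
        if gap > self.tolerance:
            return "overtrust"   # relying more than performance warrants
        if gap < -self.tolerance:
            return "undertrust"  # relying less than performance warrants
        return "calibrated"

    def intervene(self, state: str) -> str:
        """Map the diagnosed trust state to one of the two pathways."""
        if state == "overtrust":
            # Trust dampening, e.g., presenting uncertainty information
            return "dampen: present uncertainty / explanations"
        if state == "undertrust":
            # Trust promoting, e.g., apology or anthropomorphic cues
            return "promote: apologize / add anthropomorphic cues"
        return "none: trust is well calibrated"


if __name__ == "__main__":
    calibrator = TrustCalibrator()
    state = calibrator.diagnose(reliance_rate=0.92, reliability=0.70)
    print(state, "->", calibrator.intervene(state))
    # Output: overtrust -> dampen: present uncertainty / explanations
```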
黄心语, 李晔. (2024). 人机信任校准的双途径:信任抑制与信任提升. 心理科学进展, 32(3), 527-542.
HUANG Xinyu, LI Ye. (2024). Trust dampening and trust promoting: A dual-pathway of trust calibration in human-robot interaction. Advances in Psychological Science, 32(3), 527-542.
Table 1. Summary of selected international studies on human-robot trust calibration

Researcher(s) | Calibration pathway | Robot type | Trust measure | Trust stage studied | Calibration strategy
---|---|---|---|---|---
Buçinca et al., 2021 | Trust dampening | Virtual AI | Unidimensional trust measure + behavioral measure | After the AI gives advice | Explanation + cognitive forcing training
Beller et al., 2013 | Trust dampening | Automated driving system | Unidimensional trust measure + behavioral measure | After each interaction round | Presenting uncertainty information
Wang et al., 2018 | Trust dampening | Mechanical robot / animal-like robot | Adapted interpersonal trust scale | After each interaction round | Explanation + embodiment
Lyons et al., 2023 | Trust promoting | Mechanical robot | Adapted interpersonal trust scale | Before and after the robot's unexpected behavior | Acknowledging its own fault + explaining why the unexpected behavior occurred
Kim & Song, 2021 | Trust promoting | Virtual intelligent agent | Adapted human-machine trust questionnaire + behavioral trust measure (compliance) | After each interaction round | Anthropomorphism + apology
Sebo et al., 2019 | Trust promoting | Humanoid robot NAO | Adapted interpersonal trust questionnaire + behavioral measure | After the human-robot interaction | Denial + apology
[1] 高在峰, 李文敏, 梁佳文, 潘晗希, 许为, 沈模卫. (2021). 自动驾驶车中的人机信任 [Human-automation trust in self-driving vehicles]. 心理科学进展, 29(12), 2172-2183. doi: 10.3724/SP.J.1042.2021.02172
[2] 许丽颖, 喻丰, 邬家骅, 韩婷婷, 赵靓. (2017). 拟人化: 从“它”到“他” [Anthropomorphism: From “it” to “him”]. 心理科学进展, 25(11), 1942-1954. doi: 10.3724/SP.J.1042.2017.01942
[3] 许丽颖, 喻丰, 周爱钦, 杨沈龙, 丁晓军. (2019). 萌: 感知与后效 [Cuteness: Perception and consequences]. 心理科学进展, 27(4), 689-699. doi: 10.3724/SP.J.1042.2019.00689
[4] 严瑜, 吴霞. (2016). 从信任违背到信任修复: 道德情绪的作用机制 [From trust violation to trust repair: The role of moral emotions]. 心理科学进展, 24(4), 633-642. doi: 10.3724/SP.J.1042.2016.00633
[5] 杨正宇, 王重鸣, 谢小云. (2003). 团队共享心理模型研究新进展 [New advances in research on team shared mental models]. 人类工效学, 9(3), 34-37.
[6] 乐国安, 韩振华. (2009). 信任的心理学研究与展望 [Psychological research on trust: Status and prospects]. 西南大学学报(社会科学版), 35(2), 1-5.
[7] Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160. doi: 10.1109/ACCESS.2018.2870052
[8] Alarcon, G. M., Gibson, A. M., & Jessup, S. A. (2020, September). Trust repair in performance, process, and purpose factors of human-robot trust. In 2020 IEEE International Conference on Human-Machine Systems (ICHMS) (pp. 1-6). Rome, Italy.
[9] Ali, A., Tilbury, D. M., & Robert, L. P., Jr. (2022). Considerations for task allocation in human-robot teams. arXiv preprint arXiv:2210.03259.
[10] Aroyo, A. M., de Bruyne, J., Dheu, O., Fosch-Villaronga, E., Gudkov, A., Hoch, H.,... Tamò-Larrieux, A. (2021). Overtrusting robots: Setting a research agenda to mitigate overtrust in automation. Paladyn, Journal of Behavioral Robotics, 12(1), 423-436. doi: 10.1515/pjbr-2021-0029
[11] Bainbridge, W. A., Hart, J. W., Kim, E. S., & Scassellati, B. (2011). The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics, 3, 41-52. doi: 10.1007/s12369-010-0082-7
[12] Barfield, J. K. (2021, August). Self-disclosure of personal information, robot appearance, and robot trustworthiness. In 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) (pp. 67-72). Vancouver, BC, Canada.
[13] Beller, J., Heesen, M., & Vollrath, M. (2013). Improving the driver-automation interaction: An approach using automation uncertainty. Human Factors, 55(6), 1130-1141. pmid: 24745204
[14] Bhatt, U., Xiang, A., Sharma, S., Weller, A., Taly, A., Jia, Y.,... Eckersley, P. (2020, January). Explainable machine learning in deployment. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 648-657). https://doi.org/10.1145/3351095.3375624
[15] Biswas, M., & Murray, J. C. (2015, September). Towards an imperfect robot for long-term companionship: Case studies using cognitive biases. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5978-5983). Hamburg, Germany.
[16] Borau, S., Otterbring, T., Laporte, S., & Fosso Wamba, S. (2021). The most human bot: Female gendering increases humanness perceptions of bots and acceptance of AI. Psychology & Marketing, 38(7), 1052-1068.
[17] Borenstein, J., Wagner, A. R., & Howard, A. (2018). Overtrust of pediatric health-care robots: A preliminary survey of parent perspectives. IEEE Robotics & Automation Magazine, 25(1), 46-54.
[18] Breazeal, C. (2003). Toward sociable robots. Robotics and Autonomous Systems, 42(3-4), 167-175. doi: 10.1016/S0921-8890(02)00373-1
[19] Buçinca, Z., Malaya, M. B., & Gajos, K. Z. (2021). To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-21.
[20] |
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809-825.
doi: 10.1177/0022243719851788 |
[21] |
Chen, J. Y., Lakhmani, S. G., Stowers, K., Selkowitz, A. R., Wright, J. L., & Barnes, M. (2018). Situation awareness- based agent transparency and human-autonomy teaming effectiveness. Theoretical Issues in Ergonomics Science, 19(3), 259-282.
doi: 10.1080/1463922X.2017.1315750 URL |
[22] | Chiarella, S. G., Torromino, G., Gagliardi, D. M., Rossi, D., Babiloni, F., & Cartocci, G. (2022). Investigating the negative bias towards Artificial Intelligence: Effects of prior assignment of AI-authorship on the aesthetic appreciation of abstract paintings. Computers in Human Behavior, 137(C), 107406. |
[23] | Chien, S. Y., Lewis, M., Sycara, K., Liu, J. S., & Kumru, A. (2016, October). Influence of cultural factors in dynamic trust in automation. In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 2884-2889). Budapest, Hungary. |
[24] | Correia, F., Guerra, C., Mascarenhas, S., Melo, F. S., & Paiva, A. (2018, July). Exploring the impact of fault justification in human-robot trust. In Proceedings of the 17th international conference on autonomous agents and multiagent systems (pp. 507-513). Stockholm, Sweden. |
[25] | Cymek, D. H., Truckenbrodt, A., & Onnasch, L. (2023). Lean back or lean in? Exploring social loafing in human- robot teams. Frontiers in Robotics and AI, 10, 1249252, doi: 10.3389/frobt.2023.1249252. |
[26] |
de Visser, E. J., Beatty, P. J., Estepp, J. R., Kohn, S., Abubshait, A., Fedota, J. R., & McDonald, C. G. (2018). Learning from the slips of others: Neural correlates of trust in automated agents. Frontiers in Human Neuroscience, 12, 309.
doi: 10.3389/fnhum.2018.00309 pmid: 30147648 |
[27] |
de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A., McKnight, P. E., Krueger, F., & Parasuraman, R. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22(3), 331-349.
doi: 10.1037/xap0000092 URL |
[28] |
de Visser, E. J., Peeters, M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., & Neerincx, M. A. (2020). Towards a theory of longitudinal trust calibration in human-robot teams. International Journal of Social Robotics, 12(2), 459-478.
doi: 10.1007/s12369-019-00596-x |
[29] |
Demir, K. A., Döven, G., & Sezen, B. (2019). Industry 5.0 and human-robot co-working. Procedia Computer Science, 158, 688-695.
doi: 10.1016/j.procs.2019.09.104 URL |
[30] |
Dietvorst, B. J., & Bharti, S. (2020). People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error. Psychological Science, 31(10), 1302-1314.
doi: 10.1177/0956797620948841 URL |
[31] |
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126.
doi: 10.1037/xge0000033 URL |
[32] Dijkstra, J. J. (1999). User agreement with incorrect expert system advice. Behaviour & Information Technology, 18(6), 399-411.
[33] Dogruel, L., Masur, P., & Joeckel, S. (2022). Development and validation of an algorithm literacy scale for internet users. Communication Methods and Measures, 16(2), 115-133. doi: 10.1080/19312458.2021.1968361
[34] Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697-718. doi: 10.1016/S1071-5819(03)00038-7
[35] Dzindolet, M. T., Pierce, L. G., Beck, H. P., & Dawe, L. A. (2002). The perceived utility of human and automated aids in a visual detection task. Human Factors, 44(1), 79-94. pmid: 12118875
[36] Ehsan, U., Passi, S., Liao, Q. V., Chan, L., Lee, I., Muller, M., & Riedl, M. O. (2021). The who in explainable AI: How AI background shapes perceptions of AI explanations. arXiv preprint arXiv:2107.13509.
[37] Eloy, L., Doherty, E. J., Spencer, C. A., Bobko, P., & Hirshfield, L. (2022). Using fNIRS to identify transparency- and reliability-sensitive markers of trust across multiple timescales in collaborative human-human-agent triads. Frontiers in Neuroergonomics, 3, 838625. doi: 10.3389/fnrgo.2022.838625
[38] Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864-886. doi: 10.1037/0033-295X.114.4.864 pmid: 17907867
[39] Esterwood, C., & Robert, L. P. (2021, August). Do you still trust me? Human-robot trust repair strategies. In Proceedings of the 30th IEEE International Conference on Robot and Human Interactive Communication. Vancouver, BC, Canada.
[40] Esterwood, C., & Robert, L. P. (2022, March). Having the right attitude: How attitude impacts trust repair in human-robot interaction. In 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 332-341). Sapporo, Japan.
[41] Filiz, I., Judek, J. R., Lorenz, M., & Spiwoks, M. (2021). Reducing algorithm aversion through experience. Journal of Behavioral and Experimental Finance, 31, 100524. doi: 10.1016/j.jbef.2021.100524
[42] Formosa, P., Rogers, W., Griep, Y., Bankins, S., & Richards, D. (2022). Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Computers in Human Behavior, 133, 107296. doi: 10.1016/j.chb.2022.107296
[43] Geraci, A., D'Amico, A., Pipitone, A., Seidita, V., & Chella, A. (2021). Automation inner speech as an anthropomorphic feature affecting human trust: Current issues and future directions. Frontiers in Robotics and AI, 8, 620026. doi: 10.3389/frobt.2021.620026
[44] Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: A systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association, 19(1), 121-127. doi: 10.1136/amiajnl-2011-000089 pmid: 21685142
[45] Groom, V., Chen, J., Johnson, T., Kara, F. A., & Nass, C. (2010, March). Critic, compatriot, or chump? Responses to robot blame attribution. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 211-217). IEEE.
[46] Hald, K., Weitz, K., André, E., & Rehm, M. (2021, November). “An error occurred!” Trust repair with virtual robot using levels of mistake explanation. In Proceedings of the 9th International Conference on Human-Agent Interaction (pp. 218-226). Virtual Event, Japan.
[47] Hamacher, A., Bianchi-Berthouze, N., Pipe, A. G., & Eder, K. (2016, August). Believing in BERT: Using expressive communication to enhance trust and counteract operational error in physical human-robot interaction. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 493-500). New York, NY, USA.
[48] Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517-527. pmid: 22046724
[49] Hancock, P. A., Kessler, T. T., Kaplan, A. D., Brill, J. C., & Szalma, J. L. (2021). Evolving trust in robots: Specification through sequential and comparative meta-analyses. Human Factors, 63(7), 1196-1229. doi: 10.1177/0018720820922080
[50] Haring, K. S., Matsumoto, Y., & Watanabe, K. (2013). How do people perceive and trust a lifelike robot. In Proceedings of the World Congress on Engineering and Computer Science (pp. 425-430). San Francisco, USA.
[51] Haring, K. S., Satterfield, K. M., Tossell, C. C., de Visser, E. J., Lyons, J. R., Mancuso, V. F.,... Funke, G. J. (2021). Robot authority in human-robot teaming: Effects of human-likeness and physical embodiment on compliance. Frontiers in Psychology, 12, 625713. doi: 10.3389/fpsyg.2021.625713
[52] Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407-434. doi: 10.1177/0018720814547570 pmid: 25875432
[53] Hopko, S. K., & Mehta, R. K. (2022). Trust in shared-space collaborative robots: Shedding light on the human brain. Human Factors, 66(2). https://doi.org/10.1177/00187208221109039
[54] Hou, Y. T. Y., & Jung, M. F. (2021). Who is the expert? Reconciling algorithm aversion and algorithm appreciation in AI-supported decision making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 477.
[55] Jensen, T., Albayram, Y., Khan, M. M. H., Fahim, M. A. A., Buck, R., & Coman, E. (2019, June). The apple does fall far from the tree: User separation of a system from its developers in human-automation trust repair. In Proceedings of the 2019 on Designing Interactive Systems Conference (pp. 1071-1082). San Diego, CA, USA.
[56] Jessup, S. A., Gibson, A., Capiola, A. A., Alarcon, G. M., & Borders, M. (2020, January). Investigating the effect of trust manipulations on affect over time in human-human versus human-robot interactions. In Proceedings of the 53rd Hawaii International Conference on System Sciences (pp. 1-10).
[57] Jung, Y., & Lee, K. M. (2004). Effects of physical embodiment on social presence of social robots. Proceedings of PRESENCE, 80-87.
[58] Kaniarasu, P., & Steinfeld, A. M. (2014, August). Effects of blame on trust in human robot interaction. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication (pp. 850-855). Edinburgh, Scotland, UK.
[59] Khavas, Z. R. (2021). A review on trust in human-robot interaction. arXiv preprint arXiv:2105.10045.
[60] Khavas, Z. R., Ahmadzadeh, S. R., & Robinette, P. (2020, November). Modeling trust in human-robot interaction: A survey. In Social Robotics: 12th International Conference, ICSR (pp. 529-541). https://doi.org/10.1007/978-3-030-62056-1_44
[61] Kim, D., & Kim, S. (2021). A model for user acceptance of robot journalism: Influence of positive disconfirmation and uncertainty avoidance. Technological Forecasting and Social Change, 163, 120448. doi: 10.1016/j.techfore.2020.120448
[62] Kim, P. H., Dirks, K. T., & Cooper, C. D. (2009). The repair of trust: A dynamic bilateral perspective and multilevel conceptualization. Academy of Management Review, 34(3), 401-422. doi: 10.5465/amr.2009.40631887
[63] Kim, P. H., Ferrin, D. L., Cooper, C. D., & Dirks, K. T. (2004). Removing the shadow of suspicion: The effects of apology versus denial for repairing competence- versus integrity-based trust violations. Journal of Applied Psychology, 89(1), 104-118. doi: 10.1037/0021-9010.89.1.104
[64] Kim, T., & Hinds, P. (2006, September). Who should I blame? Effects of autonomy and transparency on attributions in human-robot interaction. In ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication (pp. 80-85). Hatfield, UK.
[65] Kim, T., & Song, H. (2021). How should intelligent agents apologize to restore trust? Interaction effects between anthropomorphism and apology attribution on trust repair. Telematics and Informatics, 61, 101595. doi: 10.1016/j.tele.2021.101595
[66] Kox, E. S., Kerstholt, J. H., Hueting, T. F., & de Vries, P. W. (2021). Trust repair in human-agent teams: The effectiveness of explanations and expressing regret. Autonomous Agents and Multi-Agent Systems, 35(2), 30.
[67] Kraus, J., Scholz, D., Messner, E. M., Messner, M., & Baumann, M. (2020). Scared to trust? Predicting trust in highly automated driving by depressiveness, negative self-evaluations and state anxiety. Frontiers in Psychology, 10, 2917. doi: 10.3389/fpsyg.2019.02917
[68] Kraus, J., Scholz, D., Stiegemeier, D., & Baumann, M. (2020). The more you know: Trust dynamics and calibration in highly automated driving and the effects of take-overs, system malfunction, and system transparency. Human Factors, 62(5), 718-736. doi: 10.1177/0018720819853686 pmid: 31233695
[69] Kundinger, T., Wintersberger, P., & Riener, A. (2019, May). (Over)trust in automated driving: The sleeping pill of tomorrow? In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-6). Glasgow, Scotland, UK.
[70] Kunze, A., Summerskill, S. J., Marshall, R., & Filtness, A. J. (2019). Automation transparency: Implications of uncertainty communication for human-automation interaction and interfaces. Ergonomics, 62(3), 345-360. doi: 10.1080/00140139.2018.1547842 pmid: 30501566
[71] Kwon, J. H., Jung, S. H., Choi, H. J., & Kim, J. (2021). Antecedent factors that affect restaurant brand trust and brand loyalty: Focusing on US and Korean consumers. Journal of Product & Brand Management, 30(7), 990-1015.
[72] Lee, J. D., & Kolodge, K. (2020). Exploring trust in self-driving vehicles through text analysis. Human Factors, 62(2), 260-277. doi: 10.1177/0018720819872672
[73] Lee, J. D., & Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10), 1243-1270. doi: 10.1080/00140139208967392 pmid: 1516577
[74] Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40(1), 153-184. doi: 10.1006/ijhc.1994.1007
[75] Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. doi: 10.1518/hfes.46.1.50_30392 pmid: 15151155
[76] Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 1-16.
[77] Lee, M. K., Kiesler, S., Forlizzi, J., Srinivasa, S., & Rybski, P. (2010, March). Gracefully mitigating breakdowns in robotic services. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 203-210). Osaka, Japan.
[78] Lee, S. L., Lau, I. Y. M., Kiesler, S., & Chiu, C. Y. (2005, April). Human mental models of humanoid robots. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (pp. 2767-2772). Barcelona, Spain.
[79] |
Li, D., Rau, P. P., & Li, Y. (2010). A cross-cultural study: Effect of robot appearance and task. International Journal of Social Robotics, 2, 175-186.
doi: 10.1007/s12369-010-0056-9 URL |
[80] |
Liu, X. S., Yi, X. S., & Wan, L. C. (2022). Friendly or competent? The effects of perception of robot appearance and service context on usage intention. Annals of Tourism Research, 92, 103324.
doi: 10.1016/j.annals.2021.103324 URL |
[81] | Löffler, D., Dörrenbächer, J., & Hassenzahl, M. (2020, March). The uncanny valley effect in zoomorphic robots: The U-shaped relation between animal likeness and likeability. In Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction (pp. 261-270). Cambridge, United Kingdom. |
[82] |
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90-103.
doi: 10.1016/j.obhdp.2018.12.005 |
[83] |
Lyell, D., & Coiera, E. (2017). Automation bias and verification complexity: A systematic review. Journal of the American Medical Informatics Association, 24(2), 423-431.
doi: 10.1093/jamia/ocw105 pmid: 27516495 |
[84] |
Lyons, J. B., Hamdan, I., & Vo, T. Q. (2023). Explanations and trust: What happens to trust when a robot partner does something unexpected? Computers in Human Behavior, 138, 107473.
doi: 10.1016/j.chb.2022.107473 URL |
[85] | Lyons, J. B., Nam, C. S., Jessup, S. A., Vo, T. Q., & Wynne, K. T. (2020, September). The role of individual differences as predictors of trust in autonomous security robots. In 2020 IEEE International Conference on Human- Machine Systems (ICHMS) (pp. 1-5). Rome, Italy. |
[86] | Lyons, J. B., Sadler, G. G., Koltai, K., Battiste, H., Ho, N. T., Hoffmann, L. C.,... Shively, R. (2017). Shaping trust through transparent design:Theoretical and experimental guidelines. In: Savage-Knepshield, P., & Chen, J (Eds.), Advances in Human Factors in Robots and Unmanned Systems (pp. 127-136). Springer International Publishing. |
[87] |
Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human-human and human- automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277-301.
doi: 10.1080/14639220500337708 URL |
[88] | Martinez, J. E., VanLeeuwen, D., Stringam, B. B., & Fraune, M. R. (2023, March). Hey?! What did you think about that robot? Groups polarize users’ acceptance and trust of food delivery robots. In Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (pp. 417-427). https://doi.org/10.1145/3568162.3576984 |
[89] |
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734.
doi: 10.2307/258792 URL |
[90] |
McGuirl, J. M., & Sarter, N. B. (2006). Supporting trust calibration and the effective use of decision aids by presenting dynamic system confidence information. Human Factors, 48(4), 656-665.
pmid: 17240714 |
[91] |
Meng, J., & Berger, B. K. (2019). The impact of organizational culture and leadership performance on PR professionals’ job satisfaction: Testing the joint mediating effects of engagement and trust. Public Relations Review, 45(1), 64-75.
doi: 10.1016/j.pubrev.2018.11.002 URL |
[92] |
Merritt, S. M., Heimbaugh, H., LaChapell, J., & Lee, D. (2013). I trust it, but I don’ t know why: Effects of implicit attitudes toward automation on trust in an automated system. Human Factors, 55(3), 520-534.
doi: 10.1177/0018720812465081 URL |
[93] |
Mirnig, N., Stollnberger, G., Miksch, M., Stadler, S., Giuliani, M., & Tscheligi, M. (2017). To err is robot: How humans assess and act toward an erroneous social robot. Frontiers in Robotics and AI, 4, 21.
doi: 10.3389/frobt.2017.00021 URL |
[94] Montague, E., & Xu, J. (2012). Understanding active and passive users: The effects of an active user using normal, hard and unreliable technologies on user assessment of trust in technology and co-user. Applied Ergonomics, 43(4), 702-712. doi: 10.1016/j.apergo.2011.11.002 pmid: 22192788
[95] Montague, E., Xu, J., & Chiou, E. (2014). Shared experiences of technology and trust: An experimental study of physiological compliance between active and passive users in technology-mediated collaborative encounters. IEEE Transactions on Human-Machine Systems, 44(5), 614-624. doi: 10.1109/THMS.2014.2325859
[96] Mosier, K. L., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance (pp. 201-220). CRC Press.
[97] Müller, R., Schischke, D., Graf, B., & Antoni, C. H. (2023). How can we avoid information overload and techno-frustration as a virtual team? The effect of shared mental models of information and communication technology on information overload and techno-frustration. Computers in Human Behavior, 138, 107438. doi: 10.1016/j.chb.2022.107438
[98] Naiseh, M., Al-Thani, D., Jiang, N., & Ali, R. (2021). Explainable recommendation: When design meets trust calibration. World Wide Web, 24(5), 1857-1884. doi: 10.1007/s11280-021-00916-0
[99] Naiseh, M., Al-Thani, D., Jiang, N., & Ali, R. (2023). How the different explanation classes impact trust calibration: The case of clinical decision support systems. International Journal of Human-Computer Studies, 169, 102941. doi: 10.1016/j.ijhcs.2022.102941
[100] Oh, S., Seong, Y., Yi, S., & Park, S. (2020). Neurological measurement of human trust in automation using electroencephalogram. International Journal of Fuzzy Logic and Intelligent Systems, 20(4), 261-271. doi: 10.5391/IJFIS.2020.20.4.261
[101] Okamura, K., & Yamada, S. (2020). Adaptive trust calibration for human-AI collaboration. PLoS One, 15(2), e0229132. doi: 10.1371/journal.pone.0229132
[102] Okuoka, K., Enami, K., Kimoto, M., & Imai, M. (2022). Multi-device trust transfer: Can trust be transferred among multiple devices? Frontiers in Psychology, 13, 920844. doi: 10.3389/fpsyg.2022.920844
[103] Onnasch, L., & Panayotidis, T. (2020, December). Social loafing with robots - An empirical investigation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 64(1), 97-101.
[104] Ososky, S., Schuster, D., Phillips, E., & Jentsch, F. G. (2013, March). Building appropriate trust in human-robot teams. In Proceedings of the 2013 AAAI Spring Symposium (pp. 60-65). Palo Alto, CA, USA.
[105] Papenmeier, A., Englebienne, G., & Seifert, C. (2019). How model accuracy and explanation fidelity influence user trust. arXiv preprint arXiv:1907.12652.
[106] |
Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381-410.
pmid: 21077562 |
[107] |
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230-253.
doi: 10.1518/001872097778543886 URL |
[108] | Perkins, R., Khavas, Z. R., & Robinette, P. (2021). Trust calibration and trust respect: A method for building team cohesion in human robot teams. arXiv preprint, arXiv: 2110.06809. |
[109] |
Petrocchi, S., Iannello, P., Lecciso, F., Levante, A., Antonietti, A., & Schulz, P. J. (2019). Interpersonal trust in doctor-patient relation: Evidence from dyadic analysis and association with quality of dyadic communication. Social Science & Medicine, 235, 112391.
doi: 10.1016/j.socscimed.2019.112391 URL |
[110] |
Pop, V. L., Shrewsbury, A., & Durso, F. T. (2015). Individual differences in the calibration of trust in automation. Human Factors, 57(4), 545-556.
doi: 10.1177/0018720814564422 pmid: 25977317 |
[111] | Pynadath, D. V., Wang, N., & Kamireddy, S. (2019, September). A Markovian method for predicting trust behavior in human-agent interaction. In Proceedings of the 7th International Conference on Human-Agent Interaction (pp. 171-178). Kyoto, Japan. |
[112] | Quinn, D. B. (2018). Exploring the efficacy of social trust repair in human-automation interactions (Unpublished doctoral dissertation). Clemson University, Lawton. |
[113] | Ragni, M., Rudenko, A., Kuhnert, B., & Arras, K. O. (2016, August). Errare humanum est: Erroneous robots in human- robot interaction. In 2016 25th IEEE International symposium on robot and human interactive communication (RO-MAN) (pp. 501-506). New York, NY, USA. |
[114] |
Rempel, J. K., Holmes, J. G., & Zanna, M. P. (1985). Trust in close relationships. Journal of Personality and Social Psychology, 49(1), 95-112.
doi: 10.1037/0022-3514.49.1.95 URL |
[115] | Robinette, P., Howard, A. M., & Wagner, A. R. (2015, October). Timing is key for robot trust repair. In Social Robotics:7th International Conference, ICSR. Paris, France. |
[116] | Robinette, P., Howard, A. M., & Wagner, A. R. (2017a). Conceptualizing overtrust in robots: Why do people trust a robot that previously failed?. In Lawless, W., Mittu, R., Sofge, D., & Russell, S (Eds), Autonomy and artificial intelligence: A threat or savior? (pp.129-155). Springer, Cham. |
[117] |
Robinette, P., Howard, A. M., & Wagner, A. R. (2017b). Effect of robot performance on human-robot trust in time-critical situations. IEEE Transactions on Human- Machine Systems, 47(4), 425-436.
doi: 10.1109/THMS.2017.2648849 URL |
[118] | Robinette, P., Li, W., Allen, R., Howard, A. M., & Wagner, A. R. (2016, March). Overtrust of robots in emergency evacuation scenarios. In 2016 11th ACM/IEEE international conference on human-robot interaction (HRI) (pp. 101-108). Christchurch, New Zealand. |
[119] | Rossi, A., Dautenhahn, K., Koay, K. L., & Walters, M. L. (2017, November). Human perceptions of the severity of domestic robot errors. In Social Robotics:9th International Conference (ICSR) (pp. 647-656).Tsukuba, Japan. |
[120] |
Salem, M., Eyssel, F., Rohlfing, K., Kopp, S., & Joublin, F. (2013). To err is human (-like): Effects of robot gesture on perceived anthropomorphism and likability. International Journal of Social Robotics, 5, 313-323.
doi: 10.1007/s12369-013-0196-9 URL |
[121] |
Sanders, T. L., Kaplan, A., Koch, R., Schwartz, M., & Hancock, P. A. (2019). The relationship between trust and use choice in human-robot interaction. Human Factors, 61(4), 614-626.
doi: 10.1177/0018720818816838 pmid: 30601683 |
[122] | Sanders, T. L., MacArthur, K., Volante, W., Hancock, G., MacGillivray, T., Shugars, W., & Hancock, P. A. (2017, September). Trust and prior experience in human-robot interaction. In Proceedings of the human factors and ergonomics society annual meeting (pp. 1809-1813). Sage CA: Los Angeles, CA. |
[123] | Sarkar, S., Araiza-Illan, D., & Eder, K. (2017). Effects of faults, experience, and personality on trust in a robot co-worker. arXiv preprint, arXiv:1703.02335. |
[124] | Sebo, S. S., Krishnamurthi, P., & Scassellati, B. (2019, March). “I don't believe you”: Investigating the effects of robot trust violation and repair. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 57-65). Daegu, Korea (South). |
[125] Seong, Y., & Bisantz, A. M. (2008). The impact of cognitive feedback on judgment performance and trust with decision aids. International Journal of Industrial Ergonomics, 38(7-8), 608-625. doi: 10.1016/j.ergon.2008.01.007
[126] Shank, D. B., Bowen, M., Burns, A., & Dew, M. (2021). Humans are perceived as better, but weaker, than artificial intelligence: A comparison of affective impressions of humans, AIs, and computer systems in roles on teams. Computers in Human Behavior Reports, 3, 100092. doi: 10.1016/j.chbr.2021.100092
[127] Shi, Y., Azzolin, N., Picardi, A., Zhu, T., Bordegoni, M., & Caruso, G. (2020). A virtual reality-based platform to validate HMI design for increasing user's trust in autonomous vehicle. Computer-Aided Design and Applications, 18(3), 502-518. doi: 10.14733/cadaps
[128] Shin, D., Zaid, B., & Ibahrine, M. (2020, November). Algorithm appreciation: Algorithmic performance, developmental processes, and user interactions. In 2020 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI) (pp. 1-5). Sharjah, United Arab Emirates.
[129] Short, E., Hart, J., Vu, M., & Scassellati, B. (2010, March). No fair! An interaction with a cheating robot. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 219-226). Osaka, Japan.
[130] Song, Y., & Luximon, Y. (2020). Trust in AI agent: A systematic review of facial anthropomorphic trustworthiness for social robot design. Sensors, 20(18), 5087.
[131] Sweller, J. (2011). Cognitive load theory. Psychology of Learning and Motivation, 55, 37-76. https://doi.org/10.1016/B978-0-12-387691-1.00002-8
[132] Tam, K. Y., & Ho, S. Y. (2005). Web personalization as a persuasion strategy: An elaboration likelihood model perspective. Information Systems Research, 16(3), 271-291. doi: 10.1287/isre.1050.0058
[133] Toader, D. C., Boca, G., Toader, R., Măcelaru, M., Toader, C., Ighian, D., & Rădulescu, A. T. (2019). The effect of social presence and chatbot errors on trust. Sustainability, 12(1), 256.
[134] Ullman, D., & Malle, B. F. (2017, March). Human-robot trust: Just a button press away. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (pp. 309-310). Vienna, Austria.
[135] van Maris, A., Lehmann, H., Natale, L., & Grzyb, B. (2017, March). The influence of a robot's embodiment on trust: A longitudinal study. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (pp. 313-314). Vienna, Austria.
[136] van Pinxteren, M. M., Wetzels, R. W., Rüger, J., Pluymaekers, M., & Wetzels, M. (2019). Trust in humanoid robots: Implications for services marketing. Journal of Services Marketing, 33(4), 507-518. doi: 10.1108/JSM-01-2018-0045
[137] Volante, W. G., Sosna, J., Kessler, T., Sanders, T., & Hancock, P. A. (2019). Social conformity effects on trust in simulation-based human-robot interaction. Human Factors, 61(5), 805-815. doi: 10.1177/0018720818811190 pmid: 30431337
[138] Wagner, A. R., Borenstein, J., & Howard, A. (2018). Overtrust in the robotic age. Communications of the ACM, 61(9), 22-24.
[139] Walker, F., Wang, J., Martens, M. H., & Verwey, W. B. (2019). Gaze behaviour and electrodermal activity: Objective measures of drivers' trust in automated vehicles. Transportation Research Part F: Traffic Psychology and Behaviour, 64, 401-412. doi: 10.1016/j.trf.2019.05.021
[140] Wang, N., Pynadath, D. V., Rovira, E., Barnes, M. J., & Hill, S. G. (2018). Is it my looks? Or something I said? The impact of explanations, embodiment, and expectations on trust and performance in human-robot teams. In J. Ham, E. Karapanos, P. Morita, & C. Burns (Eds.), Persuasive Technology (pp. 56-69). Springer, Cham.
[141] Washburn, A., Adeleye, A., An, T., & Riek, L. D. (2020). Robot errors in proximate HRI: How functionality framing affects perceived reliability and trust. ACM Transactions on Human-Robot Interaction (THRI), 9(3), 1-21.
[142] Wickens, C. D. (1995). Designing for situation awareness and trust in automation. IFAC Proceedings Volumes, 28(23), 365-370.
[143] Wullenkord, R., Fraune, M. R., Eyssel, F., & Šabanović, S. (2016, August). Getting in touch: How imagined, actual, and physical contact affect evaluations of robots. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 980-985). New York, USA.
[144] Xu, J., de'Aira, G. B., & Howard, A. (2018, August). Would you trust a robot therapist? Validating the equivalency of trust in human-robot healthcare scenarios. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 442-447). Nanjing, China.
[145] Xu, J., & Howard, A. (2018, August). The impact of first impressions on human-robot trust during problem-solving scenarios. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 435-441). Nanjing, China.
[146] Xu, J., & Montague, E. (2013, September). Group polarization of trust in technology. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 344-348). Sage CA: Los Angeles, CA.
[147] Yen, C., & Chiang, M. C. (2021). Trust me, if you can: A study on the factors that influence consumers' purchase intention triggered by chatbots based on brain image evidence and self-reported assessments. Behaviour & Information Technology, 40(11), 1177-1194.