Acta Psychologica Sinica ›› 2024, Vol. 56 ›› Issue (4): 497-514. doi: 10.3724/SP.J.1041.2024.00497
Received: 2023-08-22
Online: 2024-01-17
Published: 2024-04-25
Corresponding authors: XU Liying, E-mail: liyingxu830@gmail.com; YU Feng, E-mail: psychpedia@whu.edu.cn
Funding:
ZHAO Yijun, XU Liying, YU Feng, JIN Wanglong
Abstract:
Using algorithms to assist or replace human decision-making in the workplace has become commonplace, yet people exhibit algorithm aversion. Through four progressive experiments set in different workplace scenarios, this research compared people's attitudes toward decisions made by human versus algorithmic decision-makers and examined the underlying mechanism and boundary conditions. The results showed that, in workplace contexts, people rated algorithmic decisions as less permissible, liked them less, and were less willing to use them than human decisions, demonstrating "algorithm aversion." The underlying psychological mechanism is that people perceive decisions made by algorithms as less transparent than those made by humans (Experiments 2-3). Further, endowing the algorithm with anthropomorphic features reversed this aversion and increased acceptance of algorithmic decisions (Experiment 4). These findings help to better understand people's reactions to algorithmic decision-making and offer insights for promoting intelligent social governance and guiding the ethical use of algorithms.
CLC Number:
ZHAO Yijun, XU Liying, YU Feng, JIN Wanglong. (2024). Perceived opacity leads to algorithm aversion in the workplace. Acta Psychologica Sinica, 56(4), 497-514.
Dependent variable | Indirect effect | Indirect effect 95% LLCI | Indirect effect 95% ULCI | Direct effect | Direct effect 95% LLCI | Direct effect 95% ULCI |
---|---|---|---|---|---|---|
Permissibility | −0.50 | −0.96 | −0.08 | −0.38 | −0.85 | 0.09 |
Liking | −0.56 | −1.09 | −0.09 | −0.12 | −0.55 | 0.32 |
Willingness to use | −0.25 | −0.51 | −0.04 | −0.37 | −0.63 | −0.10 |
Table 1. Bootstrap test of the significance of the mediation effect in Experiment 2, with effect sizes
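The confidence intervals above come from a bootstrap test of the indirect (mediated) effect. As a minimal sketch of how such a percentile-bootstrap mediation test works, the snippet below resamples cases, re-estimates the a×b indirect effect from two OLS regressions, and takes the 2.5th and 97.5th percentiles as the CI. All variable names and data here are simulated for illustration; they are not the study's dataset or its exact analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for the experiment's variables (illustrative only):
# X = decision-maker condition (0 = human, 1 = algorithm)
# M = perceived opacity (mediator), Y = permissibility rating (outcome)
n = 200
X = rng.integers(0, 2, n)
M = 0.8 * X + rng.normal(0, 1, n)
Y = -0.6 * M - 0.3 * X + rng.normal(0, 1, n)

def indirect_effect(X, M, Y):
    """a*b indirect effect from two OLS regressions: M ~ X and Y ~ X + M."""
    a = np.polyfit(X, M, 1)[0]                       # path a: X -> M
    design = np.column_stack([np.ones_like(X), X, M])
    coefs, *_ = np.linalg.lstsq(design, Y, rcond=None)
    b = coefs[2]                                     # path b: M -> Y, controlling for X
    return a * b

# Percentile bootstrap: resample cases with replacement, recompute a*b
boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(X[idx], M[idx], Y[idx])

llci, ulci = np.quantile(boot, [0.025, 0.975])
print(f"indirect effect = {indirect_effect(X, M, Y):.2f}, 95% CI [{llci:.2f}, {ulci:.2f}]")
```

As in Table 1, the indirect effect is deemed significant when the 95% CI excludes zero; published analyses of this kind typically use Hayes's PROCESS macro rather than hand-rolled code.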