心理科学进展 (Advances in Psychological Science) ›› 2023, Vol. 31 ›› Issue (1): 60-77. doi: 10.3724/SP.J.1042.2023.00060

The process motivation model of algorithmic decision-making approach and avoidance
XIE Caifeng1, WU Jiahua1, XU Liying2, YU Feng1, ZHANG Yuyan1, XIE Yingying3

Received: 2022-02-19
Online: 2023-01-15
Published: 2022-10-13
Contact: XU Liying, YU Feng
E-mail: liyingxu@mail.tsinghua.edu.cn; psychpedia@whu.edu.cn
Abstract: Algorithms are widely used in decision-making, but compared with decisions made by humans, algorithmic decisions are more likely to polarize individuals' reactions even when the content of the decision is identical. This divergence is termed algorithmic decision-making approach and avoidance. Approach means that individuals perceive algorithmic decisions as fairer, less biased and discriminatory, and more trustworthy and acceptable than human decisions; avoidance is the opposite. The process motivation model of algorithmic decision-making approach and avoidance explains this phenomenon: it identifies three stages that human-algorithm interaction passes through, namely initial behavioral interaction, establishing parasocial relationships, and forming identity, and it describes how cognitive, relational, and existential motives at each stage trigger individuals' approach or avoidance responses. Future research could examine the effects of humanness perception and intergroup perception on algorithmic decision-making approach and avoidance, and could adopt a more social perspective to explore the reversal of approach and avoidance as well as other possible psychological motives.
XIE Caifeng, WU Jiahua, XU Liying, YU Feng, ZHANG Yuyan, XIE Yingying. (2023). The process motivation model of algorithmic decision-making approach and avoidance. Advances in Psychological Science, 31(1), 60-77.