心理学报 (Acta Psychologica Sinica) ›› 2025, Vol. 57 ›› Issue (11): 2060-2082. doi: 10.3724/SP.J.1041.2025.2060 cstr: 32110.14.2025.2060
• Special Issue: The Psychology and Governance of Artificial Intelligence •
XU Liying#, ZHAO Yijun#, YU Feng

Received: 2024-01-26
Online: 2025-09-24
Published: 2025-11-25

Corresponding author: YU Feng, E-mail: psychpedia@whu.edu.cn
Author note: # XU Liying and ZHAO Yijun are co-first authors of this article.
Abstract:
The rapid development of artificial intelligence (AI) has brought sweeping changes to organizations, where AI now takes on supervisory roles that can directly influence employee behavior. Six progressive vignette experiments (N = 1642) examined how people respond to advice on moral behavior offered by AI versus human supervisors, as well as the underlying psychological mechanism and boundary conditions. The results showed that people adhered less to advice on moral behavior from AI supervisors than from human supervisors (Experiments 1a-5); this occurred because interacting with an AI supervisor elicits lower evaluation apprehension (Experiments 2-3). Moreover, the stronger an individual's tendency to anthropomorphize, or the more anthropomorphized the AI supervisor, the more people adhered to the AI supervisor's advice on moral behavior (Experiments 4-5). These findings help clarify how people react to AI supervisors in organizations, reveal the limitations of AI supervisors in domains involving moral guidance, and provide practical guidance and improvement strategies for deploying AI leadership in organizational management.
许丽颖, 赵一骏, 喻丰. (2025). 人工智能主管提出的道德行为建议更少被遵从. 心理学报, 57(11), 2060-2082.
XU Liying, ZHAO Yijun, YU Feng. (2025). Employees adhere less to advice on moral behavior from artificial intelligence supervisors than human. Acta Psychologica Sinica, 57(11), 2060-2082.
| [1] | Aguinis, H., & Bradley, K. J. (2014). Best practice recommendations for designing and implementing experimental vignette methodology studies. Organizational Research Methods, 17(4), 351-371. |
| [2] | Aguinis, H., & Glavas, A. (2012). What we know and don’t know about corporate social responsibility: A review and research agenda. Journal of Management, 38(4), 932-968. |
| [3] | Andrews, D., Bonta, J., & Wormith, J. (2006). The recent past and near future of risk and/or need assessment. Crime and Delinquency, 52(1), 7-27. |
| [4] | Arango, L., Singaraju, P. S., & Niininen, O. (2023). Consumer responses to AI-generated charitable giving ads. Journal of Advertising, 52(4), 486-530. |
| [5] | Bailey, P. E., Leon, T., Elner, N. C., Moustafa, A. A., & Weidemann, G. (2023). A meta-analysis of the weight of advice in decision-making. Current Psychology, 42, 24516-24541. doi: 10.1007/s12144-022-03573-2 pmid: 39711945 |
| [6] | Banach, M., Lewek, J., Surma, S., Penson, P. E., Sahebkar, A., Martin, S. S., ... Bytyçi, I. (2023). The association between daily step count and all-cause and cardiovascular mortality: A meta-analysis. European Journal of Preventive Cardiology, 30(18), 1975-1985. doi: 10.1093/eurjpc/zwad229 pmid: 37555441 |
| [7] | Barclay, P., & Willer, R. (2007). Partner choice creates competitive altruism in humans. Proceedings of the Royal Society B: Biological Sciences, 274(1610), 749-753. pmid: 17255001 |
| [8] | Barnes, C. M., Lucianetti, L., Bhave, D. P., & Christian, M. S. (2015). “You wouldn’t like me when I’m sleepy”: Leaders’ sleep, daily abusive supervision, and work unit engagement. Academy of Management Journal, 58(5), 1419-1437. |
| [9] | Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323-370. |
| [10] | Beijer, S., Peccei, R., van Veldhoven, M., & Paauwe, J. (2021). The turn to employees in the measurement of human resource practices: A critical review and proposed way forward. Human Resource Management Journal, 31(1), 1-17. |
| [11] | Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34. doi: S0010-0277(18)30208-7 pmid: 30107256 |
| [12] | Bigman, Y. E., Wilson, D., Arnestad, M. N., Waytz, A., & Gray, K. (2023). Algorithmic discrimination causes less moral outrage than human discrimination. Journal of Experimental Psychology: General, 152(1), 4-27. |
| [13] | Bigman, Y., Waytz, A., Alterovitz, R., & Gray, K. (2019). Holding robots responsible: The elements of machine morality. Trends in Cognitive Sciences, 23(5), 365-368. doi: S1364-6613(19)30063-4 pmid: 30962074 |
| [14] | Bjugstad, K., Thach, E. C., Thompson, K. J., & Morris, A. A. (2006). A fresh look at followership: A model for matching followership and leadership styles. Journal of Behavioral & Applied Management, 7(3), 304-319. |
| [15] | Blair, A., & Saffidine, A. (2019). AI surpasses humans at six-player poker. Science, 365(6456), 864-865. doi: 10.1126/science.aay7774 pmid: 31467208 |
| [16] | Bolino, M. C., & Grant, A. M. (2016). The bright side of being prosocial at work, and the dark side, too: A review and agenda for research on other-oriented motives, behavior, and impact in organizations. The Academy of Management Annals, 10(1), 599-670. |
| [17] | Bonaccio, S., & Dalal, R. S. (2006). Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences. Organizational Behavior and Human Decision Processes, 101, 127-151. |
| [18] | Bonezzi, A., & Ostinelli, M. (2021). Can algorithms legitimize discrimination? Journal of Experimental Psychology: Applied, 27(2), 447-459. |
| [19] | Bonnefon, J., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576. |
| [20] | Bordia, P., Irmer, B. E., & Abusah, D. (2006). Differences in sharing knowledge interpersonally and via databases: The role of evaluation apprehension and perceived benefits. European Journal of Work and Organizational Psychology, 15(3), 262-280. |
| [21] | Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. In K. Frankish (Ed.), Cambridge handbook of artificial intelligence. Cambridge: Cambridge University Press. |
| [22] | Broadbent, E. (2017). Interactions with robots: The truths we reveal about ourselves. Annual Review of Psychology, 68, 627-652. doi: 10.1146/annurev-psych-010416-043958 pmid: 27648986 |
| [23] | Broom, D. M. (2006). The evolution of morality. Applied Animal Behaviour Science, 100(1-2), 20-28. |
| [24] | Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications: Profound change is coming, but roles for humans remain. Science, 358(6370), 1530-1534. doi: 10.1126/science.aap8062 pmid: 29269459 |
| [25] | Bucher, E., Fieseler, C., & Lutz, C. (2019). Mattering in digital labor. Journal of Managerial Psychology, 34(4), 307-324. doi: 10.1108/JMP-06-2018-0265 |
| [26] | Burger, J. M., Messian, N., Patel, S., del Prado, A., & Anderson, C. (2004). What a coincidence! The effects of incidental similarity on compliance. Personality and Social Psychology Bulletin, 30(1), 35-43. pmid: 15030641 |
| [27] | Burton, J. W., Stein, M. K., & Jensen, T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220-239. doi: 10.1002/bdm.2155 |
| [28] | Chan, I. C. C., Lam, L. W., Chow, C. W., Fong, L. H. N., & Law, R. (2017). The effect of online reviews on hotel booking intention: The role of reader-reviewer similarity. International Journal of Hospitality Management, 66, 54-65. |
| [29] | Cram, W. A., Wiener, M., Tarafdar, M., & Benlian, A. (2022). Examining the impact of algorithmic control on Uber drivers’ technostress. Journal of Management Information Systems, 39(2), 426-453. |
| [30] | de Cremer, D. (2017). CC’ing the boss on email makes employees feel less trusted. Harvard Business Review. Retrieved January 25, 2024, from https://hbr.org/2017/04/ccing-the-boss-on-email-makes-employees-feel-less-trusted |
| [31] | de Cremer, D. (2020). Leadership by algorithm: Who leads and who follows in the AI era? Basingstoke, Hampshire: Harriman House. |
| [32] | de Freitas, J., Agarwal, S., Schmitt, B., & Haslam, N. (2023). Psychological factors underlying attitudes towards AI tools. Nature Human Behaviour, 7, 1845-1854. doi: 10.1038/s41562-023-01734-2 pmid: 37985913 |
| [33] | Deci, E. L., Koestner, R., & Ryan, R. M. (1999). A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin, 125(6), 627-668. doi: 10.1037/0033-2909.125.6.627 pmid: 10589297 |
| [34] | Dietvorst, B. J., & Bartels, D. M. (2022). Consumers object to algorithms making morally relevant tradeoffs because of algorithms’ consequentialist decision strategies. Journal of Consumer Psychology, 32(3), 406-424. |
| [35] | Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126. |
| [36] | Duggan, J., Sherman, U., Carbery, R., & McDonnell, A. (2020). Algorithmic management and App-work in the gig economy: A research agenda for employment relations and HRM. Human Resource Management Journal, 30(1), 114-132. doi: 10.1111/1748-8583.12258 |
| [37] | Ellemers, N., Toorn, J. V. D., Paunov, Y., & Leeuwen, T. V. (2019). The psychology of morality: A review and analysis of empirical studies published from 1940 through 2017. Personality and Social Psychology Review, 23(4), 332-366. doi: 10.1177/1088868318811759 pmid: 30658545 |
| [38] | Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864-886. doi: 10.1037/0033-295X.114.4.864 pmid: 17907867 |
| [39] | Faraji-Rad, A., Samuelsen, B. M., & Warlop, L. (2015). On the persuasiveness of similar others: The role of mentalizing and the feeling of certainty. Journal of Consumer Research, 42(3), 458-471. |
| [40] | Fast, N. J., & Schroeder, J. (2020). Power and decision making: New directions for research in the age of artificial intelligence. Current Opinion in Psychology, 33, 172-176. doi: S2352-250X(19)30119-8 pmid: 31473586 |
| [41] | Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149-1160. doi: 10.3758/BRM.41.4.1149 pmid: 19897823 |
| [42] | Feinberg, M., Kovacheff, C., Teper, R., & Inbar, Y. (2019). Understanding the process of moralization: How eating meat becomes a moral issue. Journal of Personality and Social Psychology, 117(1), 50-72. doi: 10.1037/pspa0000149 pmid: 30869989 |
| [43] | Ferrari, F., Paladino, M., & Jetten, J. (2016). Blurring human-machine distinctions: Anthropomorphic appearance in social robots as a threat to human distinctiveness. International Journal of Social Robotics, 8(2), 287-302. |
| [44] | Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting & Social Change, 114(1), 254-280. |
| [45] | Fronczek, L. P., Mende, M., Scott, M. L., Nenkov, G. Y., & Gustafsson, A. (2017). Friend or foe? Can anthropomorphizing self-tracking devices backfire on marketers and consumers? Journal of the Academy of Marketing Science, 51(5), 1075-1097. |
| [46] | Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411-437. |
| [47] | Garland, H., & Brown, B. R. (1972). Face-saving as affected by subjects' sex, audiences' sex and audience expertise. Sociometry, 35(2), 280-289. pmid: 5033657 |
| [48] | Garvey, A. M., Kim, T., & Duhachek, A. (2023). Bad news? Send an AI. Good news? Send a human. Journal of Marketing, 87(1), 10-25. |
| [49] | Geerts, J., de Wit, J., & de Rooij, A. (2021). Brainstorming with a social robot facilitator: Better than human facilitation due to reduced evaluation apprehension? Frontiers in Robotics and AI, 8, 657291. |
| [50] | Gino, F., Brooks, A. W., & Schweitzer, M. E. (2012). Anxiety, advice, and the ability to discern: Feeling anxious motivates individuals to seek and use advice. Journal of Personality and Social Psychology, 102(3), 497-512. doi: 10.1037/a0026413 pmid: 22121890 |
| [51] | Gino, F., Shang, J., & Croson, R. (2009). The impact of information from similar or different advisors on judgments. Organizational Behavior and Human Decision Processes, 108, 287-302. |
| [52] | Gladden, M. E. (2014). The social robot as “charismatic leader”: A phenomenology of human submission to nonhuman power. Frontiers in Artificial Intelligence and Applications, 273, 329-339. |
| [53] | Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660. |
| [54] | Garg, S., Sinha, S., Kar, A., & Mani, M. (2022). A review of machine learning applications in human resource management. International Journal of Productivity and Performance Management, 71(5), 1590-1610. |
| [55] | Gratch, J., & Fast, N. J. (2022). The power to harm: AI assistants pave the way to unethical behavior. Current Opinion in Psychology, 47, 101382. |
| [56] | Gray, H., Gray, K., & Wegner, D. (2007). Dimensions of mind perception. Science, 315(5812), 619. |
| [57] | Gray, K., & Wegner, D. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125-130. doi: 10.1016/j.cognition.2012.06.007 pmid: 22784682 |
| [58] | Gray, K., & Wegner, D. M. (2009). Moral typecasting: Divergent perceptions of moral agents and moral patients. Journal of Personality and Social Psychology, 96(3), 505-520. doi: 10.1037/a0013748 pmid: 19254100 |
| [59] | Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23(2), 101-124. doi: 10.1080/1047840X.2012.651387 pmid: 22754268 |
| [60] | Guzman, A. (2020). Ontological boundaries between humans and computers and the implications for human-machine communication. Human-Machine Communication, 1, 37-54. |
| [61] | Hagendorff, T. (2024). Deception abilities emerged in large language models. Proceedings of the National Academy of Sciences of the United States of America, 121(24), e2317967121. |
| [62] | Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York, NY: Random House. |
| [63] | Halbusi, H. A., Ruiz-Palomino, P., Morales-Sánchez, R., & Fattah, F. A. M. A. (2021). Managerial ethical leadership, ethical climate and employee ethical behavior: Does moral attentiveness matter? Ethics & Behavior, 31(8), 604-627. |
| [64] | Harkins, S. G. (2006). Mere effort as the mediator of the evaluation-performance relationship. Journal of Personality and Social Psychology, 91(3), 436-455. pmid: 16938029 |
| [65] | Harvey, N., & Fischer, I. (1997). Taking advice: Accepting help, improving judgment, and sharing responsibility. Organizational Behavior and Human Decision Processes, 70, 117-133. |
| [66] | Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York: Guilford Press. |
| [67] | Hertz, N., & Wiese, E. (2019). Good advice is beyond all price, but what if it comes from a machine? Journal of Experimental Psychology: Applied, 25(3), 386-395. |
| [68] | Hertz, S. G., & Krettenauer, T. (2016). Does moral identity effectively predict moral behavior: A meta-analysis. Review of General Psychology, 20(2), 129-140. |
| [69] | Hoch, J. E., Bommer, W. H., Dulebohn, J. H., & Wu, D. (2018). Do ethical, authentic, and servant leadership explain variance above and beyond transformational leadership? A meta-analysis. Journal of Management, 44(2), 501-529. |
| [70] | Höddinghaus, M., Sondern, D., & Hertel, G. (2021). The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior, 116, 106635. |
| [71] | Holford, W. (2022). An ethical inquiry of the effect of cockpit automation on the responsibilities of airline pilots: Dissonance or meaningful control? Journal of Business Ethics, 176(1), 141-157. |
| [72] | Holthöwer, J., & van Doorn, J. (2023). Robots do not judge: Service robots can alleviate embarrassment in service encounters. Journal of the Academy of Marketing Science, 51, 767-784. |
| [73] | Hu, X., Li, M., Wang, D., & Yu, F. (2024). Reactions to immoral AI decisions: The moral deficit effect and its underlying mechanism. Chinese Science Bulletin, 69(11), 1406-1416. |
| [胡小勇, 李穆峰, 王笛新, 喻丰. (2024). 人工智能决策的道德缺失效应及其机制. 科学通报, 69(11), 1406-1416.] | |
| [74] | Hur, J. D., Koo, M., & Hofmann, M. (2015). When temptations come alive: How anthropomorphism undermines self-control. Journal of Consumer Research, 42(2), 340-358. |
| [75] | Inesi, M. E., Adams, G. S., & Gupta, A. (2021). When it pays to be kind: The allocation of indirect reciprocity within power hierarchies. Organizational Behavior and Human Decision Processes, 165, 115-126. |
| [76] | Ivancevich, J. M., Konopaske, R., & Matteson, M. T. (2005). Organizational behavior and management (7th ed.). Boston: McGraw-Hill Irvin. |
| [77] |
Jabagi, N., Croteau, A. M., Audebrand, L. K., & Marsan, J. (2019). Gig-workers’ motivation: Thinking beyond carrots and sticks. Journal of Managerial Psychology, 34(4), 192-213.
doi: 10.1108/JMP-06-2018-0255 |
| [78] | Jackson, J., Yam, K., Tang, P., Liu, T., & Shariff, A. (2023). Exposure to robot preachers undermines religious commitment. Journal of Experimental Psychology: General, 152(12), 3344-3358. |
| [79] | Jago, A. S., Raveendhran, R., Fast, N., & Gratch, J. (2024). Algorithmic management diminishes status: An unintended consequence of using machines to perform social roles. Journal of Experimental Social Psychology, 110, 104553. |
| [80] | Janoff-Bulman, R., Sheikh, S., & Hepp, S. (2009). Proscriptive versus prescriptive morality: Two faces of moral regulation. Journal of Personality and Social Psychology, 96(3), 521-537. doi: 10.1037/a0013779 pmid: 19254101 |
| [81] | Jarrassé, N., Sanguineti, V., & Burdet, E. (2014). Slaves no longer: Review on role assignment for human-robot joint motor action. Adaptive Behavior, 22, 70-82. |
| [82] | Jebari, K., & Lundborg, J. (2021). Artificial superintelligence and its limits: Why AlphaZero cannot become a general agent. AI & Society, 36, 807-815. |
| [83] | Jia, N., Luo, X., Fang, Z., & Liao, C. (2023). When and how artificial intelligence augments employee creativity. Academy of Management Journal, 67(1), 5-32. |
| [84] | Jiang, L., Hoegg, J., Dahl, D. W., & Chattopadhyay, A. (2010). The persuasive role of incidental similarity on attitudes and purchase intentions in a sales context. Journal of Consumer Research, 36(5), 778-791. |
| [85] | Jiang, Z., & Hu, X. (2016). Knowledge sharing and life satisfaction: The roles of colleague relationships and gender. Social Indicators Research, 126(1), 379-394. |
| [86] | Jung, D., Dorner, V., Glaser, F., & Morana, S. (2018). Robo- advisory: Digitalization and automation of financial advisory. Business and Information Systems Engineering, 60(1), 81-86. |
| [87] | Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410. |
| [88] | Kelly, S., Kaye, S., & Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 101925. |
| [89] | Kim, Y., Chiu, C., Peng, S., Cai, H., & Tov, W. (2010). Explaining East-West differences in the likelihood of making favorable self-evaluations: The role of evaluation apprehension and directness of expression. Journal of Cross-Cultural Psychology, 41(1), 62-75. |
| [90] | Kinowska, H., & Sienkiewicz, L. J. (2023). Influence of algorithmic management practices on workplace well-being - Evidence from European organisations. Information Technology & People, 36(8), 21-42. |
| [91] | Kish-Gephart, J. J., Harrison, D. A., & Trevino, L. K. (2010). Bad apples, bad cases, and bad barrels: Meta-analytic evidence about sources of unethical decisions at work. Journal of Applied Psychology, 95(1), 1-31. doi: 10.1037/a0017103 pmid: 20085404 |
| [92] | Knobe, J. (2003). Intentional action and side-effects in ordinary language. Analysis, 63, 190-194. |
| [93] | Kollmuss, A., & Agyeman, J. (2002). Mind the gap: Why do people act environmentally and what are the barriers to pro-environmental behavior? Environmental Education Research, 8(3), 239-260. |
| [94] | Konya-Baumbach, E., Biller, M., & von Janda, S. (2023). Someone out there? A study on the social presence of anthropomorphized chatbots. Computers in Human Behavior, 139, 107513. |
| [95] | Kormos, C., & Gifford, R. (2014). The validity of self-report measures of proenvironmental behavior: A meta-analytic review. Journal of Environmental Psychology, 40, 359-371. |
| [96] | Koslov, K., Mendes, W. B., Pajtas, P. E., & Pizzagalli, D. A. (2011). Asymmetry in resting intracortical activity as a buffer to social threat. Psychological Science, 22(5), 641-649. doi: 10.1177/0956797611403156 pmid: 21467550 |
| [97] | Kuchenbrandt, D., Eyssel, F., Bobinger, S., & Neufeld, M. (2013). When a robot’s group membership matters. International Journal of Social Robotics, 5(3), 409-417. |
| [98] | Ladak, A. (2024). What would qualify an artificial intelligence for moral standing? AI and Ethics, 4(2), 213-228. |
| [99] | Ladeira, W., Perin, M. G., & Santini, F. (2023). Acceptance of service robots: A meta-analysis in the hospitality and tourism industry. Journal of Hospitality Marketing & Management, 32(6), 694-716. |
| [100] | Lam, C. F., Wan, W. H., & Roussin, C. J. (2015). Going the extra mile and feeling energized: An enrichment perspective of organizational citizenship behaviors. Journal of Applied Psychology, 101(3), 379-391. |
| [101] | Lan, H., Tang, X., Ye, Y., & Zhang, H. (2024). Abstract or concrete? The effects of language style and service context on continuous usage intention for AI voice assistants. Humanities and Social Sciences Communications, 11, 99. |
| [102] | Langer, M., & Landers, R. N. (2021). The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers. Computers in Human Behavior, 123, 106878. |
| [103] | Langer, M., König, C. J., & Papathanasiou, M. (2019). Highly automated job interviews: Acceptance under the influence of stakes. International Journal of Selection and Assessment, 27(3), 217-234. |
| [104] | Lanz, L., Briker, R., & Gerpott, F. H. (2024). Employees adhere more to unethical instructions from human than AI supervisors: Complementing experimental evidence with machine learning. Journal of Business Ethics, 189, 625-646. |
| [105] | Larkin, C., Otten, C. D., & Arvai, J. (2021). Paging Dr. JARVIS! Will people accept advice from artificial intelligence for consequential risk management decisions? Journal of Risk Research, 25(4), 407-422. |
| [106] | Larrick, R. P., & Soll, J. B. (2006). Intuitions about combining opinions: Misappreciation of the averaging principle. Management Science, 52(1), 111-127. |
| [107] | Leary, M. R. (1983). A brief version of the Fear of Negative Evaluation Scale. Personality and Social Psychology Bulletin, 9(3), 371-375. |
| [108] | Leary, M. R. (1995). Self-presentation: Impression management and interpersonal behavior. Madison, WI: Brown & Benchmark. |
| [109] | Leary, M. R., & Kowalski, R. M. (1990). Impression management: A literature review and two-component model. Psychological Bulletin, 107(1), 34-47. |
| [110] | Lecher, C. (2019). How Amazon automatically tracks and fires warehouse workers for ‘productivity’: Documents show how the company tracks and terminates workers. The Verge. Retrieved January 25, 2024, from https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations |
| [111] | Lee, J., Lee, D., & Lee, J. (2024). Influence of rapport and social presence with an AI psychotherapy chatbot on users’ self-disclosure. International Journal of Human-Computer Interaction, 40(7), 1620-1631. |
| [112] | Lee, S., Lee, N., & Sah, Y. J. (2020). Perceiving a mind in a chatbot: Effect of mind perception and social cues on co-presence, closeness and intention to use. International Journal of Human Computer Interaction, 36(10), 930-940. |
| [113] | Lee, Z., & Sargeant, A. (2011). Dealing with social desirability bias: An application to charitable giving. European Journal of Marketing, 45(5), 703-719. |
| [114] | Lefkowitz, J. (2006). The constancy of ethics amidst the changing world of work. Human Resource Management Review, 16(2), 245-268. |
| [115] | Lehdonvirta, V. (2018). Flexibility in the gig economy: Managing time on three online piecework platforms. New Technology, Work, and Employment, 33(1), 13-29. |
| [116] | Leicht-Deobald, U., Busch, T., Schank, C., Weibel, A., Schafheitle, S., Wildhaber, I., & Kasper, G. (2019). The challenges of algorithm-based HR decision-making for personal integrity. Journal of Business Ethics, 160, 377-392. doi: 10.1007/s10551-019-04204-w pmid: 31814653 |
| [117] | Lemaignan, S., Fink, J., & Dillenbourg, P. (2014, March). The dynamics of anthropomorphism in robotics. Paper presented at the meeting of Proceedings of the 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI). New York, United States. |
| [118] | Leo, X., & Huh, Y. (2020). Who gets the blame for service failures? Attribution of responsibility toward robot versus human service providers and service firms. Computers in Human Behavior, 113, 106520. |
| [119] | Li, X., & Sung, Y. (2021). Anthropomorphism brings us closer: The mediating role of psychological distance in User-AI assistant interactions. Computers in Human Behavior, 118, 106680. |
| [120] | Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629-650. doi: 10.1093/jcr/ucz013 |
| [121] | Louie, T. A., & Obermiller, C. (2000). Gender stereotypes and social-desirability effects on charity donation. Psychology & Marketing, 17(2), 121-136. |
| [122] | Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015, March). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. Paper presented at the meeting of Proceedings of the 2015 10th ACM/IEEE International Conference on Human- Robot Interaction (HRI), Portland, Oregon, United States. |
| [123] | Malle, B. F., & Scheutz, M. (2017). Moral competence in social robots. In W. Wallach & P. Asaro (Eds.), Machine ethics and robot ethics. London: Routledge. |
| [124] | Maninger, T., & Shank, D. B. (2022). Perceptions of violations by artificial and human actors across moral foundations. Computers in Human Behavior Reports, 5, 100154. |
| [125] | Martin, B. A. S., Jin, H. S., Wang, D., Nguyen, H., Zhan, K., & Wang, Y. X. (2020). The influence of consumer anthropomorphism on attitudes towards artificial intelligence trip advisors. Journal of Hospitality and Tourism Management, 44, 108-111. |
| [126] | May, F., & Monga, A. (2014). When time has a will of its own, the powerless don’t have the will to wait: Anthropomorphism of time can decrease patience. Journal of Consumer Research, 40(5), 924-942. |
| [127] | McCroskey, J. C. (1977). Oral communication apprehension: A summary of recent theory and research. Human Communication Research, 4(1), 78-96. |
| [128] | Mell, J., Lucas, G., Mozgai, S., & Gratch, J. (2020). The effects of experience on deception in human-agent negotiation. Journal of Artificial Intelligence Research, 68, 633-660. |
| [129] | Millet, K., Buehler, F., Du, G., & Kokkoris, M. (2023). Defending humankind: Anthropocentric bias in the appreciation of AI art. Computers in Human Behavior, 143, 107707. |
| [130] | Möhlmann, M., Zalmanson, L., Henfridsson, O., & Gregory, R. W. (2021). Algorithmic management of work on online labor platforms: When matching meets control. MIS Quarterly, 45(4), 1999-2022. |
| [131] | Möslein, F. (2018). Robots in the boardroom: Artificial intelligence and corporate law. In W. Barfield & U. Pagallo (Eds.), Research handbook on the law of artificial intelligence (pp. 649-650). Cheltenham: Edward Elgar Publishing. |
| [132] | Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2), 98-100. |
| [133] | Moritz, J. M., Pomrehn, L., Steinmetz, H., & Wehner, M. (2024, July). Reactions to algorithmic decision-making in human resource management: A meta-analysis. Paper presented at the meeting of Academy of Management Proceedings, Valhalla, New York, United States. |
| [134] | Munnukka, J., Talvitie-Lamberg, K., & Maity, D. (2022). Anthropomorphism and social presence in Human-Virtual service assistant interactions: The role of dialog length and attitudes. Computers in Human Behavior, 135, 107343. |
| [135] | Nass, C., & Moon, Y. (2000). Machines and Mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103. |
| [136] | Newman, D., Fast, N., & Harmon, D. (2020). When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes, 160, 149-167. |
| [137] | Nicholls, J. G. (1984). Achievement motivation: Conceptions of ability, subjective experience, task choice, and performance. Psychological Review, 91(3), 328-346. |
| [138] | Niehoff, B. P., & Moorman, R. H. (1993). Justice as a mediator of the relationship between methods of monitoring and organizational citizenship behavior. Academy of Management Journal, 36(3), 527-556. |
| [139] | Niszczota, P., & Kaszás, D. (2020). Robo-investment aversion. PloS One, 15(9), e0239277. |
| [140] | Noval, L. J., & Stahl, G. K. (2017). Accounting for proscriptive and prescriptive morality in the workplace: The double-edged sword effect of mood on managerial ethical decision making. Journal of Business Ethics, 142(3), 589-602. |
| [141] | Oh, C., Song, J., Choi, J., Kim, S., Lee, S., & Suh, B. (2018, April). I lead, you help but only with enough details: Understanding user experience of co-creation with artificial intelligence. Paper presented at the meeting of Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, Canada. |
| [142] | Paillé, P., & Boiral, O. (2013). Pro-environmental behavior at work: Construct validity and determinants. Journal of Environmental Psychology, 36, 118-128. |
| [143] | Parent-Rocheleau, X., & Parker, S. K. (2022). Algorithms as work designers: How algorithmic management influences the design of jobs. Human Resources Management Review, 32(3), 100838. |
| [144] | Parent-Rocheleau, X., Parker, S. K., Bujold, A., & Gaudet, M.-C. (2024). Creation of the algorithmic management questionnaire: A six-phase scale development process. Human Resource Management, 63, 25-44. |
| [145] | Park, H., Ahn, D., Hosanagar, K., & Lee, J. (2021). Human-AI interaction in human resource management: Understanding why employees resist algorithmic evaluation at workplaces and how to mitigate burdens. Paper presented at the meeting of Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan. |
| [146] | Park, J., Woo, S. E., & Kim, J. (2024). Attitudes towards artificial intelligence at work: Scale development and validation. Journal of Occupational and Organizational Psychology. Advance online publication. |
| [147] | Parry, K., Cohen, M., & Bhattacharya, S. (2016). Rise of the machines: A critical consideration of automated leadership decision making in organizations. Group & Organization Management, 41, 571-594. |
| [148] | Pelau, C., Dabija, D. C., & Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior, 122, 106855. |
| [149] | Pelletier, K. L., & Bligh, M. C. (2008). The aftermath of organizational corruption: Employee attributions and emotional reactions. Journal of Business Ethics, 80(4), 823-844. |
| [150] | Peng, A. C., & Kim, D. (2020). A meta-analytic test of the differential pathways linking ethical leadership to normative conduct. Journal of Organizational Behavior, 41(4), 348-368. |
| [151] | Pickard, M. D., & Roster, C. A. (2020). Using computer automated systems to conduct personal interviews: Does the mere presence of a human face inhibit disclosure? Computers in Human Behavior, 105, 106197. |
| [152] | Pitardi, V., Wirtz, J., Paluch, S., & Kunz, W. H. (2022). Service robots, agency and embarrassing service encounters. Journal of Service Management, 33(2), 389-414. |
| [153] | Podsakoff, N. P., Whiting, S. W., Podsakoff, P. M., & Blume, B. D. (2009). Individual- and organizational-level consequences of organizational citizenship behaviors: A meta-analysis. Journal of Applied Psychology, 94(1), 122-141. doi: 10.1037/a0013079 pmid: 19186900 |
| [154] | Qin, X., Lu, J. G., Chen, C., Zhou, X., Gan, Y., Li, W., & Song, L. L. (2024). Artificial intelligence quotient (AIQ). PsyArXiv Preprints, https://doi.org/10.31234/osf.io/qjm3r |
| [155] | Rader, C. A., Larrick, R. P., & Soll, J. B. (2017). Advice as a form of social influence: Informational motives and the consequences for accuracy. Social Personality Psychology Compass, 11, e12329. |
| [156] | Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation-augmentation paradox. Academy of Management Review, 46(1), 192-210. |
| [157] | Raveendhran, R., & Fast, N. J. (2019). Technology and social evaluation: Implications for individuals and organizations. In R. N. Landers (Ed.), The Cambridge handbook of technology and employee behavior (pp. 921-943). New York: Cambridge University Press. |
| [158] | Raveendhran, R., & Fast, N. J. (2021). Humans judge, algorithms nudge: The psychology of behavior tracking acceptance. Organizational Behavior and Human Decision Processes, 164, 11-26. |
| [159] | Raveendhran, R., Fast, N. J., & Carnevale, P. J. (2020). Virtual (freedom from) reality: Evaluation apprehension and leaders’ preference for communicating through avatars. Computers in Human Behavior, 111, 106415. |
| [160] | Rodell, J. B., Booth, J., Lynch, J., & Zipay, K. (2017). Corporate volunteering climate: Mobilizing employee passion for societal causes and inspiring future charitable action. Academy of Management Journal, 60(5), 1662-1681. |
| [161] | Roesler, E., Manzey, D., & Onnasch, L. (2021). A meta- analysis on the effectiveness of anthropomorphism in human-robot interaction. Science Robotics, 6(58), eabj5425. |
| [162] | Rosenberg, M. J. (1965). When dissonance fails: On eliminating evaluation apprehension from attitude measurement. Journal of Personality and Social Psychology, 1(1), 28-42. |
| [163] | Ruttan, R. L., & Nordgren, L. F. (2021). Instrumental use erodes sacred values. Journal of Personality and Social Psychology, 121(6), 1223-1240. doi: 10.1037/pspi0000343 pmid: 33475398 |
| [164] | Ryan, R. M., & Connell, J. P. (1989). Perceived locus of causality and internalization: Examining reasons for acting in two domains. Journal of Personality and Social Psychology, 57(5), 749-761. doi: 10.1037//0022-3514.57.5.749 pmid: 2810024 |
| [165] | Schlenker, B. R., & Leary, M. R. (1982). Social anxiety and self-presentation: A conceptualization and model. Psychological Bulletin, 92(3), 641-669. doi: 10.1037/0033-2909.92.3.641 pmid: 7156261 |
| [166] | Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47(5), 609-612. |
| [167] | Sen, S., Du, S., & Bhattacharya, C. (2016). Corporate social responsibility: A consumer psychology perspective. Current Opinion in Psychology, 10, 70-75. |
| [168] | Sheeran, P. (2002). Intention-behavior relations: A conceptual and empirical review. European Review of Social Psychology, 12(1), 1-36. |
| [169] | Siemon, B. (2023). Let the computer evaluate your idea: Evaluation apprehension in human-computer collaboration. Behaviour & Information Technology, 42(5), 459-477. |
| [170] | Singh, S., Olson, E. D., & Tsai, C. H. K. (2021). Use of service robots in an event setting: Understanding the role of social presence, eeriness, and identity threat. Journal of Hospitality and Tourism Management, 49, 528-537. |
| [171] | Smith, E. R., Šabanović, S., & Fraune, M. R. (2021). Human- robot interaction through the lens of social psychological theories of intergroup behavior. Technology, Mind, Behavior, 1(2), 2. |
| [172] | Sniezek, J. A., & Buckley, T. (1995). Cueing and cognitive conflict in judge-advisor decision making. Organizational Behavior and Human Decision Processes, 62, 159-174. |
| [173] | Sniezek, J. A., & van Swol, L. M. (2001). Trust, confidence, and expertise in a judge-advisor system. Organizational Behavior and Human Decision Processes, 84, 288-307. pmid: 11277673 |
| [174] | Soll, J. B., & Larrick, R. P. (2009). Strategies for revising judgment: How (and how well) people use others’ opinions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 780-805. |
| [175] | Song, H., Barakova, E. I., Ham, J., & Markopoulos, P. (2024). The impact of social robots’ presence and roles on children’s performance in musical instrument practice. British Journal of Educational Technology, 55(3), 1041-1059. |
| [176] | Spencer, S. J., Steele, C. M., & Quinn, D. M. (1999). Stereotype threat and women's math performance. Journal of Experimental Social Psychology, 35(1), 4-28. |
| [177] | Stanley, M. L., & Kay, A. C. (2024). The consequences of heroization for exploitation. Journal of Personality and Social Psychology: Attitudes and Social Cognition, 126(1), 5-25. |
| [178] | Stanley, M. L., Neck, C. B., & Neck, C. P. (2023a). Loyal workers are selectively and ironically targeted for exploitation. Journal of Experimental Social Psychology, 106, 104442. |
| [179] | Stanley, M. L., Neck, C. P., & Neck, C. B. (2023b). The dark side of generosity: Employees with a reputation for giving are selectively targeted for exploitation. Journal of Experimental Social Psychology, 108, 104503. |
| [180] | Stikvoort, B., Lindahl, T., & Daw, T. M. (2016). Thou shalt not sell nature: How taboo trade-offs can make us act pro-environmentally, to clear our conscience. Ecological Economics, 129, 252-259. |
| [181] | Sun, J., Liden, R. C., & Ouyang, L. (2019). Are servant leaders appreciated? An investigation of how relational attributions influence employee feelings of gratitude and prosocial behaviors. Journal of Organizational Behavior, 40(5), 528-540. |
| [182] | Tang, P. M., Koopman, J., Elfenbein, H. A., Zhang, J. H., de Cremer, D., Li, C. H., & Chan, E. T. (2022). Using robots at work during the COVID-19 crisis evokes passion decay: Evidence from field and experimental studies. Applied Psychology: An International Review, 71(3), 881-911. |
| [183] | Tang, P. M., Koopman, J., Mai, K. M., de Cremer, D., Zhang, J. H., Reynders, P., … Chen, I. H. (2023). No person is an island: Unpacking the work and after-work consequences of interacting with artificial intelligence. Journal of Applied Psychology, 108(11), 1766-1789. doi: 10.1037/apl0001103 pmid: 37307359 |
| [184] | Tang, P. M., Koopman, J., Yam, K. C., de Cremer, D., Zhang, J. H., & Reynders, P. (2023). The self-regulatory consequences of dependence on intelligent machines at work: Evidence from field and experimental studies. Human Resource Management, 62(5), 721-744. |
| [185] | Tangney, J. P. (1992). Situational determinants of shame and guilt in young adulthood. Personality and Social Psychology Bulletin, 18(2), 199-206. |
| [186] | Tetlock, P. E. (2003). Thinking the unthinkable: Sacred values and taboo cognitions. Trends in Cognitive Sciences, 7(7), 320-324. pmid: 12860191 |
| [187] | Tomprou, M., & Lee, M. K. (2022). Employment relationships in algorithmic management: A psychological contract perspective. Computers in Human Behavior, 126, 106997. |
| [188] | Tong, S., Jia, N., Luo, X., & Fang, Z. (2021). The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance. Strategic Management Journal, 42(9), 1600-1631. |
| [189] | Treviño, L. K., Weaver, G. R., & Reynolds, S. J. (2006). Behavioral ethics in organizations: A review. Journal of Management, 32(6), 951-990. |
| [190] | Tsai, C., Marshall, J. D., Choudhury, A., Serban, A., Hou, Y. T., Jung, M. F., … Yammarino, F. J. (2022). Human-robot collaboration: A multilevel and integrated leadership framework. The Leadership Quarterly, 33, 101594. |
| [191] | Uysal, E., Alavi, S., & Bezençon, V. (2023). Anthropomorphism in artificial intelligence: A review of empirical work across domains and insights for future research. Artificial Intelligence in Marketing, 20, 273-308. |
| [192] | van Beurden, J., van de Voorde, K., & van Veldhoven, M. (2021). The employee perspective on HR practices: A systematic literature review, integration and outlook. The International Journal of Human Resource Management, 32(2), 359-393. |
| [193] | van Boven, L., Loewenstein, G., & Dunning, D. (2005). The illusion of courage in social predictions: Underestimating the impact of fear of embarrassment on other people. Organizational Behavior and Human Decision Processes, 96, 130-141. |
| [194] | van Doorn, J., Mende, M., Noble, S. M., Hulland, J., Ostrom, A. L., Grewal, D., & Petersen, J. A. (2017). Domo Arigato, Mr. Roboto: Emergence of automated social presence in organizational frontlines and customers’ service experiences. Journal of Service Research, 20(1), 43-58. |
| [195] | van Zoonen, W., Sivunen, A. E., & Treem, J. W. (2024). Algorithmic management of crowdworkers: Implications for workers’ identity, belonging, and meaningfulness of work. Computers in Human Behavior, 152, 108089. |
| [196] | Veetikazhi, R., Kamalanabhan, T. J., Malhotra, P., Arora, R., & Mueller, A. (2022). Unethical employee behaviour: A review and typology. The International Journal of Human Resource Management, 33(10), 1976-2018. |
| [197] | von Krogh, G. (2018). Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing. Academy of Management Discoveries, 4(4), 404-409. |
| [198] | Wang, Y., & Chuang, Y. (2024). Artificial intelligence self-efficacy: Scale development and validation. Education and Information Technologies, 29, 4785-4808. |
| [199] | Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6), 1063-1070. doi: 10.1037//0022-3514.54.6.1063 pmid: 3397865 |
| [200] | Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5(3), 219-232. doi: 10.1177/1745691610369336 pmid: 24839457 |
| [201] | Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113-117. |
| [202] | Weisman, K., Dweck, C. S., & Markman, E. M. (2017). Rethinking people’s conceptions of mental life. Proceedings of the National Academy of Sciences, 114(43), 11374-11379. |
| [203] | Wesche, J. S., & Sonderegger, A. (2019). When computers take the lead: The automation of leadership. Computers in Human Behavior, 101, 197-209. doi: 10.1016/j.chb.2019.07.027 |
| [204] | Wieland, B., de Wit, J., & de Rooij, A. (2022). Electronic brainstorming with a Chatbot partner: A good idea due to increased productivity and idea diversity. Frontiers in Artificial Intelligence, 5, 880673. |
| [205] | Wu, M., Wang, N., & Yuen, K. (2023). Deep versus superficial anthropomorphism: Exploring their effects on human trust in shared autonomous vehicles. Computers in Human Behavior, 141, 107614. |
| [206] | Xu, L., & Yu, F. (2020). Factors that influence robot acceptance. Chinese Science Bulletin, 65(6), 496-510. |
| [许丽颖, 喻丰. (2020). 机器人接受度的影响因素. 科学通报, 65(6), 496-510.] | |
| [207] | Xu, L., Mehta, R., & Dahl, D. W. (2021). Leveraging creativity in charity marketing: The impact of engaging in creative activities on subsequent donation behavior. Journal of Marketing, 86(5), 79-94. |
| [208] | Xu, L., Yu, F., & Peng, K. (2022). Algorithmic discrimination causes less desire for moral punishment than human discrimination. Acta Psychologica Sinica, 54(9), 1076-1092. doi: 10.3724/SP.J.1041.2022.01076 |
| [许丽颖, 喻丰, 彭凯平. (2022). 算法歧视比人类歧视引起更少道德惩罚欲. 心理学报, 54(9), 1076-1092.] | |
| [209] | Xu, L., Yu, F., Wu, J., Han, T., & Zhao, L. (2017). Anthropomorphism: Antecedents and consequences. Advances in Psychological Science, 25(11), 1942-1954. doi: 10.3724/SP.J.1042.2017.01942 |
| [许丽颖, 喻丰, 邬家骅, 韩婷婷, 赵靓. (2017). 拟人化: 从“它”到“他”. 心理科学进展, 25(11), 1942-1954.] | |
| [210] | Yam, K., Bigman, Y. E., Tang, P., Ilies, R., de Cremer, D., & Soh, H. (2021). Robots at work: People prefer - and forgive - service robots with perceived feelings. Journal of Applied Psychology, 106(10), 1557-1572. |
| [211] | Yaniv, I. (2004). Receiving other people’s advice: Influence and benefit. Organizational Behavior and Human Decision Processes, 93, 1-13. |
| [212] |
Yaniv, I., & Kleinberger, E. (2000). Advice taking in decision making: Egocentric discounting and reputation formation. Organizational Behavior and Human Decision Processes, 83, 260-281.
pmid: 11056071 |
| [213] | Yaniv, I., & Milyavsky, M. (2007). Using advice from multiple sources to revise and improve judgment. Organizational Behavior and Human Decision Processes, 103, 104-120. |
| [214] | Yaniv, I., Choshen-Hillel, S., & Milyavsky, M. (2011). Receiving advice on matters of taste: Similarity, majority influence, and taste discrimination. Organizational Behavior and Human Decision Processes, 115, 111-120. |
| [215] | Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making, 32(4), 403-414. doi: 10.1002/bdm.2118 |
| [216] | Young, A. D., & Monroe, A. E. (2019). Autonomous morals: Inferences of mind predict acceptance of AI behavior in sacrificial moral dilemmas. Journal of Experimental Social Psychology, 85, 103870. |
| [217] | Yu, F., & Xu, L. (2018). How to make an ethical artificial intelligence? Answer from a psychological perspective. Global Media Journal, 5(4), 24-42. |
| [喻丰, 许丽颖. (2018). 如何做出道德的人工智能体——心理学的视角. 全球传媒学刊, 5(4), 24-42.] | |
| [218] | Yu, H., Miao, C., Chen, Y., Fauvel, S., Li, X., & Lesser, V. (2017). Algorithmic management for improving collective productivity in crowdsourcing. Scientific Reports, 7(1), 12541. |
| [219] | Yukl, G. (2013). Leadership in organizations (8th ed.). Edinburgh Gate: Pearson Education. |
| [220] | Zhao, Y., Xu, L., Yu, F., & Jin, W. (2024). Perceived opacity leads to algorithm aversion in the workplace. Acta Psychologica Sinica, 56(4), 497-514. doi: 10.3724/SP.J.1041.2024.00497 |
| [赵一骏, 许丽颖, 喻丰, 金旺龙. (2024). 感知不透明性增加职场中的算法厌恶. 心理学报, 56(4), 497-514.] | |
| [221] | Zhou, X., Zhai, H., Delidabieke, B., Zeng, H., Cui, Y., & Cao, X. (2019). Exposure to ideas, evaluation apprehension, and incubation intervals in collaborative idea generation. Frontiers in Psychology, 10, Article 1459. doi: 10.3389/fpsyg.2019.01459 pmid: 31333531 |
| [222] | Zhou, Y., Fei, Z., He, Y., & Yang, Z. (2022). How human- chatbot interaction impairs charitable giving: The role of moral judgment. Journal of Business Ethics, 178, 849-865. |