Acta Psychologica Sinica ›› 2026, Vol. 58 ›› Issue (1): 74-95. doi: 10.3724/SP.J.1041.2026.0074. cstr: 32110.14.2026.0074
Moral deficiency in AI decision-making: Underlying mechanisms and mitigation strategies

HU Xiaoyong1, LI Mufeng2, LI Yue1, LI Kai1, YU Feng1
Corresponding author: YU Feng, Email: psychpedia@whu.edu.cn
Received: 2025-04-06
Online: 2025-10-28
Published: 2026-01-25
Abstract:
As artificial intelligence plays an increasingly prominent role in major decisions, the moral issues it raises have drawn growing attention. Integrating mind perception theory with moral dyad theory, this research systematically reveals the dual-pathway mechanism underlying the moral deficiency effect of AI decision-making, along with strategies for counteracting it. The studies find that people's moral reactions to immoral decisions are significantly weaker when the decision-maker is an AI rather than a human; that, relative to human decision-makers, the lower agency and experience people perceive in AI account for this moral deficiency effect; and that a combined intervention, pairing an anthropomorphization strategy targeting the AI with an expectation-adjustment strategy targeting the human observer, significantly strengthens people's moral reactions to AI decisions. Unlike other disciplines, which focus on design-level principles and methods for fair algorithms, this research takes a psychological perspective and examines how people's psychological reactions differ between AI and human decisions. This perspective not only offers new ideas for addressing the social problems caused by algorithmic bias and for building fair algorithms, but also extends the theoretical boundaries of research on "algorithm ethics."
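Stated formally, the dual-pathway mechanism described in the abstract corresponds to a standard parallel mediation model. The equations below are a minimal formalization under that conventional setup; the coefficient symbols are illustrative, not values reported in the paper:

$$
\begin{aligned}
M_{1} &= i_{1} + a_{1}X + e_{1} \quad &&\text{(perceived agency)}\\
M_{2} &= i_{2} + a_{2}X + e_{2} \quad &&\text{(perceived experience)}\\
Y &= i_{3} + c'X + b_{1}M_{1} + b_{2}M_{2} + e_{3} \quad &&\text{(moral reaction)}
\end{aligned}
$$

Here $X$ codes the decision agent (AI vs. human), and the indirect effects $a_{1}b_{1}$ and $a_{2}b_{2}$ carry the moral deficiency effect through perceived agency and perceived experience, respectively.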
HU Xiaoyong, LI Mufeng, LI Yue, LI Kai, YU Feng. (2026). Moral deficiency in AI decision-making: Underlying mechanisms and mitigation strategies. Acta Psychologica Sinica, 58(1), 74-95.
Table 1. Descriptive statistics of moral reactions and dimension scores across decision agents and discrimination contexts

| Dependent variable | Discrimination context | Decision agent | M | SD |
|---|---|---|---|---|
| Moral reaction | Education | AI | 4.14 | 1.38 |
| Moral reaction | Education | Human | 5.05 | 1.25 |
| Moral reaction | Age | AI | 4.39 | 1.39 |
| Moral reaction | Age | Human | 5.30 | 1.06 |
| Moral reaction | Gender | AI | 4.49 | 1.48 |
| Moral reaction | Gender | Human | 5.28 | 1.23 |
| Moral cognition | Age | AI | 4.73 | 1.27 |
| Moral cognition | Age | Human | 5.18 | 0.99 |
| Moral cognition | Gender | AI | 4.78 | 1.32 |
| Moral cognition | Gender | Human | 5.22 | 1.10 |
| Moral cognition | Education | AI | 4.53 | 1.28 |
| Moral cognition | Education | Human | 5.14 | 1.21 |
| Moral emotion | Age | AI | 4.26 | 1.60 |
| Moral emotion | Age | Human | 5.44 | 1.27 |
| Moral emotion | Gender | AI | 4.44 | 1.70 |
| Moral emotion | Gender | Human | 5.39 | 1.45 |
| Moral emotion | Education | AI | 4.02 | 1.51 |
| Moral emotion | Education | Human | 5.13 | 1.39 |
| Moral behavior | Age | AI | 4.08 | 1.66 |
| Moral behavior | Age | Human | 5.31 | 1.29 |
| Moral behavior | Gender | AI | 4.18 | 1.77 |
| Moral behavior | Gender | Human | 5.28 | 1.45 |
| Moral behavior | Education | AI | 3.75 | 1.70 |
| Moral behavior | Education | Human | 4.86 | 1.51 |
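To help read Table 1, the AI−human gap on any row can be summarized as a standardized mean difference (Cohen's d). Below is a minimal sketch assuming equal group sizes and a pooled SD; the input values come from the first two rows of Table 1, but the paper's own effect sizes may be computed differently:

```python
import math

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d with a pooled SD, assuming equal group sizes."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m2 - m1) / pooled_sd

# Moral reaction in the education-discrimination context (Table 1):
# AI: M = 4.14, SD = 1.38; human: M = 5.05, SD = 1.25
print(round(cohens_d(4.14, 1.38, 5.05, 1.25), 2))  # ≈ 0.69, a medium-to-large gap
```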
Table 2. Confirmatory factor analysis results

| Model | χ² | df | χ²/df | CFI | TLI | RMSEA | SRMR |
|---|---|---|---|---|---|---|---|
| One-factor model | 2749.91 | 189 | 14.55 | 0.70 | 0.67 | 0.19 | 0.14 |
| Two-factor model | 1030.31 | 188 | 5.49 | 0.90 | 0.89 | 0.11 | 0.04 |
| Five-factor model | 519.87 | 179 | 2.90 | 0.96 | 0.95 | 0.07 | 0.03 |
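Table 2 is read against conventional fit cutoffs (e.g., Hu & Bentler, 1999: CFI and TLI near or above 0.95, RMSEA ≤ 0.08, SRMR ≤ 0.08), and only the five-factor model clears them. The small checker below encodes those cutoffs; the specific thresholds are common conventions assumed here, not criteria stated by the paper:

```python
def acceptable_fit(chi2, df, cfi, tli, rmsea, srmr):
    """Check common rule-of-thumb cutoffs for SEM fit indices."""
    return (chi2 / df < 5          # χ²/df: a frequently used rule of thumb
            and cfi >= 0.95 and tli >= 0.95
            and rmsea <= 0.08 and srmr <= 0.08)

print(acceptable_fit(2749.91, 189, 0.70, 0.67, 0.19, 0.14))  # one-factor: False
print(acceptable_fit(519.87, 179, 0.96, 0.95, 0.07, 0.03))   # five-factor: True
```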
Table 3. Descriptive statistics and correlations among variables

| Variable | M | SD | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|---|
| 1. Decision agent | 0.51 | 0.50 | 1 | | | |
| 2. Perceived agency | 5.16 | 1.76 | 0.75*** | 1 | | |
| 3. Perceived experience | 4.95 | 1.93 | 0.80*** | 0.83*** | 1 | |
| 4. Moral reaction | 5.38 | 1.31 | 0.42*** | 0.48*** | 0.47*** | 1 |
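The dual-pathway claim behind Table 3 is the kind of hypothesis typically tested with a bootstrapped parallel mediation analysis. The sketch below runs such a test on simulated data so that it stays self-contained; the variable names mirror Table 3, but every number in it is illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated stand-ins for the Table 3 variables (illustrative only):
# X: decision agent dummy (0 = AI, 1 = human), M1: perceived agency,
# M2: perceived experience, Y: moral reaction.
X = rng.integers(0, 2, n).astype(float)
M1 = 0.8 * X + rng.normal(size=n)
M2 = 0.9 * X + rng.normal(size=n)
Y = 0.3 * X + 0.4 * M1 + 0.3 * M2 + rng.normal(size=n)

def ols(y, *cols):
    """Least-squares coefficients (intercept first) of y on the given columns."""
    A = np.column_stack([np.ones_like(y), *cols])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def indirect_effects(X, M1, M2, Y):
    """Indirect effects a1*b1 (agency path) and a2*b2 (experience path)."""
    a1 = ols(M1, X)[1]
    a2 = ols(M2, X)[1]
    _, _, b1, b2 = ols(Y, X, M1, M2)  # intercept, c', b1, b2
    return a1 * b1, a2 * b2

# Percentile bootstrap over the two indirect paths.
boot = np.array([
    indirect_effects(X[idx], M1[idx], M2[idx], Y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("agency path 95% CI:", lo[0].round(3), hi[0].round(3))
print("experience path 95% CI:", lo[1].round(3), hi[1].round(3))
```

In the real analysis, X, M1, M2, and Y would be the measured decision-agent dummy, perceived agency, perceived experience, and moral reaction scores; a confidence interval excluding zero on a path supports mediation through that pathway.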