Acta Psychologica Sinica ›› 2026, Vol. 58 ›› Issue (1): 74-95. doi: 10.3724/SP.J.1041.2026.0074
• Reports of Empirical Studies •

Moral deficiency in AI decision-making: Underlying mechanisms and mitigation strategies

HU Xiaoyong1, LI Mufeng2, LI Yue1, LI Kai1, YU Feng1
Received: 2025-04-06
Published: 2026-01-25
Online: 2025-10-28
Contact: YU Feng, Email: psychpedia@whu.edu.cn
Note: The original article is in Chinese. The Chinese version shall always prevail in case of any discrepancy or inconsistency between the Chinese version and its English translation.
HU Xiaoyong, LI Mufeng, LI Yue, LI Kai, YU Feng. (2026). Moral deficiency in AI decision-making: Underlying mechanisms and mitigation strategies. Acta Psychologica Sinica, 58(1), 74-95.
URL: https://journal.psych.ac.cn/acps/EN/10.3724/SP.J.1041.2026.0074
| Variable | Discrimination | Agent | M | SD |
|---|---|---|---|---|
| Moral response | Education | AI | 4.14 | 1.38 |
| | | Human | 5.05 | 1.25 |
| | Age | AI | 4.39 | 1.39 |
| | | Human | 5.30 | 1.06 |
| | Gender | AI | 4.49 | 1.48 |
| | | Human | 5.28 | 1.23 |
| Moral cognition | Education | AI | 4.73 | 1.27 |
| | | Human | 5.18 | 0.99 |
| | Age | AI | 4.78 | 1.32 |
| | | Human | 5.22 | 1.10 |
| | Gender | AI | 4.53 | 1.28 |
| | | Human | 5.14 | 1.21 |
| Moral emotion | Education | AI | 4.26 | 1.60 |
| | | Human | 5.44 | 1.27 |
| | Age | AI | 4.44 | 1.70 |
| | | Human | 5.39 | 1.45 |
| | Gender | AI | 4.02 | 1.51 |
| | | Human | 5.13 | 1.39 |
| Moral behavior | Education | AI | 4.08 | 1.66 |
| | | Human | 5.31 | 1.29 |
| | Age | AI | 4.18 | 1.77 |
| | | Human | 5.28 | 1.45 |
| | Gender | AI | 3.75 | 1.70 |
| | | Human | 4.86 | 1.51 |
Table 1 Descriptive statistics of moral responses and scores across dimensions for different agents in discrimination scenarios
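The cell means and standard deviations in Table 1 aggregate participant ratings by discrimination scenario and decision agent. Below is a minimal sketch of that aggregation in Python with pandas; the column names (`discrimination`, `agent`, `moral_response`) and the illustrative values are our assumptions, not the authors' dataset.

```python
import pandas as pd

# Hypothetical long-format ratings: one row per participant response.
# Column names and values are illustrative, not the authors' data.
df = pd.DataFrame({
    "discrimination": ["Education"] * 4 + ["Age"] * 4,
    "agent":          ["AI", "AI", "Human", "Human"] * 2,
    "moral_response": [4.0, 4.3, 5.0, 5.1, 4.3, 4.5, 5.2, 5.4],
})

# Mean (M) and standard deviation (SD) per scenario x agent cell,
# mirroring the layout of Table 1.
table1 = (
    df.groupby(["discrimination", "agent"])["moral_response"]
      .agg(M="mean", SD="std")
      .round(2)
)
print(table1)
```

The same groupby applies to the moral cognition, emotion, and behavior measures by swapping in the corresponding rating column.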
| Model | χ² | df | χ²/df | CFI | TLI | RMSEA | SRMR |
|---|---|---|---|---|---|---|---|
| One-factor model | 2749.91 | 189 | 14.55 | 0.70 | 0.67 | 0.19 | 0.14 |
| Two-factor model | 1030.31 | 188 | 5.49 | 0.90 | 0.89 | 0.11 | 0.04 |
| Five-factor model | 519.87 | 179 | 2.90 | 0.96 | 0.95 | 0.07 | 0.03 |
Table 2 Results of Confirmatory Factor Analysis
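Table 2 compares one-, two-, and five-factor confirmatory factor analysis (CFA) models; only the five-factor solution satisfies conventional cutoffs (χ²/df < 3, CFI/TLI ≥ 0.95, RMSEA and SRMR below 0.08). A minimal sketch of such a model comparison in Python with the semopy package follows; the factor and indicator names and the data file are hypothetical placeholders, and the authors' actual analysis toolchain is not stated here.

```python
import pandas as pd
import semopy

# Hypothetical five-factor CFA specification in lavaan-style syntax.
# Factor and indicator names are illustrative placeholders.
five_factor = """
cognition  =~ cog1 + cog2 + cog3
emotion    =~ emo1 + emo2 + emo3
behavior   =~ beh1 + beh2 + beh3
efficacy   =~ eff1 + eff2 + eff3
experience =~ exp1 + exp2 + exp3
"""

data = pd.read_csv("item_responses.csv")  # placeholder file name

model = semopy.Model(five_factor)
model.fit(data)

# calc_stats reports chi-square, degrees of freedom, CFI, TLI, RMSEA, etc.;
# chi2/df is obtained by dividing the two reported values.
print(semopy.calc_stats(model).T)
```

Fitting the competing one- and two-factor specifications over the same items and comparing the resulting indices reproduces the structure of Table 2.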
| Variables | M | SD | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|---|
| 1. Decision-maker | 0.51 | 0.50 | 1 | | | |
| 2. Perceived efficacy | 5.16 | 1.76 | 0.75*** | 1 | | |
| 3. Perceived experience | 4.95 | 1.93 | 0.80*** | 0.83*** | 1 | |
| 4. Moral response | 5.38 | 1.31 | 0.42*** | 0.48*** | 0.47*** | 1 |

Note. *** p < 0.001.
Table 3 Descriptive Statistics and Correlations among Variables
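Because decision-maker in Table 3 is a binary dummy-coded variable (M = 0.51, SD = 0.50), its entries are point-biserial correlations, which equal Pearson correlations computed on the 0/1 coding. A minimal sketch with simulated stand-in data; the variable names, the 0 = AI / 1 = human coding, and the generated values are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated stand-in data; 'decision_maker' is dummy-coded
# (coding assumed here: 0 = AI, 1 = human).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "decision_maker":     rng.integers(0, 2, 200),
    "perceived_efficacy": rng.normal(5.16, 1.76, 200),
    "moral_response":     rng.normal(5.38, 1.31, 200),
})

def star(p):
    """Conventional significance stars for a p-value."""
    return "***" if p < .001 else "**" if p < .01 else "*" if p < .05 else ""

# Pairwise Pearson correlations with significance stars, as in Table 3.
pairs = [("decision_maker", "perceived_efficacy"),
         ("decision_maker", "moral_response"),
         ("perceived_efficacy", "moral_response")]
for a, b in pairs:
    r, p = stats.pearsonr(df[a], df[b])
    print(f"{a} vs {b}: r = {r:.2f}{star(p)}")
```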