Acta Psychologica Sinica ›› 2026, Vol. 58 ›› Issue (3): 399-415. doi: 10.3724/SP.J.1041.2026.0399
DAI Yiqing1, MA Xinming2, WU Zhen1,3
Received: 2025-05-10
Published: 2026-03-25
Online: 2025-12-26
DAI Yiqing, MA Xinming, WU Zhen. (2026). LLMs amplify gendered empathy stereotypes and influence major and career recommendations. Acta Psychologica Sinica, 58(3), 399-415.
URL: https://journal.psych.ac.cn/acps/EN/10.3724/SP.J.1041.2026.0399
Related articles in this journal:
[1] ZHOU Lei, LI Litong, WANG Xu, OU Huafeng, HU Qianyu, LI Aimei, GU Chenyan. Large language models capable of distinguishing between single and repeated gambles: Understanding and intervening in risky choice [J]. Acta Psychologica Sinica, 2026, 58(3): 416-436.
[2] WU Shiyu, WANG Yiyun. “Zero-Shot Language Learning”: Can Large Language Models “Acquire” Contextual Emotion Like Humans? [J]. Acta Psychologica Sinica, 2026, 58(2): 308-322.
[3] JIAO Liying, LI Chang-Jin, CHEN Zhen, XU Hengbin, XU Yan. When AI “possesses” personality: Roles of good and evil personalities influence moral judgment in large language models [J]. Acta Psychologica Sinica, 2025, 57(6): 929-946.
[4] GAO Chenghai, DANG Baobao, WANG Bingjie, WU Michael Shengtao. The linguistic strength and weakness of artificial intelligence: A comparison between Large Language Model(s) and real students in the Chinese context [J]. Acta Psychologica Sinica, 2025, 57(6): 947-966.
[5] ZHANG Yanbo, HUANG Feng, MO Liuling, LIU Xiaoqian, ZHU Tingshao. Suicidal ideation data augmentation and recognition technology based on large language models [J]. Acta Psychologica Sinica, 2025, 57(6): 987-1000.
[6] HUA Shan, JIANG Xintong, GAO Yangzhenyu, MU Yan, DU Yi. The impacts of music training and music sophistication on empathy [J]. Acta Psychologica Sinica, 2025, 57(4): 544-558.
[7] HUANG Feng, DING Huimin, LI Sijia, HAN Nuo, DI Yazheng, LIU Xiaoqian, ZHAO Nan, LI Linyan, ZHU Tingshao. Self-help AI psychological counseling system based on large language models and its effectiveness evaluation [J]. Acta Psychologica Sinica, 2025, 57(11): 2022-2042.
[8] WANG Lili, ZHANG Xuan, CHEN Hanyu. Remembering the past makes consumers easier to forgive: The influence of nostalgia on forgiveness and its internal mechanism in service failure [J]. Acta Psychologica Sinica, 2024, 56(4): 515-530.
[9] ZHANG Wenyun, ZHUO Shiwei, ZHENG Qianqian, GUAN Yinglin, PENG Weiwei. Autistic traits influence pain empathy: The mediation role of pain-related negative emotion and cognition [J]. Acta Psychologica Sinica, 2023, 55(9): 1501-1517.
[10] MENG Xianxin, YU Delin, CHEN Yijing, ZHANG Lin, FU Xiaolan. Association between childhood maltreatment and empathy: A three-level meta-analytic review [J]. Acta Psychologica Sinica, 2023, 55(8): 1285-1300.
[11] GUO Xiao-dong, ZHENG Hong, RUAN Dun, HU Ding-ding, WANG Yi, WANG Yan-yu, Raymond C. K. CHAN. Associations between empathy and negative affect: Effect of emotion regulation [J]. Acta Psychologica Sinica, 2023, 55(6): 892-904.
[12] XU Kepeng, OU Qianqian, XUE Hong, LUO Dongli, ZHANG Shuyue, XU Yan. Traditional pettism: The influence of pet ownership status, pet type, and pet properties on pet moral standing [J]. Acta Psychologica Sinica, 2023, 55(10): 1662-1676.
[13] DENG Chenglong, GENG Peng, KUAI Shuguang. The different characteristics of human performance in selecting receding and approaching targets by rotating the head in a 3D virtual environment [J]. Acta Psychologica Sinica, 2023, 55(1): 9-21.
[14] HUANG Xinjie, ZHANG Chi, WAN Huagen, ZHANG Lingcong. Effect of predictability of emotional valence on temporal binding [J]. Acta Psychologica Sinica, 2023, 55(1): 36-44.
[15] YANG Jimei, CHAI Jieyu, QIU Tianlong, QUAN Xiaoshan, ZHENG Maoping. Relationship between empathy and emotion recognition in Chinese national music: An event-related potential study [J]. Acta Psychologica Sinica, 2022, 54(10): 1181-1192.