[1] Acerbi, A., & Stubbersfield, J. M. (2023). Large language models show human-like content biases in transmission chain experiments. Proceedings of the National Academy of Sciences, 120(44), Article e2313790120. https://doi.org/10.1073/pnas.2313790120
[2] Bai, X., Wang, A., Sucholutsky, I., & Griffiths, T. L. (2025). Explicitly unbiased large language models still form biased associations. Proceedings of the National Academy of Sciences, 122(8), Article e2416228122. https://doi.org/10.1073/pnas.2416228122
[3] Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1-48.
[4] Block, K., Croft, A., & Schmader, T. (2018). Worth less?: Why men (and women) devalue care-oriented careers. Frontiers in Psychology, 9, Article 1353. https://doi.org/10.3389/fpsyg.2018.01353
[5] Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. arXiv preprint arXiv:1607.06520.
[6] Bridgstock, R. (2009). The graduate attributes we've overlooked: Enhancing graduate employability through career management skills. Higher Education Research & Development, 28(1), 31-44.
[7] Cai, Y., Cao, D., Guo, R., Wen, Y., Liu, G., & Chen, E. (2024, August). Locating and mitigating gender bias in large language models. In International Conference on Intelligent Computing (ICIC) (pp. 471-482). Tianjin, China.
[8] Chaturvedi, S., & Chaturvedi, R. (2025). Who gets the callback? Generative AI and gender bias. arXiv preprint arXiv:2504.21400.
[9] Chen, Y., Liu, T. X., Shan, Y., & Zhong, S. (2023). The emergence of economic rationality of GPT. Proceedings of the National Academy of Sciences, 120(51), Article e2316205120. https://doi.org/10.1073/pnas.2316205120
[10] Cheng, M., Durmus, E., & Jurafsky, D. (2023). Marked personas: Using natural language prompts to measure stereotypes in language models. arXiv preprint arXiv:2305.18189.
[11] Cheung, V., Maier, M., & Lieder, F. (2025). Large language models show amplified cognitive biases in moral decision-making. Proceedings of the National Academy of Sciences, 122(25), Article e2412015122. https://doi.org/10.1073/pnas.2412015122
[12] Christov-Moore, L., Simpson, E. A., Coudé, G., Grigaityte, K., Iacoboni, M., & Ferrari, P. F. (2014). Empathy: Gender effects in brain and behavior. Neuroscience & Biobehavioral Reviews, 46, 604-627.
[13] Croft, A., Schmader, T., & Block, K. (2015). An underexamined inequality: Cultural and psychological barriers to men's engagement with communal roles. Personality and Social Psychology Review, 19(4), 343-370.
[14] Dastin, J. (2022). Amazon scraps secret AI recruiting tool that showed bias against women. In K. Martin (Ed.), Ethics of data and analytics (pp. 296-299). Auerbach Publications.
[15] Decety, J. (2010). The neurodevelopment of empathy in humans. Developmental Neuroscience, 32(4), 257-267.
[16] De Waal, F. B. M. (2008). Putting the altruism back into altruism: The evolution of empathy. Annual Review of Psychology, 59, 279-300.
[17] Dong, W., Zhunis, A., Jeong, D., Chin, H., Han, J., & Cha, M. (2024). Persona setting pitfall: Persistent outgroup biases in large language models arising from social identity adoption. arXiv preprint arXiv:2409.03843.
[18] Eagly, A. H., & Koenig, A. M. (2021). The vicious cycle linking stereotypes and social roles. Current Directions in Psychological Science, 30(4), 343-350.
[19] Eagly, A. H., & Steffen, V. J. (1984). Gender stereotypes stem from the distribution of women and men into social roles. Journal of Personality and Social Psychology, 46(4), 735-754.
[20] Eagly, A. H., & Wood, W. (2012). Social role theory. In P. A. M. Van Lange, A. W. Kruglanski, & E. T. Higgins (Eds.), Handbook of theories of social psychology (Vol. 2, pp. 458-476). Sage Publications Ltd.
[21] Eccles, J. (2011). Gendered educational and occupational choices: Applying the Eccles et al. model of achievement-related choices. International Journal of Behavioral Development, 35(3), 195-201.
[22] Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of bias in large language models. First Monday, 28(11).
[23] Glickman, M., & Sharot, T. (2025). How human-AI feedback loops alter human perceptual, emotional and social judgements. Nature Human Behaviour, 9(2), 345-359.
[24] Gross, N. (2023). What ChatGPT tells us about gender: A cautionary tale about performativity and gender biases in AI. Social Sciences, 12(8), Article 435. https://doi.org/10.3390/socsci12080435
[25] Gupta, S., Shrivastava, V., Deshpande, A., Kalyan, A., Clark, P., Sabharwal, A., & Khot, T. (2024). Bias runs deep: Implicit reasoning biases in persona-assigned LLMs. arXiv preprint arXiv:2311.04892.
[26] Hoffman, M. L. (1990). Empathy and justice motivation. Motivation and Emotion, 14(2), 151-172.
[27] Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments. Psychological Assessment Resources.
[28] Jung, C. G. (1968). The archetypes and the collective unconscious. Routledge & Kegan Paul.
[29] Kamas, L., & Preston, A. (2021). Empathy, gender, and prosocial behavior. Journal of Behavioral and Experimental Economics, 92, Article 101654. https://doi.org/10.1016/j.socec.2020.101654
[30] Kaplan, D. M., Palitsky, R., Arconada Alvarez, S. J., Pozzo, N. S., Greenleaf, M. N., Atkinson, C. A., & Lam, W. A. (2024). What's in a name? Experimental evidence of gender bias in recommendation letters generated by ChatGPT. Journal of Medical Internet Research, 26, Article e51837. https://doi.org/10.2196/51837
[31] Klein, K. J., & Hodges, S. D. (2001). Gender differences, motivation, and empathic accuracy: When it pays to understand. Personality and Social Psychology Bulletin, 27(6), 720-730.
[32] Kong, H., Ahn, Y., Lee, S., & Maeng, Y. (2024). Gender bias in LLM-generated interview responses. arXiv preprint arXiv:2410.20739.
[33] Kotek, H., Dockum, R., & Sun, D. (2023, November). Gender bias and stereotypes in large language models. In Proceedings of the ACM Collective Intelligence Conference (CI) (pp. 12-24). New York, United States.
[34] Liu, A., Diab, M., & Fried, D. (2024). Evaluating large language model biases in persona-steered generation. arXiv preprint arXiv:2405.20253.
[35] Löffler, C. S., & Greitemeyer, T. (2023). Are women the more empathetic gender? The effects of gender role expectations. Current Psychology, 42(1), 220-231.
[36] Lu, J. G., Song, L. L., & Zhang, L. D. (2025). Cultural tendencies in generative AI. Nature Human Behaviour. Advance online publication. https://doi.org/10.1038/s41562-025-02242-1
[37] Martínez-Morato, S., Feijoo-Cid, M., Galbany-Estragués, P., Fernández-Cano, M. I., & Arreciado Marañón, A. (2021). Emotion management and stereotypes about emotions among male nurses: A qualitative study. BMC Nursing, 20(1), Article 114. https://doi.org/10.1186/s12912-021-00641-z
[38] Master, A., Meltzoff, A. N., & Cheryan, S. (2021). Gender stereotypes about interests start early and cause gender disparities in computer science and engineering. Proceedings of the National Academy of Sciences, 118(48), Article e2100030118. https://doi.org/10.1073/pnas.2100030118
[39] Murphy, M. C., & Taylor, V. J. (2012). The role of situational cues in signaling and maintaining stereotype threat. In M. Inzlicht & T. Schmader (Eds.), Stereotype threat: Theory, process, and application (pp. 17-33). Oxford University Press.
[40] National Bureau of Statistics of China. (2021). China labour statistical yearbook 2021. China Statistics Press.
[41] [国家统计局. (2021). 中国劳动统计年鉴—2021. 北京: 中国统计出版社. https://www.stats.gov.cn/zs/tjwh/tjkw/tjzl/202302/t20230215_1908005.html]
[42] Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
[43] Olsson, M. I. T., Froehlich, L., Dorrough, A. R., & Martiny, S. E. (2021). The hers and his of prosociality across 10 countries. British Journal of Social Psychology, 60(4), 1330-1349.
[44] Ostrow, R., & Lopez, A. (2025). LLMs reproduce stereotypes of sexual and gender minorities. arXiv preprint arXiv:2501.05926.
[45] Plaza-del-Arco, F. M., Curry, A. C., Curry, A., Abercrombie, G., & Hovy, D. (2024). Angry men, sad women: Large language models reflect gendered stereotypes in emotion attribution. arXiv preprint arXiv:2403.03121.
[46] Prewitt-Freilino, J. L., Caswell, T. A., & Laakso, E. K. (2012). The gendering of language: A comparison of gender equality in countries with gendered, natural gender, and genderless languages. Sex Roles, 66(3), 268-281.
[47] Rieffe, C., Ketelaar, L., & Wiefferink, C. H. (2010). Assessing empathy in young children: Construction and validation of an Empathy Questionnaire (EmQue). Personality and Individual Differences, 49(5), 362-367.
[48] Salinas, A., Shah, P., Huang, Y., McCormack, R., & Morstatter, F. (2023, October). The unequal opportunities of large language models: Examining demographic biases in job recommendations by ChatGPT and LLaMA. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO) (pp. 1-15). Boston, United States.
[49] Sheng, E., Chang, K. W., Natarajan, P., & Peng, N. (2021). Societal biases in language generation: Progress and challenges. arXiv preprint arXiv:2105.04054.
[50] Slobodin, O., Samuha, T., Hannona-Saban, A., & Katz, I. (2024). When boys and girls make their first career decisions: Exploring the role of gender and field in high school major choice. Social Psychology of Education, 27(5), 2455-2478.
[51] Smith, M. S., Greaves, L., & Mason, D. (2025). Early careers survey 2025. Prospects Luminate, Jisc. https://graduatemarkettrends.cdn.prismic.io/graduatemarkettrends/aDb6SidWJ-7kSn7u_early-careers-survey-2025.pdf
[52] Su, R., Rounds, J., & Armstrong, P. I. (2009). Men and things, women and people: A meta-analysis of sex differences in interests. Psychological Bulletin, 135(6), 859-884.
[53] Thomas, G., & Maio, G. R. (2008). Man, I feel like a woman: When and how gender-role motivation helps mind-reading. Journal of Personality and Social Psychology, 95(5), 1165-1179.
[54] Torres, N., Ulloa, C., Araya, I., Ayala, M., & Jara, S. (2024, October). Injecting bias through prompts: Analyzing the influence of language on LLMs. In 2024 43rd International Conference of the Chilean Computer Science Society (SCCC) (pp. 1-8). Temuco, Chile.
[55] United Nations Educational, Scientific and Cultural Organization & International Research Centre on Artificial Intelligence. (2024). Challenging systematic prejudices: An investigation into bias against women and girls in large language models. https://unesdoc.unesco.org/ark:/48223/pf0000388971
[56] Wan, Y., & Chang, K. W. (2024). White men lead, black women help? Benchmarking and mitigating language agency social biases in LLMs. arXiv preprint arXiv:2404.10508.
[57] Wan, Y., Pu, G., Sun, J., Garimella, A., Chang, K. W., & Peng, N. (2023). "Kelly is a warm person, Joseph is a role model": Gender biases in LLM-generated reference letters. arXiv preprint arXiv:2310.09219.
[58] Zhao, J., Ding, Y., Jia, C., Wang, Y., & Qian, Z. (2024). Gender bias in large language models across multiple languages. arXiv preprint arXiv:2403.00277.
[59] Zheng, A. (2024). Dissecting bias of ChatGPT in college major recommendations. Information Technology and Management, 26, 625-636.