[1] Aher G., Arriaga R. I., & Kalai A. T. (2023). Using large language models to simulate multiple humans and replicate human subject studies.Proceedings of the 40th International Conference on Machine Learning, 1-35. [2] Batool A., Zowghi D., & Bano M. (2025). AI governance: A systematic literature review. AI and Ethics, 5, 3265-3279. [3] Dang, J., & Liu, L. (2024). Extended artificial intelligence aversion: People deny humanness to artificial intelligence users. Journal of Personality and Social Psychology. Advance online publication. https://doi.org/10.1037/pspi0000480 [4] Deng L. F., Pei B., & Gao T. A. (2025). The factors affecting subthreshold depression for people with occupational stress in the era of digital intelligence: Machine learning-based evidence. Acta Psychologica Sinica, 57(11), 2001-2021. [邓丽芳, 裴蓓, 高天艾. (2025). 数智时代工作紧张人群阈下抑郁的影响因素:基于机器学习的证据.心理学报, 57(11), 2001-2021.] [5] Dillion D., Tandon N., Gu Y., & Gray K. (2023). Can AI language models replace human participants? Trends in Cognitive Sciences, 27(7), 597-600. [6] Farrell, H. (2025). AI as Governance.Annual Review of Political Science, 28(1), 375-392. [7] Farrell H., Gopnik A., Shalizi C., & Evans J. (2025). Large AI models are cultural and social technologies.Science, 387(6739), 1153-1156. [8] Geng X. W., Liu C., Su L., Han B. X., Zhang Q. M., & Wu M. Z. (2025). Human-AI cooperation makes individuals more risk seeking: The mediating role of perceived agentic responsibility.Acta Psychologica Sinica, 57(11), 1885-1900. [耿晓伟, 刘超, 苏黎, 韩冰雪, 张巧明, 吴明证. (2025). 人机合作使人更冒险:主体责任感的中介作用.心理学报, 57(11), 1885-1900.] [9] Gigerenzer, G. (2024). Psychological AI: Designing algorithms informed by human psychology.Perspectives on Psychological Science, 19(5), 839-848. [10] Gray K., Yam K. C., Zhen’An, A. E., Dillion, D., & Waytz, A. (2025). The psychology of robots and artificial intelligence. In D. T. Gilbert, S. T. Fiske, E. J. Finkel, & W. B. Mendes (Eds.), The handbook of social psychology (6th ed.). Situational Press, Cambridge, MA. [11] Hothersall D.,& Lovett, B. J. (2022). History of psychology. Cambridge University Press. [12] Huang F., Ding H. M., Li S. J., Han N., Di Y. Z., Liu X. Q., Zhao N., Li L. Y., & Zhu T. S. (2025). Self-help AI psychological counseling system based on large language models and its effectiveness evaluation.Acta Psychologica Sinica, 57(11), 2022-2042. [黄峰, 丁慧敏, 李思嘉, 韩诺, 狄雅政, 刘晓倩, 赵楠, 李林姸, 朱廷劭. (2025). 基于大语言模型的自助式AI心理咨询系统及其效果评估.心理学报, 57(11), 2022-2042.] [13] Huang L., Zhang W., Chen Z., Li C., & Li X. (2025). The historical origins of large language models and psychology. Journal of Psychological Science, 48(4), 773-781. [黄林洁琼, 张雯, 陈珍, 李晨曦, 李兴珊. (2025). 大语言模型与心理学的历史渊源. 心理科学, 48(4), 773-781.] [14] Jin, S., & Liu, C. (2025). From human mind to artificial intelligence: Advancing AI value alignment through psychological theories. Journal of Psychological Science, 48(4), 782-791. [晋少雄, 刘超. (2025). 由“心”及“智”:心理学研究促进AI价值观对齐的路径探讨.心理科学, 48(4), 782-791.] [15] Kieslich K., Lünich M., & Marcinkowski F. (2021). The threats of artificial intelligence scale (TAI) development, measurement and test over three application domains.International Journal of Social Robotics, 13, 1563-1577. [16] Kissinger H. A., Schmidt E., & Mundie C. (2024). Genesis: Artificial Intelligence, hope, and the human spirit. Little, Brown. [17] Ke L., Tong S., Cheng P., & Peng K. (2025). Exploring the frontiers of LLMs in psychological applications: A comprehensive review. Artificial Intelligence Review, 58, 305-340. https://doi.org/10.1007/s10462-025-11297-5 [18] Kosinski M., Matz S. C., Gosling S. D., Popov V., & Stillwell D. (2015). Facebook as a research tool for the social sciences: Opportunities, challenges, ethical considerations, and practical guidelines.American psychologist, 70(6), 543-556. [19] Lewis, P. R., & Marsh, S. (2022). What is it like to trust a rock? A functionalist perspective on trust and trustworthiness in artificial intelligence.Cognitive Systems Research, 72, 33-49. [20] Li B., Rui J. X., Yu W. N., Li A. M., & Ye M. L. (2025). When design meets AI: The impact of AI design products on consumers’ response patterns.Acta Psychologica Sinica, 57(11), 1914-1932. [李斌, 芮建禧, 俞炜楠, 李爱梅, 叶茂林. (2025). 当设计遇见AI:人工智能设计产品对消费者响应模式的影响.心理学报, 57(11), 1914-1932.] [21] Li, C., & Qi, Y. (2025). Toward accurate psychological simulations: Investigating LLMs’ responses to personality and cultural variables.Computers in Human Behavior, 170, 108687. [22] Li, H. (2020). From technology as agent to technology as substitute: Human obsolescence? Social Sciences in China, (10), 116-140+207. [李河. (2020). 从“代理”到“替代”的技术与正在“过时”的人类? 中国社会科学, (10), 116-140+207.] [23] Lin, Z. (2025). We need to rein in AI’s gatekeeping of science. Nature, 645, 285. [24] Lin, Z. (2025). Six fallacies in substituting large language models for human participants.Advances in Methods and Practices in Psychological Science, 8(3), 1-19. [25] Lindgren, S. (2024). Critical theory of AI. Cambridge: Polity Press. [26] Lu J. G., Song L. L., & Zhang L. D. (2025). Cultural tendencies in generative AI. Nature Human Behaviour, https://doi.org/10.1038/s41562-025-02242-1 [27] Mei Q., Xie Y., Yuan W., & Jackson M. O. (2024). A Turing test of whether AI chatbots are behaviorally similar to humans.Proceedings of the National Academy of Sciences, 121(9), e2313925121. [28] Meng, J. (2024). AI emerges as the frontier in behavioral science.Proceedings of the National Academy of Sciences, 121(10), e2401336121. [29] Mitchell, M. (2024). The metaphors of artificial intelligence. Science, 386(6723), eadt6140. [30] Peng K.,& Yan, W. (2021). Children's character: Positive psychology for parents. Beijing: Citic Press. [彭凯平, 闫伟. (2021). 孩子的品格:写给父母的积极心理学. 北京:中信出版社.] [31] Pink, D. H. (2006). A whole new mind: Why right-brainers will rule the future. Penguin. [32] Shanahan M., McDonell K., & Reynolds L. (2023). Role play with large language models.Nature, 623(7987), 493-498. [33] Song, X., & Lin, Z. (2025). Beyond the existence-utility binary: How AI reveals our hybrid self. AI & Society. [34] Sun, W. (2020). Artificial inteligence and the “new alienation" of human beings. Social Sciences in China, 12, 119-137. [孙伟平. (2020). 人工智能与人的 “新异化”. 中国社会科学, 12, 119-137.] [35] Tang X. F., Wang C. M., Sun X. D., & Zhang E. Z. (2025). Impact of trusting humanoid intelligent robots on employees’ job dedication intentions: An investigation based on the classification of human-robot trust.Acta Psychologica Sinica, 57(11), 1933-1950. [唐小飞, 王昌梅, 孙晓东, 张恩忠. (2025). 类人智能机器人信任对员工工作贡献意愿的影响:基于人机关系信任的归维考察.心理学报, 57(11), 1933-1950.] [36] Tong, S, Chen, H, Ke, L, Ye, J., & Peng K. (2025). Psychoinformatics: Advances and perspectives in the computational cognition era.Journal of Psychological Science, 48(4), 792-803. [童松, 陈浩, 柯罗马, 叶俊楷, 彭凯平. (2025). 心理信息学:计算认知时代的研究进展. 心理科学, 48(4), 792- 803.] [37] Veale M., Matus K., & Gorwa R. (2023). AI and global governance: Modalities, rationales, tensions.Annual Review of Law and Social Science, 19(1), 255-275. [38] Voudouris K., Cheke L., & Schulz E. (2025). Bringing comparative cognition approaches to AI systems. Nature Reviews Psychology, 4, 363-364. [39] Wang, J. (2023). Self-awareness, a singularity of AI.Philosophy Study, 13(2), 68-77. [40] Wang Y., Zhao J., Ones D. S., He L., & Xu X. (2025). Evaluating the ability of large language models to emulate personality.Scientific Reports, 15(1), 519. [41] Wei X. N., Yu F., & Peng K. P. (2025). Unsustainability decreases acceptance of environmental artificial intelligence.Acta Psychologica Sinica, 57(11), 1973-1987. [魏心妮, 喻丰, 彭凯平. (2025). 低可持续性降低人工智能的接受意愿.心理学报, 57(11), 1973-1987.] [42] Wu Y. T., Wang B., Bao H. W. S., Li R. N., Wu Y., Wang J. Q., Cheng C., & Yang L. (2025). Humans perceive warmth and competence in large language models.Acta Psychologica Sinica, 57(11), 2043-2059. [武月婷, 王博, 包寒吴霜, 李若男, 吴怡, 王嘉琪, 程诚, 杨丽. (2025). 人类对大语言模型的热情和能力感知.心理学报, 57(11), 2043-2059.] [43] Xu L. Y., Zhao Y. J., & Yu F. (2025). Employees adhere less to advice on moral behavior from artificial intelligence supervisors than human.Acta Psychologica Sinica, 57(11), 2060-2082. [许丽颖, 赵一骏, 喻丰. (2025). 人工智能主管提出的道德行为建议更少被遵从.心理学报, 57(11), 2060-2082.] [44] Xu R., Sun Y., Ren M., Guo S., Pan R., Lin H., Sun L., & Han X. (2024). AI for social science and social science of AI: A survey.Information Processing and Management, 61(3), 103665. [45] Xu W., Gao Z. F., & Ge L. Z. (2024). New research paradigms and agenda of human factors science in the intelligence era.Acta Psychologica Sinica, 56(3), 363-382. [许为, 高在峰, 葛列众. (2024). 智能时代人因科学研究的新范式取向及重点.心理学报, 56(3), 363-382.] [46] You S. S., Qi Y., Chen J. T., Luo L., & Zhang K. (2025). Safety trust in intelligent domestic robots: Human and AI perspectives on trust and relevant influencing factors.Acta Psychologica Sinica, 57(11), 1951-1972 [由姗姗, 齐玥, 陈俊廷, 骆磊, 张侃. (2025). 人与AI对智能家居机器人的安全信任及其影响因素.心理学报, 57(11), 1951-1972.] [47] Zhang G., Chong L., Kotovsky K., & Cagan J. (2023). Trust in an AI versus a human teammate: The effects of teammate identity and performance on Human-AI cooperation.Computers in Human Behavior, 139, 107536. [48] Zhou X., Bai B. R., Zhang J. J., & Liu S. R. (2025). Unity without uniformity: Humans’ social creativity strategy under generative artificial intelligence salience.Acta Psychologica Sinica, 57(11), 1901-1913. [周详, 白博仁, 张婧婧, 刘善柔. (2025). 和而不同:生成式人工智能凸显下人类的社会创造策略.心理学报, 57(11), 1901-1913.] [49] Zhou Z. S., Huang Q., Tan Z. H., Liu R., Cao Z. H., Mu F. M., Fan Y. C., & Qin S. Z. (2025). Emotional capabilities evaluation of multimodal large language model in dynamic social interaction scenarios.Acta Psychologica Sinica, 57(11), 1988-2000. [周子森, 黄琪, 谭泽宏, 刘睿, 曹子亨, 母芳蔓, 樊亚春, 秦绍正. (2025). 多模态大语言模型动态社会互动情景下的情感能力测评.心理学报, 57(11), 1988-2000.] |