
Acta Psychologica Sinica, 2025, Vol. 57, Issue 11: 2043-2059. doi: 10.3724/SP.J.1041.2025.2043

• Reports of Empirical Studies •

Humans perceive warmth and competence in large language models

WU Yueting1, WANG Bo2,3, BAO Han Wu Shuang4, LI Ruonan1, WU Yi1, WANG Jiaqi1, CHENG Cheng3, YANG Li3,5

  1 School of Education, Tianjin University, Tianjin 300354, China
    2 College of Intelligence and Computing, Tianjin University, Tianjin 300354, China
    3 Institute of Applied Psychology, Tianjin University, Tianjin 300354, China
    4 School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
    5 Laboratory of Suicidal Behavior Research, Tianjin University, Tianjin 300354, China
  • Published: 2025-11-25 Online: 2025-09-25
  • Contact: WANG Bo, E-mail: bo_wang@tju.edu.cn; YANG Li, E-mail: yangli@tju.edu.cn

Abstract:

With the continuing advancement of the technical capabilities of Large Language Models (LLMs) and their widespread adoption across application scenarios, the structure of social interaction is shifting from traditional, purely interpersonal exchange to a multi-level system that integrates interpersonal, human-machine, and machine-machine interaction. In this context, understanding how humans perceive and evaluate LLMs has become an important question. This research systematically examined how humans perceive LLMs through three studies. Study 1 found that, consistent with how humans perceive other humans, people perceive LLMs primarily along two dimensions: warmth and competence. In general contexts, however, unlike the warmth primacy observed in person perception, people prioritize competence when perceiving LLMs. Study 2 examined the relative priority of warmth and competence in predicting different attitudes. The results show that both warmth and competence positively predict people's willingness to continue using LLMs and their liking of LLMs, with competence the stronger predictor of continued-use intention and warmth the stronger predictor of liking. Study 3 further examined differences between how humans perceive LLMs and how they perceive other people. The results show that warmth evaluations of LLMs do not differ significantly from those of humans, whereas competence evaluations of LLMs are significantly higher than those of humans. This research provides a theoretical basis for understanding human perception of LLMs and offers a new perspective for the design optimization of artificial intelligence and the study of human-machine collaboration mechanisms.

Key words: large language model, social cognition, warmth, competence