ISSN 0439-755X
CN 11-1911/B

Archive

    The special issue on “Artificial Intelligence Psychology and Governance”
    Human advantages and psychological transformations in the era of artificial intelligence
    WU Michael Shengtao, PENG Kaiping
    2025, 57 (11):  1879-1884.  doi: 10.3724/SP.J.1041.2025.1879
    In the era of artificial intelligence (AI), the boundaries between humans and machines have become blurred, and re-understanding and developing humanity's unique advantages have become increasingly prominent and urgent. Meanwhile, with the rapid development of technology and scientific paradigms, a broad psychology encompassing the minds and behaviors of humans, animals, and machines is emerging. Researchers have recently conducted a series of studies on the psychology and governance of AI from the perspectives of the impacts of AI, new human-machine relationships, AI methods, and interdisciplinary empowerment. Future psychology researchers should focus on human society and future development, and reflect on the status of humanity and human dignity under the impact of AI, especially the unique advantages derived from human evolution as well as the expansion of human nature and identity. They should truly master and utilize AI technologies to empower the development of psychology, making research on the black box of human consciousness and complex social behavior more precise and efficient, and promoting AI-based mind computation and intervention across temporal and spatial scales as well as personalized interventions. More importantly, they must consider how psychology (with its strengths in studying human nature, social relations, and ethical values) could empower the development of AI by exploring AI cognition and comparing it with human and animal cognition, which is critical for promoting AI application and governance in a human-machine symbiotic society.
    Reports of Empirical Studies
    Human-AI cooperation makes individuals more risk seeking: The mediating role of perceived agentic responsibility
    GENG Xiaowei, LIU Chao, SU Li, HAN Bingxue, ZHANG Qiaoming, WU Mingzheng
    2025, 57 (11):  1885-1900.  doi: 10.3724/SP.J.1041.2025.1885

    With advancements in artificial intelligence (AI), AI is increasingly becoming a “helper” for humans. In human-AI cooperative risk decision-making, it is urgent to clarify whether AI encourages human risk-taking behavior and what role perceived agentic responsibility plays. To investigate the impact and mechanism of human-AI cooperation on individual risk decision-making, four experiments were conducted. The results showed that: (1) Participants in the control group (i.e., without cooperation) exhibited the highest risk-taking behavior, while those engaged in human-AI cooperation took greater risks than those in human-human cooperation. (2) Perceived agentic responsibility partially mediated the effect of human-AI cooperation on individuals’ risk decision-making. Specifically, participants reported a higher sense of agentic responsibility in human-AI cooperation than in human-human cooperation, which contributed to increased risk-taking. (3) Outcome feedback significantly moderated the mediating role of perceived agentic responsibility in the influence of human-AI cooperation (versus human-human cooperation) on individuals’ risk decision-making. Under success conditions, participants attributed greater responsibility to themselves in human-AI cooperation than in human-human cooperation, whereas under failure conditions there was no significant difference in responsibility attribution between the two types of cooperation.
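
    The partial mediation reported above can be illustrated with a percentile-bootstrap indirect-effect test. The sketch below is not the authors' analysis script; the column names (condition, responsibility, risk) and the regression-based a*b estimate are illustrative assumptions.

        # Minimal bootstrap mediation sketch (hypothetical data frame columns:
        # condition = 0 human-human / 1 human-AI, responsibility = perceived agentic
        # responsibility, risk = risk-taking score).
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        def indirect_effect(df: pd.DataFrame) -> float:
            """a*b: condition -> responsibility (a); responsibility -> risk, controlling for condition (b)."""
            a = sm.OLS(df["responsibility"], sm.add_constant(df["condition"])).fit().params["condition"]
            b = sm.OLS(df["risk"], sm.add_constant(df[["condition", "responsibility"]])).fit().params["responsibility"]
            return a * b

        def bootstrap_ci(df: pd.DataFrame, n_boot: int = 5000, seed: int = 0) -> np.ndarray:
            rng = np.random.default_rng(seed)
            estimates = [indirect_effect(df.sample(len(df), replace=True,
                                                   random_state=int(rng.integers(1_000_000_000))))
                         for _ in range(n_boot)]
            # The indirect effect is supported if the 95% CI excludes zero.
            return np.percentile(estimates, [2.5, 97.5])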

    Unity without uniformity: Humans’ social creativity strategy under generative artificial intelligence salience
    ZHOU Xiang, BAI Boren, ZHANG Jingjing, LIU Shanrou
    2025, 57 (11):  1901-1913.  doi: 10.3724/SP.J.1041.2025.1901
    When design meets AI: The impact of AI design products on consumers’ response patterns
    LI Bin, RUI Jianxi, YU Weinan, LI Aimei, YE Maolin
    2025, 57 (11):  1914-1932.  doi: 10.3724/SP.J.1041.2025.1914

    With the rapid development of artificial intelligence (AI) technology, using AI to design products and innovate is a major future trend. Based on the stereotype content model, this article explored the effects, mechanisms, and boundary conditions of design source (human vs. AI) and product type (nostalgic vs. innovative) on consumer response patterns (appreciation vs. aversion) through six progressive studies (N = 1418). The results showed that for nostalgic products, consumers preferred human design, showing AI aversion; for innovative products, consumers preferred AI design, showing AI appreciation, which produced a matching effect of “human design-nostalgic products” and “AI design-innovative products”. Further analysis revealed that processing fluency mediated this matching effect, and that warmth perception and competence perception were the key factors driving processing fluency. In addition, the AI-human collaborative design mode, AI anthropomorphic features, and consumers’ self-construal types all played moderating roles. This article not only reveals the response patterns and underlying mechanisms of consumers’ appreciation of or aversion to different types of AI-designed products but also provides references for strategic planning and marketing strategies of AI+ design in the new era of artificial intelligence.

    Impact of trusting humanoid intelligent robots on employees’ job dedication intentions: An investigation based on the classification of human−robot trust
    TANG Xiaofei, WANG Changmei, SUN Xiaodong, CHANG En-Chung
    2025, 57 (11):  1933-1950.  doi: 10.3724/SP.J.1041.2025.1933

    As humanoid intelligent robots (HIRs) become increasingly integrated into organizations to provide emotional and functional support, understanding and fostering human-robot trust has become a critical area of focus. This study explores the formation and impact of human-robot trust from the perspective of the Unique Agent Hypothesis. One qualitative interview study and three quantitative studies found that human-robot trust comprises two distinct dimensions: emotional repair trust and functional aiding trust. Among various forms of human-robot collaboration, self-repair and friendship-repair forms primarily trigger emotional repair trust, while intelligence-aiding and physical-aiding forms more effectively enhance functional aiding trust. Furthermore, employees who develop emotional repair trust (vs. functional aiding trust) in HIRs perceive greater organizational warmth, which in turn strengthens their job dedication intentions. Conversely, employees who develop functional aiding trust (vs. emotional repair trust) in HIRs perceive higher organizational competence, thereby enhancing their job dedication intentions. In addition, interaction orientation and task orientation are introduced as crucial situational moderators, and their moderating effects on the relationships between human-robot trust and perceived organizational warmth and competence are confirmed.

    Safety trust in intelligent domestic robots: Human and AI perspectives on trust and relevant influencing factors
    YOU Shanshan, QI Yue, CHEN JunTing, LUO Lei, ZHANG Kan
    2025, 57 (11):  1951-1972.  doi: 10.3724/SP.J.1041.2025.1951

    As a result of the rapid development of intelligent domestic robot technology, safety concerns have emerged as a new challenge in human-robot trust dynamics. This study explores and validates novel critical dimensions of trust that influence human and AI users’ perceptions of intelligent domestic robots, with a particular focus on safety trust. The research comprises three studies, each addressing different aspects of these dimensions.

    In Study 1, we developed a safety trust scale pertaining specifically to intelligent domestic robots. The scale was rigorously tested to confirm the stability and validity of its three-dimensional structure, comprising performance trust, relational trust, and safety trust. Its psychometric properties were evaluated through factor analysis and reliability testing, ensuring that it could accurately measure trust across different contexts and populations.
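
    As a rough illustration of the reliability testing mentioned above, the internal consistency of a subscale can be checked with Cronbach's alpha. This is a generic sketch, not the authors' analysis; the item column names are hypothetical.

        # Cronbach's alpha for one subscale (rows = respondents, columns = Likert items).
        import pandas as pd

        def cronbach_alpha(items: pd.DataFrame) -> float:
            k = items.shape[1]
            item_variances = items.var(axis=0, ddof=1).sum()   # sum of item variances
            total_variance = items.sum(axis=1).var(ddof=1)     # variance of scale totals
            return (k / (k - 1)) * (1 - item_variances / total_variance)

        # Hypothetical usage: alpha for the safety-trust items
        # alpha_safety = cronbach_alpha(df[["safety_1", "safety_2", "safety_3", "safety_4"]])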

    Study 2 explored the static characteristics of robots, such as their anthropomorphism, their height, and the visibility of their embedded cameras. Human participants exhibited higher levels of safety trust toward robots that were shorter and had less conspicuous cameras. Interestingly, the degree of anthropomorphism played a significant role in determining participants’ sensitivity to these static features.

    Study 3 expanded the investigation to the dynamic characteristics of robots, such as movement speed, interaction scenario, and camera operation (i.e., turning the camera off). Slower-moving robots were generally perceived as safer and were attributed higher levels of safety trust. Moreover, turning off a robot’s camera during interactions significantly enhanced safety trust among human users. The study also highlighted that the influence of these dynamic features varied across interaction scenarios, suggesting that situational factors play a crucial role in shaping trust perceptions.

    Furthermore, a comparative analysis between human and AI users revealed a certain degree of consistency in safety trust judgments: both were generally aligned in their trust assessments based on static and dynamic robot features. However, the AI’s sensitivity to the visibility of robot cameras was notably lower than that of humans, suggesting that AI may prioritize different factors when assessing safety trust.

    Overall, the findings of this research provide valuable insights for the design and manufacturing of intelligent domestic robots, emphasizing the importance of considering both static and dynamic features when seeking to enhance safety trust. The results also offer theoretical and practical guidance for developing trust models applicable to various intelligent home environments, ultimately contributing to the advancement of human-robot interaction.

    Perceived unsustainability decreases acceptance of artificial intelligence
    WEI Xinni, YU Feng, PENG Kaiping
    2025, 57 (11):  1973-1987.  doi: 10.3724/SP.J.1041.2025.1973

    Artificial intelligence (AI) has the potential to facilitate ecological governance and promote sustainable development. However, it also consumes energy rapidly and generates significant carbon emissions, posing challenges to both the natural environment and human survival. Despite these concerns, little research has examined the environmental costs of AI and how people respond to them. This study investigates how perceptions of AI sustainability influence individuals’ willingness to adopt AI in human-machine-environment decision-making contexts, as well as the underlying mechanisms and boundary conditions of this effect. Using a survey and AI-generated attitude descriptors from ChatGPT, a pilot study revealed generally high usage intentions and positive attitudes toward environmentally friendly AI systems. Study 1, consisting of two sub-studies, manipulated perceived AI sustainability (low vs. control) and found that participants exposed to low-sustainability AI reported lower acceptance and reduced support for national AI research. Study 2, which experimentally manipulated perceived sustainability (low vs. high), replicated these findings and identified moral judgment, rather than agency attribution, as the mediating mechanism. Study 3 explored potential boundary conditions, demonstrating that individuals’ pro-environmental attitudes moderated the observed effects. These findings provide psychological insights into the social governance of AI and offer new perspectives on the relationship between AI and sustainable development.

    Emotional capabilities evaluation of multimodal large language model in dynamic social interaction scenarios
    ZHOU Zisen, HUANG Qi, TAN Zehong, LIU Rui, CAO Ziheng, MU Fangman, FAN Yachun, QIN Shaozheng
    2025, 57 (11):  1988-2000.  doi: 10.3724/SP.J.1041.2025.1988

    Multimodal Large Language Models (MLLMs) can process and integrate multimodal data, such as images and text, providing a powerful tool for understanding human psychology and behavior. Drawing on classic experimental paradigms of emotional behavior, this study compares the emotion recognition and prediction abilities of human participants and two mainstream MLLMs in dynamic social interaction contexts, aiming to disentangle the distinct roles of the visual features of conversational characters (images) and conversational content (text) in emotion recognition and prediction.

    The results indicate that the emotion recognition and prediction performance of MLLMs, based on character images and conversational content, exhibits moderate or lower correlations with that of human participants. Despite this notable gap, MLLMs have begun to demonstrate preliminary, human-like capabilities in emotion recognition and prediction in dyadic interactions. Using human performance as a benchmark, the study further compares MLLMs under different conditions: integrating both character images and conversational content, using only character images, or relying solely on conversational content. The results suggest that the visual features of character interactions somewhat constrain MLLMs’ basic emotion recognition but effectively facilitate the recognition of complex emotions, while having no significant impact on emotion prediction.

    Additionally, by comparing the emotion recognition and prediction performance of two mainstream MLLMs and different versions of GPT-4, the study finds that, rather than merely increasing the scale of training data, innovations in the underlying technical framework play a more crucial role in enhancing MLLMs’ emotional capabilities in dynamic social interaction contexts. Overall, this study deepens the understanding of the interaction between human visual features and conversational content, fosters interdisciplinary integration between psychology and artificial intelligence, and provides valuable theoretical and practical insights for developing explainable affective computing models and general artificial intelligence.
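
    The three input conditions compared above (image plus text, image only, text only) can be approximated with a multimodal chat API. The sketch below assumes the OpenAI Python SDK and GPT-4o; the prompt wording, rating scale, and image URL are illustrative assumptions rather than the study's materials.

        # Rate a target speaker's emotion from a character image plus dialogue text.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def rate_emotion(image_url: str, dialogue: str) -> str:
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[{
                    "role": "user",
                    "content": [
                        {"type": "text",
                         "text": ("Here is a two-person conversation:\n" + dialogue +
                                  "\nUsing the image of the speakers, rate the target "
                                  "speaker's current emotion (valence and arousal, 1-9).")},
                        {"type": "image_url", "image_url": {"url": image_url}},
                    ],
                }],
            )
            return response.choices[0].message.content

        # Image-only and text-only conditions can be built by dropping one content element.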

    Self-help AI psychological counseling system based on large language models and its effectiveness evaluation
    HUANG Feng, DING Huimin, LI Sijia, HAN Nuo, DI Yazheng, LIU Xiaoqian, ZHAO Nan, LI Linyan, ZHU Tingshao
    2025, 57 (11):  2022-2042.  doi: 10.3724/SP.J.1041.2025.2022

    This study aimed to explore the technical feasibility of constructing a self-help AI psychological counseling system based on large language models without relying on real case data, and to evaluate its effectiveness in improving mental health outcomes in general populations. The research was conducted in two phases: first, we developed a self-help AI psychological counseling chatbot using zero-shot learning and chain-of-thought prompting strategies; second, we evaluated the system's practical effectiveness through a two-week randomized controlled trial with 202 participants. Experiment 1 demonstrated that the GPT-4o model, after prompt engineering optimization, showed significant improvements in Compliance, Professionalism, Emotional Understanding and Empathy, as well as Consistency and Coherence. Experiment 2 revealed that, compared to the control group, participants using the self-help AI psychological counseling chatbot experienced significant short-term improvements in depression, anxiety, and loneliness. Notably, anthropomorphized AI counselors showed significant advantages in alleviating loneliness, while non-anthropomorphized designs were more effective in reducing stress. In addition, improvements in anxiety symptoms persisted at the one-week follow-up, while improvements in other indicators did not. This study preliminarily explores the positive impact of LLM-based self-help AI psychological counseling on mental health, reveals differential effects of different AI designs on specific psychological issues, and provides valuable insights for future research and practice.
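
    The zero-shot chain-of-thought prompting strategy mentioned above can be sketched as a structured system prompt sent to a chat API. The prompt wording and reasoning steps below are illustrative assumptions, not the study's actual prompt engineering.

        # Zero-shot chain-of-thought prompting for a counseling-style chatbot (sketch).
        from openai import OpenAI

        client = OpenAI()

        SYSTEM_PROMPT = (
            "You are a supportive self-help counseling assistant. Before replying, reason "
            "step by step (silently): 1) identify the user's emotion and concern; "
            "2) reflect it back with empathy; 3) suggest one small, concrete coping step; "
            "4) check boundaries and recommend professional human help in crisis situations. "
            "Then output only the final reply."
        )

        def counsel(history: list[dict], user_message: str) -> str:
            messages = [{"role": "system", "content": SYSTEM_PROMPT},
                        *history,
                        {"role": "user", "content": user_message}]
            response = client.chat.completions.create(model="gpt-4o", messages=messages)
            return response.choices[0].message.content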

    Humans perceive warmth and competence in large language models
    WU Yueting, WANG Bo, BAO Han Wu Shuang, LI Ruonan, WU Yi, WANG Jiaqi, CHENG Cheng, YANG Li
    2025, 57 (11):  2043-2059.  doi: 10.3724/SP.J.1041.2025.2043

    With the continuous advancement of the technical capabilities of Large Language Models (LLMs) and their extensive penetration into application scenarios, the structure of social interaction is transitioning from traditional, purely interpersonal interaction to a multi-level system integrating interpersonal, human-machine, and machine-machine interaction. In this context, understanding how humans perceive and evaluate LLMs has become an important issue. This research systematically examines how humans perceive LLMs through three studies. Study 1 found that, consistent with how humans perceive other humans, humans primarily perceive LLMs along two dimensions: warmth and competence. However, in general contexts, unlike the warmth priority in person perception, humans prioritize competence when perceiving LLMs. Study 2 explored the relative priority of warmth and competence in predicting different attitudes. The results show that both warmth and competence positively predict humans’ willingness to continue using LLMs and their liking of LLMs, with competence the stronger predictor of willingness to continue using and warmth the stronger predictor of liking. Study 3 further explored the differences between human perception of LLMs and of other humans. The results show that humans’ warmth evaluations of LLMs do not differ significantly from their evaluations of other humans, but their competence evaluations of LLMs are significantly higher. This study provides a theoretical basis for understanding human perception of LLMs and offers a new perspective for the design optimization of artificial intelligence and the study of human-machine collaboration mechanisms.

    Employees adhere less to advice on moral behavior from artificial intelligence supervisors than from human supervisors
    XU Liying, ZHAO Yijun, YU Feng
    2025, 57 (11):  2060-2082.  doi: 10.3724/SP.J.1041.2025.2060