ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Advances in Psychological Science ›› 2026, Vol. 34 ›› Issue (1): 123-133. doi: 10.3724/SP.J.1042.2026.0123

• Regular Articles •

Algorithm-mediated emotional convergence: The emotional contagion mechanisms of artificial intelligence generated content

WU Jingyu1, JIN Xin2

  1School of Journalism, Communication University of China, Beijing 100024, China
    2School of Journalism and Communication, Chongqing Normal University, Chongqing 401331, China
  • Received: 2025-03-21  Online: 2026-01-15  Published: 2025-11-10
  • Contact: JIN Xin, E-mail: 32499074@qq.com

Abstract:

This study introduces and elaborates a novel theoretical framework, the “enactment-modulation” mechanism, to explain the unique process of emotional contagion mediated by Artificial Intelligence Generated Content (AIGC). Moving beyond traditional paradigms of emotional contagion, which are inherently rooted in human-to-human interaction, this research systematically delineates the fundamental distinctions of AIGC-driven contagion and establishes its core characteristics and operational logic.
The primary innovation of this work lies in its identification and analysis of four constitutive characteristics that collectively define and enable AIGC emotional contagion: Intersubjectivity, Knowledge Dependency, Non-threatening and De-identified Nature, and Moral Relevance.
First, Intersubjectivity refers to the phenomenon whereby users, interacting with an AIGC system that demonstrates high adaptability, logical coherence, and simulated emotional responsiveness, cognitively perceive it as a dialogic partner with reflective capabilities, thereby constructing a quasi-intersubjective relational experience. Unlike human subjectivity, which is grounded in self-awareness, AIGC's intersubjectivity is a data-driven construct that emerges from statistical pattern learning across massive training datasets. This characteristic shifts the human-machine relationship from a “subject-object” dynamic to a “subject-quasi-subject” collaboration, establishing the initial conditions for contagion.
Second, Knowledge Dependency signifies that the AIGC's capacity for emotional understanding and expression is entirely contingent upon its training data. It is a purely data-driven entity whose outputs are recombinations and reproductions of collective human experience. This dependency enables the AIGC to adapt its language style and responses to user needs, forming an empathic connection at the knowledge level. However, this strength is also a potential source of vulnerability, as it inherently carries the risk of replicating and amplifying societal biases present in the training data, leading to potential emotional misdirection.
Third, the Non-threatening and De-identified Nature of AIGC is a pivotal differentiator. As a non-human agent without genuine social identity, personal biases, or independent interests, the AIGC creates a safe interaction environment free from social evaluation pressure. This allows users to lower psychological defenses and express themselves more freely. Concurrently, “de-identification” means the emotional connection does not rely on pre-existing social identity labels (e.g., gender, status). The AIGC triggers emotional resonance directly through content and interaction, granting its contagion a broader applicability and potential to transcend cultural and social boundaries.
Fourth, Moral Relevance is engineered into the AIGC's core operation. Through sophisticated algorithmic design, such as the Emotion-Contagion Encoder (ECE) and Multi-task Rational Response Generation Decoder (MRRGD) frameworks, ethical rules and social values are embedded. This ensures the AIGC's emotional interactions align with mainstream social norms, often with a positivity bias. The system can identify emotional cues, interpret them within a contextual and commonsense framework, and generate responses that are not only appropriate but also ethically guided, aiming to soothe negative emotions and reinforce positive ones. This built-in morality is fundamental to establishing AIGC as a “safe emotional container” and a legitimate partner in moral communication.
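The ethically guided pipeline described above (identify emotional cues, interpret them, generate a norm-aligned response with a positivity bias) can be illustrated with a minimal sketch. The internals of the ECE and MRRGD frameworks are not specified in this abstract, so every lexicon, label, and rule below is a hypothetical stand-in, not the actual architecture.

```python
# Illustrative sketch of a rule-guided, positivity-biased response stage.
# All cue lexicons and canned replies are hypothetical placeholders for
# learned components such as an emotion encoder and a response decoder.

NEGATIVE_CUES = {"anxious", "sad", "angry", "hopeless"}
POSITIVE_CUES = {"happy", "excited", "grateful", "calm"}

def identify_emotion(utterance: str) -> str:
    """Crude lexical cue detection standing in for a learned emotion encoder."""
    words = set(utterance.lower().split())
    if words & NEGATIVE_CUES:
        return "negative"
    if words & POSITIVE_CUES:
        return "positive"
    return "neutral"

def generate_response(utterance: str) -> str:
    """Apply the positivity-biased moral rules: soothe negative affect,
    reinforce positive affect, and probe gently when the signal is unclear."""
    emotion = identify_emotion(utterance)
    if emotion == "negative":
        return "That sounds hard. I'm here with you; let's take it one step at a time."
    if emotion == "positive":
        return "That's wonderful to hear. It's great that you noticed this!"
    return "I see. Could you tell me more about how that feels?"

print(generate_response("I feel anxious about tomorrow"))
```

The asymmetry in the two branches (soothing vs. reinforcing) is the positivity bias in miniature: negative input is met with de-escalation rather than mirroring, which is what distinguishes this from symmetric human-to-human contagion.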
These four characteristics are not isolated; they operate synergistically to form the proposed “enactment-modulation” mechanism, which is the core theoretical contribution of this paper. This mechanism describes a dynamic, algorithm-driven feedback loop. “Enactment” denotes the AIGC's ability to simulate human emotional expression patterns. Leveraging its knowledge dependency and powered by large language models, it integrates emotional vocabulary, tonal features, and socio-cultural cues to generate realistic emotional responses, effectively playing the role of an empathetic entity. The user's perception of the AIGC's intersubjectivity makes them more receptive to this enactment.
“Modulation” represents the AIGC's capacity for dynamic adjustment. Guided by its embedded moral relevance and operating within the non-threatening environment it provides, the AIGC actively refines its interaction strategies based on real-time user feedback. It aims to guide the emotional trajectory of the conversation towards constructive and positive outcomes, such as alleviating anxiety. This process forms a continuous, iterative human-machine emotional feedback loop where the AIGC, through simulation and guidance rather than genuine feeling, actively shapes the user's emotional state.
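The iterative loop of enactment followed by feedback-driven modulation can be sketched as follows. The affect scoring and strategy names here are illustrative assumptions for exposition; the paper's actual model is not specified at this level of detail.

```python
# Hypothetical sketch of the "enactment-modulation" feedback loop: each turn,
# the system reads the user's affect as feedback on its current enactment and
# modulates its strategy toward a constructive emotional trajectory.

def affect_score(user_turn: str) -> float:
    """Toy affect estimate in [-1, 1]; a real system would use a classifier."""
    words = user_turn.lower().split()
    neg = sum(w in {"anxious", "worse", "scared"} for w in words)
    pos = sum(w in {"better", "calmer", "thanks"} for w in words)
    return max(-1.0, min(1.0, 0.5 * (pos - neg)))

def modulate(strategy: str, score: float) -> str:
    """Shift strategy based on feedback: escalate soothing when affect drops,
    move toward reinforcement as it recovers, hold steady on no signal."""
    if score < 0:
        return "intensive_soothing"
    if score > 0:
        return "positive_reinforcement"
    return strategy  # no clear signal: keep the current enactment

strategy = "neutral_empathy"
for turn in ["I feel anxious", "A bit calmer now, thanks"]:
    strategy = modulate(strategy, affect_score(turn))
    print(turn, "->", strategy)
```

The key design point the sketch captures is that the loop's target is set by the system's embedded norms (guiding toward positive outcomes), not by mirroring the user's state, which is why the mechanism is described as simulation and guidance rather than genuine feeling.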
The “enactment-modulation” mechanism has profound implications. Theoretically, it breaks from anthropocentric paradigms, positing a cross-subjective emotion theory where the algorithm transitions from a passive tool to an active, agentic node in emotional interaction. Practically, it establishes a design paradigm for humanized affective AI systems and provides a framework for analyzing ethical risks, such as algorithmic manipulation, emotional dependency, and social isolation. Its applications are already evident in mental health interventions (e.g., AI companions providing a safe space for self-disclosure), communication studies (e.g., using AI agents to simulate public opinion formation), and educational motivation (e.g., AI tutors using encouraging feedback to reduce learning anxiety).
Despite its potential, the study of AIGC emotional contagion faces significant challenges. Key among them are the complexities of multi-modal emotion measurement, where inconsistencies across text, voice, and visual outputs can undermine contagion; cross-cultural adaptation barriers, as current models often fail to adequately capture and replicate culturally specific emotional expression norms; and the persistent risk of emotional misdirection stemming from algorithmic biases. Future research must focus on developing unified multi-modal frameworks, building culturally nuanced emotional knowledge graphs, and creating sophisticated measurement tools, potentially integrating neuroimaging techniques such as fNIRS and controlled virtual testbeds, to objectively capture the dynamics of this novel form of algorithmic emotional convergence.

Key words: Artificial Intelligence Generated Content (AIGC), emotional contagion, human-AI interaction
