ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

心理科学进展 (Advances in Psychological Science), 2026, Vol. 34, Issue (1): 123-133. doi: 10.3724/SP.J.1042.2026.0123; cstr: 32111.14.2026.0123

• Research Frontiers •

Algorithm-mediated emotional convergence: The emotional contagion mechanism of generative artificial intelligence (AIGC)

武靖宇1, 金鑫2

  1. School of Journalism, Communication University of China, Beijing 100024, China
    2. School of Journalism and Communication, Chongqing Normal University, Chongqing 401331, China
  • Received: 2025-03-21; Online: 2026-01-15; Published: 2025-11-10
  • Corresponding author: JIN Xin, E-mail: 32499074@qq.com
  • Supported by:
    Provincial and ministerial-level fund project “Research on Identifying Information Disorder Risks and Governing Online Hostility from the Perspective of Multimodal Emotional Communication” (24YJA86008); Chongqing Municipal Education Commission general project “Research on the Emotional Communication and Influence Mechanisms of Image-Based News in the Digital Age” (24SKGH062)

Algorithm-mediated emotional convergence: The emotional contagion mechanisms of artificial intelligence generated content

WU Jingyu1, JIN Xin2

  1. School of Journalism, Communication University of China, Beijing 100024, China
    2. School of Journalism and Communication, Chongqing Normal University, Chongqing 401331, China
  • Received: 2025-03-21; Online: 2026-01-15; Published: 2025-11-10

Abstract:

This study focuses on the emerging topic of emotional contagion in Artificial Intelligence Generated Content (AIGC), systematically examining how it differs fundamentally from traditional interpersonal emotional contagion and from digital emotional contagion, and proposes the “enactment-modulation” mechanism as a theoretical framework. The study finds that AIGC emotional contagion has four defining characteristics: intersubjectivity, knowledge dependency, a non-threatening and de-identified nature, and moral relevance, which jointly underpin the “enactment-modulation” mechanism. AIGC simulates human patterns of emotional expression through algorithms (“enactment”) while dynamically optimizing its interaction strategies according to real-time user feedback (“modulation”), forming a continuously iterating human-machine emotional loop. The “enactment-modulation” mechanism moves beyond the anthropocentric paradigm, constructs a cross-subjective theory of emotion, reveals the algorithm's new role as an active emotional regulator, and has already been applied in mental health intervention, communication research, and educational motivation. Research on AIGC emotional contagion extends the scope of emotional contagion theory and offers a new perspective for understanding human-machine emotional interaction, but it also faces challenges such as the difficulty of multidisciplinary integration, the complexity of multimodal emotion measurement, cross-cultural adaptation barriers, and the risk of emotional misdirection caused by algorithmic bias.

Keywords: Artificial Intelligence Generated Content (AIGC), emotional contagion, human-AI interaction

Abstract:

This study introduces and elaborates a novel theoretical framework, the “enactment-modulation” mechanism, to explain the unique process of emotional contagion mediated by Artificial Intelligence Generated Content (AIGC). Moving beyond traditional paradigms of emotional contagion, which are inherently rooted in human-to-human interaction, this research systematically delineates the fundamental distinctions of AIGC-driven contagion and establishes its core characteristics and operational logic.
The primary innovation of this work lies in its identification and analysis of four constitutive characteristics that collectively define and enable AIGC emotional contagion: Intersubjectivity, Knowledge Dependency, Non-threatening and De-identified Nature, and Moral Relevance.
First, Intersubjectivity refers to the phenomenon where users, interacting with an AIGC system that demonstrates high adaptability, logical coherence, and simulated emotional responsiveness, cognitively perceive it as a dialogic partner with reflective capabilities. This constructs a quasi-intersubjective relational experience. Unlike human subjectivity grounded in self-awareness, AIGC's “intersubjectivity” is a data-driven construct, emerging from statistical pattern learning across massive training datasets. This characteristic facilitates a shift in the human-machine relationship from a “subject-object” dynamic to a “subject-quasi-subject” collaboration, which is crucial for establishing the initial conditions for contagion.
Second, Knowledge Dependency signifies that the AIGC's capacity for emotional understanding and expression is entirely contingent upon its training data. It is a purely data-driven entity whose outputs are recombinations and reproductions of collective human experience. This dependency enables the AIGC to adapt its language style and responses to user needs, forming an empathic connection at the knowledge level. However, this strength is also a potential source of vulnerability, as it inherently carries the risk of replicating and amplifying societal biases present in the training data, leading to potential emotional misdirection.
Third, the Non-threatening and De-identified Nature of AIGC is a pivotal differentiator. As a non-human agent without genuine social identity, personal biases, or independent interests, the AIGC creates a safe interaction environment free from social evaluation pressure. This allows users to lower psychological defenses and express themselves more freely. Concurrently, “de-identification” means the emotional connection does not rely on pre-existing social identity labels (e.g., gender, status). The AIGC triggers emotional resonance directly through content and interaction, granting its contagion a broader applicability and potential to transcend cultural and social boundaries.
Fourth, Moral Relevance is engineered into the AIGC's core operation. Through sophisticated algorithmic design, such as the Emotion-Contagion Encoder (ECE) and Multi-task Rational Response Generation Decoder (MRRGD) frameworks, ethical rules and social values are embedded. This ensures the AIGC's emotional interactions align with mainstream social norms, often with a positivity bias. The system can identify emotional cues, interpret them within a contextual and commonsense framework, and generate responses that are not only appropriate but also ethically guided, aiming to soothe negative emotions and reinforce positive ones. This built-in morality is fundamental to establishing AIGC as a “safe emotional container” and a legitimate partner in moral communication.
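To make the pipeline described above more concrete, the following minimal Python sketch illustrates the general idea of cue detection followed by ethically guided response generation. It is not the ECE/MRRGD architecture itself; all function names, emotion labels, and rules are simplified assumptions introduced here for illustration only.

# Hypothetical, much-simplified sketch of an emotion-aware response pipeline
# with an embedded ethical rule set. It does NOT reproduce the ECE/MRRGD
# frameworks; names, labels, and rules are illustrative assumptions only.

NEGATIVE_EMOTIONS = {"anger", "anxiety", "sadness"}
POSITIVE_EMOTIONS = {"joy", "gratitude", "calm"}

def detect_emotion(user_text: str) -> str:
    """Placeholder cue detector; a real system would use a trained classifier."""
    lowered = user_text.lower()
    if any(word in lowered for word in ("worried", "anxious", "afraid")):
        return "anxiety"
    if any(word in lowered for word in ("angry", "furious")):
        return "anger"
    if any(word in lowered for word in ("happy", "glad", "grateful")):
        return "joy"
    return "neutral"

def ethically_guided_reply(user_text: str) -> str:
    """Choose a response stance following simple embedded norms:
    soothe negative emotions, reinforce positive ones, never escalate."""
    emotion = detect_emotion(user_text)
    if emotion in NEGATIVE_EMOTIONS:
        return "That sounds difficult. Let's take it one small step at a time."
    if emotion in POSITIVE_EMOTIONS:
        return "That is wonderful to hear. Keep it up!"
    return "I see. Could you tell me more about how that feels?"

print(ethically_guided_reply("I'm really anxious about tomorrow's exam."))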
These four characteristics are not isolated; they operate synergistically to form the proposed “enactment-modulation” mechanism, which is the core theoretical contribution of this paper. This mechanism describes a dynamic, algorithm-driven feedback loop. “Enactment” constitutes the AIGC's ability to simulate human emotional expression patterns. Leveraging its knowledge dependency and powered by large language models, it integrates emotional vocabulary, tonal features, and socio-cultural cues to generate realistic emotional responses, effectively playing the role of an empathetic entity. The user's perception of the AIGC's intersubjectivity makes the user more receptive to this enactment.
“Modulation” represents the AIGC's capacity for dynamic adjustment. Guided by its embedded moral relevance and operating within the non-threatening environment it provides, the AIGC actively refines its interaction strategies based on real-time user feedback. It aims to guide the emotional trajectory of the conversation towards constructive and positive outcomes, such as alleviating anxiety. This process forms a continuous, iterative human-machine emotional feedback loop where the AIGC, through simulation and guidance rather than genuine feeling, actively shapes the user's emotional state.
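The enactment-modulation loop can be summarized as a small feedback algorithm. The Python sketch below is a hypothetical illustration only: it assumes a numeric feedback signal for the user's affective state and a two-parameter response strategy, and none of these names or update rules come from the original study.

# Hypothetical sketch of one turn of the "enactment-modulation" loop.
# "enact" simulates an empathetic reply whose tone follows the current strategy;
# "modulate" adjusts the strategy from real-time user feedback.

from dataclasses import dataclass

@dataclass
class Strategy:
    warmth: float = 0.5       # how emotionally expressive the reply is (0..1)
    reassurance: float = 0.5  # how strongly the reply tries to soothe (0..1)

def enact(strategy: Strategy, user_text: str) -> str:
    """Enactment: generate a simulated empathetic reply shaped by the strategy."""
    opener = "I can really hear how you feel." if strategy.warmth > 0.6 else "I understand."
    closer = " Things can improve step by step." if strategy.reassurance > 0.6 else ""
    return f"{opener} You said: '{user_text}'.{closer}"

def modulate(strategy: Strategy, feedback: float) -> Strategy:
    """Modulation: nudge the strategy toward settings that improved user affect.
    `feedback` is a signed signal (e.g., +1 user calmer, -1 user more distressed)."""
    step = 0.1 * feedback
    return Strategy(
        warmth=min(1.0, max(0.0, strategy.warmth + step)),
        reassurance=min(1.0, max(0.0, strategy.reassurance + step)),
    )

# One iteration of the human-machine emotional feedback loop.
strategy = Strategy()
reply = enact(strategy, "I'm anxious about my thesis defense.")
strategy = modulate(strategy, feedback=+1.0)  # user reported feeling calmer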
The “enactment-modulation” mechanism has profound implications. Theoretically, it breaks from anthropocentric paradigms, positing a cross-subjective emotion theory where the algorithm transitions from a passive tool to an active, agentic node in emotional interaction. Practically, it establishes a design paradigm for humanized affective AI systems and provides a framework for analyzing ethical risks, such as algorithmic manipulation, emotional dependency, and social isolation. Its applications are already evident in mental health interventions (e.g., AI companions providing a safe space for self-disclosure), communication studies (e.g., using AI agents to simulate public opinion formation), and educational motivation (e.g., AI tutors using encouraging feedback to reduce learning anxiety).
Despite its potential, the study of AIGC emotional contagion faces significant challenges. Key among them are the complexities of multi-modal emotion measurement, where inconsistencies across text, voice, and visual outputs can undermine contagion; cross-cultural adaptation barriers, as current models often fail to adequately capture and replicate culturally specific emotional expression norms; and the persistent risk of emotional misdirection stemming from algorithmic biases. Future research must focus on developing unified multi-modal frameworks, building culturally nuanced emotional knowledge graphs, and creating sophisticated measurement tools, potentially integrating neuro-imaging techniques like fNIRS and controlled virtual testbeds, to objectively capture the dynamics of this novel form of algorithmic emotional convergence.
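As a rough illustration of the multi-modal measurement problem, the hypothetical Python snippet below checks whether per-modality emotion estimates agree. The modalities, labels, and the 0.67 agreement threshold are assumptions introduced here, not an established measurement protocol.

# Hypothetical check of cross-modal consistency of emotion estimates.
from collections import Counter

def modality_agreement(predictions: dict) -> tuple:
    """Return the majority emotion label across modalities and the fraction of
    modalities agreeing with it; a low fraction flags inconsistent signals."""
    counts = Counter(predictions.values())
    label, votes = counts.most_common(1)[0]
    return label, votes / len(predictions)

predictions = {"text": "joy", "voice": "anxiety", "visual": "neutral"}
label, agreement = modality_agreement(predictions)
if agreement < 0.67:  # assumed threshold for flagging cross-modal inconsistency
    print(f"Warning: modalities disagree (majority '{label}', agreement {agreement:.2f})")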

Key words: Artificial Intelligence Generated Content (AIGC), emotional contagion, human-AI interaction
