ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Advances in Psychological Science (心理科学进展) ›› 2024, Vol. 32 ›› Issue (5): 790-799. doi: 10.3724/SP.J.1042.2024.00790

• Research Frontiers •


The cross-modal integration process in facial attractiveness judgments

WANG Yuling, LU Xiaowei, WU Zongjie, LI Guogen, ZHANG Lin

  1. Department and Institute of Psychology, Ningbo University, Ningbo, Zhejiang 315211, China
  • Received: 2023-09-07  Online: 2024-05-15  Published: 2024-03-05
  • Contact: ZHANG Lin, E-mail: zhanglin1@nbu.edu.cn

Abstract:

Previous research on facial attractiveness judgments has focused largely on visual information and overlooked the role of non-visual cues, although existing studies have confirmed that different sensory cues interact in facial attractiveness judgments, that is, the judgment is cross-modally integrated. Building on this work, and combining the Face-space Model with the Bayesian Causal Inference model, we propose that during cross-modal integration in facial attractiveness judgments, when an individual infers from the sensory stimuli and an existing standard face that the different sensory cues come from the same target face, those cues are naturally integrated in the brain into a unified target face, on which the attractiveness judgment is made. Future research could embed faces in richer environments to examine the cross-modal integration of multiple sensory cues, further explore the boundary conditions of this integration and its operation in social interaction, and thereby build a more systematic model of cross-modal integration in facial attractiveness.


Abstract:

Facial attractiveness research has traditionally centered on visual cues, sidelining the contribution of non-visual information. This review shows that when evaluating facial attractiveness, individuals rely not only on visual information but also on auditory and olfactory cues, and that these diverse sensory inputs jointly shape the attractiveness judgment. However, because auditory and olfactory stimuli differ in the content they carry and in how they are conveyed, the factors that influence their integration with visual facial cues may also differ. In particular, compared with audio-visual integration, visual-olfactory integration may be more susceptible to the influence of familiarity.

The review further finds that cross-modal integration in facial attractiveness judgments largely follows the mechanisms of general cross-modal integration. Sensory information and prior experience both play crucial roles in this process: sensory inputs, acting as bottom-up stimuli, capture attention and bind the diverse sensory information to the target face, whereas the standard face, formed from perceptual experience, serves as top-down prior knowledge that strengthens the connections between different sensory cues and promotes their integration. At a finer-grained level, however, the integration process in facial attractiveness judgments also has unique characteristics. Because facial attractiveness judgments are inherently social and idiosyncratic, factors such as emotion, sensory thresholds, and familiarity exert a substantial influence on this integration, whereas their impact on general cross-modal integration is limited.

In addition, this review integrates the Face-space Model with the Bayesian Causal Inference model to propose a cross-modal integration process for facial attractiveness judgments. Facial attractiveness is judged from the deviation of the target face from a standard face, and the formation of this standard face involves not only visual information but also other sensory modalities. When exposed to multiple sensory inputs, individuals naturally link the various cues via the standard face, achieving cross-modal integration: once they infer that the different sensory cues originate from the same target face, they integrate these cues in the brain into a unified percept of the target face, on which the attractiveness judgment is based.
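As an illustrative sketch only (the article itself gives no formulas), the proposed process can be written in the standard notation of the Bayesian Causal Inference model (Körding et al., 2007) for a visual cue x_V and an auditory or olfactory cue x_A about the same target face, assuming Gaussian likelihoods and a Gaussian prior. The final line adds an assumed face-space reading in which attractiveness decreases with the distance of the integrated estimate from the standard (norm) face; the symbols p_c, mu_p, sigma_p, and s_norm are introduced here for illustration and are not defined in the article.

% Minimal sketch under the assumptions stated above; not taken from the article.
% Posterior probability that both cues arise from one common cause (C = 1):
\[
P(C{=}1 \mid x_V, x_A) =
  \frac{P(x_V, x_A \mid C{=}1)\, p_c}
       {P(x_V, x_A \mid C{=}1)\, p_c + P(x_V, x_A \mid C{=}2)\,(1 - p_c)}
\]
% Reliability-weighted fusion of the cues (and the prior) under a common cause:
\[
\hat{s}_{C=1} =
  \frac{x_V/\sigma_V^2 + x_A/\sigma_A^2 + \mu_p/\sigma_p^2}
       {1/\sigma_V^2 + 1/\sigma_A^2 + 1/\sigma_p^2}
\]
% Final estimate of the target face, averaging over the two causal structures:
\[
\hat{s} = P(C{=}1 \mid x_V, x_A)\,\hat{s}_{C=1}
        + \bigl(1 - P(C{=}1 \mid x_V, x_A)\bigr)\,\hat{s}_{C=2}
\]
% Assumed face-space reading: attractiveness falls off with deviation from the norm face.
\[
A(\hat{s}) \propto -\,\| \hat{s} - s_{\mathrm{norm}} \|
\]

Here p_c is the prior probability of a common cause, sigma_V^2 and sigma_A^2 are the sensory noise variances, mu_p and sigma_p^2 are the prior mean and variance, s-hat_{C=2} is the corresponding estimate from the visual cue (and prior) alone, and s_norm is the standard face in face space. On this reading, a high posterior probability of a common cause yields a fused, multisensory face estimate, whereas a low probability leaves the attractiveness judgment dominated by the visual cue.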

Based on existing research, this review suggests three directions for future work. First, current studies typically examine the pairwise integration of visual, olfactory, and auditory cues and neglect the role of tactile cues. Future research should explore the integration of facial stimuli in broader multisensory environments, using machine learning and deep learning techniques to analyze large multisensory data sets and to build a more comprehensive model of the cross-modal integration process. Second, current research has not yet provided reliable evidence that this integration can proceed unconsciously. Stimuli presented outside awareness do not necessarily reflect real-life situations, and the extent to which the visual system can guide and allocate attention on the basis of unconsciously perceived cues and subsequent targets remains unresolved. Future research should therefore focus on how strongly the integration process depends on conscious perception, rather than on whether it can occur under unconscious conditions. Third, given the complexity and dynamism of cross-modal integration in facial attractiveness judgments, future research could use EEG to examine the different stages of the integration mechanism during social interaction. Using more ecologically valid materials and settings would also deepen our understanding of cross-modal integration in real-world social interaction, helping people to comprehend the external world more quickly and accurately and to engage socially.

In conclusion, this review synthesizes existing knowledge and outlines promising avenues for future research on facial attractiveness and multisensory integration. By unraveling the interplay of sensory modalities, it aims to deepen our understanding of how individuals perceive and evaluate facial attractiveness and to advance research on facial attractiveness and cross-modal integration.

Key words: facial attractiveness, cross-modal integration, olfaction, audition, vision
