ISSN 1671-3710
CN 11-4766/R
Sponsor: Institute of Psychology, Chinese Academy of Sciences
Publisher: Science Press

Advances in Psychological Science ›› 2022, Vol. 30 ›› Issue (10): 2143-2153. doi: 10.3724/SP.J.1042.2022.02143

• Conceptual Framework •

Micro-expression spotting method based on human attention mechanism

LI Jingting1, DONG Zizhao1, LIU Ye1,2, WANG Su-Jing1,2, ZHUANG Dongzhe3

  1. CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing 100101, China
    2Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
    3Public Security Behavioral Science Laboratory, People's Public Security University of China, Beijing 100038, China
  • Received: 2022-03-24 Online: 2022-10-15 Published: 2022-08-24
  • Contact: WANG Su-Jing, ZHUANG Dongzhe E-mail: wangsujing@psych.ac.cn; zdzfrued@126.com

Abstract:

Micro-expressions are extremely brief facial movements that are difficult to perceive and are often produced under high pressure. They can reveal an individual's concealed genuine emotions and are important non-verbal communication cues, widely used in lie detection and other fields. Because micro-expression samples are difficult to elicit, collect, and label, micro-expression research is a typical small-sample-size (SSS) problem. To facilitate the application of micro-expression analysis technology in complex real-life scenarios such as national security and clinical consultation, this study focuses on the SSS problem and, at the intersection of computer science and psychology, proposes a micro-expression spotting method based on the human attention mechanism and multi-branch self-supervised learning.

First, this study explores attentional resources based on the cognitive mechanisms of micro-expression perception in psychology. A behavioral experimental paradigm that combines eye-tracking techniques with a presentation-judgment paradigm under subthreshold emotion priming is used to examine the cognitive mechanisms of selective attention allocation in micro-expression recognition and to delineate the distinct regions of interest that humans attend to when recognizing micro-expressions. This allows the model to be effectively and directly guided to acquire important micro-expression features from the input information. The deep learning network then generates the corresponding attention modules across multiple dimensions (the temporal, spatial, and channel domains) to improve the network's ability to extract micro-expression features from a limited number of samples, as sketched below.
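The following is a minimal sketch of how channel, spatial, and temporal attention modules could be attached to a video feature tensor of shape (batch, channels, time, height, width). The module structure, reduction ratio, and kernel sizes are illustrative assumptions, not the authors' published design.

```python
# Hedged sketch: channel, spatial, and temporal attention over a 5D video
# feature tensor (B, C, T, H, W). All hyperparameters are assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                          # x: (B, C, T, H, W)
        pooled = x.mean(dim=(2, 3, 4))             # global average pool -> (B, C)
        weights = torch.sigmoid(self.mlp(pooled))  # per-channel weights
        return x * weights.view(x.size(0), -1, 1, 1, 1)


class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        # 7x7 spatial kernel applied frame by frame (no temporal mixing).
        self.conv = nn.Conv3d(2, 1, kernel_size=(1, 7, 7), padding=(0, 3, 3))

    def forward(self, x):                          # x: (B, C, T, H, W)
        avg_map = x.mean(dim=1, keepdim=True)      # (B, 1, T, H, W)
        max_map = x.amax(dim=1, keepdim=True)
        weights = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * weights


class TemporalAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Linear(channels, 1)

    def forward(self, x):                          # x: (B, C, T, H, W)
        frame_desc = x.mean(dim=(3, 4)).transpose(1, 2)       # (B, T, C)
        weights = torch.softmax(self.fc(frame_desc), dim=1)   # (B, T, 1)
        return x * weights.transpose(1, 2).unsqueeze(-1).unsqueeze(-1)
```

In such a design the three modules can be applied sequentially to re-weight the feature map, so that frames, facial regions, and feature channels associated with micro-expression regions of interest receive larger weights.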

Second, this study proposes a multi-branch self-supervised learning method for micro-expression spotting based on the human attention mechanism. Training the pretext tasks on a large number of unlabeled video samples enables the model to extract features from the regions of interest of micro-expressions, including structural and detail features as well as the dynamic change patterns of the video, so that the limitation imposed by the SSS problem can be avoided. A sketch of one possible pretext-task setup follows.
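The sketch below illustrates, under assumed task choices, a multi-branch self-supervised pre-training setup on unlabeled clips: one branch verifies temporal order (dynamic change patterns), another reconstructs a masked face patch (structural and detail features). The encoder, heads, and loss weighting are hypothetical, not the authors' published design.

```python
# Hedged sketch of multi-branch self-supervised pretext tasks on unlabeled
# video clips; all architectural choices here are illustrative assumptions.
import torch
import torch.nn as nn


class MultiBranchSSL(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Shared 3D-CNN encoder over clips of shape (B, 3, T, H, W).
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.order_head = nn.Linear(feat_dim, 2)            # ordered vs. shuffled
        self.recon_head = nn.Linear(feat_dim, 3 * 32 * 32)  # masked 32x32 patch

    def forward(self, clip):
        feat = self.encoder(clip)
        return self.order_head(feat), self.recon_head(feat)


def pretext_loss(model, clip, shuffled_clip, target_patch):
    """Combine the two pretext objectives; equal weighting is an assumption."""
    # Branch 1: predict whether frames are in their original temporal order.
    order_logits, _ = model(torch.cat([clip, shuffled_clip]))
    labels = torch.cat([torch.ones(len(clip)), torch.zeros(len(shuffled_clip))]).long()
    order_loss = nn.functional.cross_entropy(order_logits, labels)

    # Branch 2: reconstruct a masked facial patch from the clip features.
    _, recon = model(clip)
    recon_loss = nn.functional.mse_loss(recon, target_patch.flatten(1))
    return order_loss + recon_loss
```

After pre-training on unlabeled video, the shared encoder can be fine-tuned on the small labeled micro-expression set for the spotting task.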

Finally, the micro-expression data released to date are video samples without the corresponding depth information. Building on the first micro-expression database with image depth information, which is being created by our research team, this study will develop a depth-based micro-expression spotting method that enables self-supervised learning to learn the corresponding action patterns from the geometric information of the scene. A sketch of one possible way to fuse the depth channel with the color frames is given below.
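The following is an illustrative sketch of early RGB-D fusion, in which the per-frame depth map is appended as a fourth input channel; the tensor shapes and fusion strategy are assumptions rather than the planned database format or the authors' method.

```python
# Hedged sketch: append depth as a fourth channel so a 3D backbone can use
# scene geometry alongside appearance. Shapes and layer sizes are assumptions.
import torch
import torch.nn as nn

rgb_clip = torch.rand(1, 3, 16, 128, 128)    # (B, 3, T, H, W) color frames
depth_clip = torch.rand(1, 1, 16, 128, 128)  # (B, 1, T, H, W) depth maps

rgbd_clip = torch.cat([rgb_clip, depth_clip], dim=1)  # (B, 4, T, H, W)

# Any 3D backbone then simply takes 4 input channels instead of 3.
stem = nn.Conv3d(4, 64, kernel_size=(3, 7, 7), padding=(1, 3, 3))
features = stem(rgbd_clip)                   # (B, 64, T, H, W)
```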

This research is expected to achieve theoretical and technological breakthroughs in automatic micro-expression spotting, improve its accuracy and reliability, and lay the foundation for applying micro-expression spotting in realistic and complex scenarios.

In addition, it can augment micro-expression samples by mining micro-expression clips from unlabeled videos, thereby alleviating the micro-expression small-sample problem and improving the performance of traditional supervised micro-expression spotting methods.

Key words: micro-expression spotting, small sample problem, human attention mechanism, self-supervised learning, depth information
