Advances in Psychological Science ›› 2025, Vol. 33 ›› Issue (11): 1837-1853. doi: 10.3724/SP.J.1042.2025.1837 cstr: 32111.14.2025.1837
• Research Proposal •
LI Jingting, ZHAO Lin, DONG Zizhao, WANG Su-Jing
Corresponding author: WANG Su-Jing, E-mail: wangsujing@psych.ac.cn
Received:2025-03-06
Online:2025-11-15
Published:2025-09-19
Abstract:
Micro-expressions are brief facial movements that leak involuntarily when an individual tries to suppress a genuine emotion; their non-invasive nature gives them important application value in fields such as national security and public safety. To address the problems that arise in real-world settings, including limited ecological validity, motion interference, and data privacy, this research draws on physiological and behavioral-psychology mechanisms to construct a micro-expression elicitation paradigm with high ecological validity, develop a perifacial electromyography (EMG)-assisted coding system, and build a multi-scenario dynamic micro-expression database. It designs modules that remove interference from head movements and mouth-shape changes, applies a self-supervised learning framework to the small-sample recognition problem, and uses asynchronous federated learning to deploy models across scenarios while preserving privacy. By integrating psychology and computer science, the research proposes a micro-expression analysis framework that accounts for both theoretical mechanisms and practical application, providing technical support for applications across multiple domains.
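The deployment step above relies on asynchronous federated learning, in which each client pulls the current model, trains on its own data locally, and pushes an update whenever it finishes, rather than waiting for a synchronization round; only model parameters, never raw facial data, leave the client. A minimal sketch of staleness-weighted asynchronous aggregation follows; the names (`AsyncFedServer`, `staleness_weight`) and the polynomial discount are illustrative assumptions, not the aggregation rule used in this project.

```python
def staleness_weight(staleness, a=0.5):
    """Polynomial staleness discount: the older the client's base
    model, the smaller the weight its update receives (assumed form)."""
    return (1.0 + staleness) ** (-a)


class AsyncFedServer:
    """Toy asynchronous federated-averaging server.

    Each client pulls the current model with a version tag, trains
    locally, and pushes its result whenever it finishes. The server
    mixes the update in immediately, down-weighted by its staleness,
    so no client waits for a synchronization round and raw data
    never leaves the client.
    """

    def __init__(self, dim, lr=1.0):
        self.model = [0.0] * dim  # global parameters
        self.version = 0          # incremented on every accepted update
        self.lr = lr

    def pull(self):
        # Hand the client a copy of the model plus its version tag.
        return list(self.model), self.version

    def push(self, client_model, base_version):
        # Staleness = number of global updates since this client pulled.
        staleness = self.version - base_version
        alpha = self.lr * staleness_weight(staleness)
        # Convex mix of global and client parameters.
        self.model = [(1.0 - alpha) * g + alpha * c
                      for g, c in zip(self.model, client_model)]
        self.version += 1
        return self.model


# Two fresh clients, then one stale client whose update is discounted.
server = AsyncFedServer(dim=2)
m, v = server.pull()
stale_m, stale_v = server.pull()       # this client will fall behind
server.push([m[0] + 1.0, m[1]], v)     # staleness 0, full weight
m2, v2 = server.pull()
server.push([m2[0], m2[1] + 1.0], v2)  # staleness 0, full weight
server.push([5.0, 5.0], stale_v)       # staleness 2, discounted
```

With `lr` below 1.0 the server retains more of the global model on every push; the polynomial discount is one common choice, and the project may instead use a different weighting or a semi-asynchronous clustered scheme such as the one cited in its references.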
LI Jingting, ZHAO Lin, DONG Zizhao, WANG Su-Jing. (2025). Micro-expression analysis for practical applications: From data acquisition to intelligent deployment. Advances in Psychological Science, 33(11), 1837-1853.