[1] Baldauf, D., & Desimone, R. (2014). Neural mechanisms of object-based attention. Science, 344(6182), 424-427. https://doi.org/10.1126/science.1247003

[2] Ben, X., Ren, Y., Zhang, J., Wang, S.-J., Kpalma, K., Meng, W., & Liu, Y.-J. (2021). Video-based facial micro-expression analysis: A survey of datasets, features and algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence. https://doi.org/10.1109/TPAMI.2021.3067464

[3] Cai, J., Xie, H., Li, J., & Li, S. (2020). Facial expression recognition with an attention network using a single depth image. In H. Yang, K. Pasupa, A. C.-S. Leung, J. T. Kwok, J. H. Chan, & I. King (Eds.), Neural Information Processing (pp. 222-231). Springer International Publishing.

[4] Danelakis, A., Theoharis, T., Pratikakis, I., & Perakis, P. (2016). An effective methodology for dynamic 3D facial expression retrieval. Pattern Recognition, 52, 174-185. https://doi.org/10.1016/j.patcog.2015.10.012

[5] Davison, A., Merghani, W., Lansley, C., Ng, C. C., & Yap, M. H. (2018, May). Objective micro-facial movement detection using FACS-based regions and baseline evaluation. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018) (pp. 642-649). IEEE.

[6] Davison, A. K., Lansley, C., Costen, N., Tan, K., & Yap, M. H. (2018). SAMM: A spontaneous micro-facial movement dataset. IEEE Transactions on Affective Computing, 9(1), 116-129. https://doi.org/10.1109/TAFFC.2016.2573832

[7] Ding, J., Tian, Z., Lyu, X., Wang, Q., Zou, B., & Xie, H. (2019, September). Real-time micro-expression detection in unlabeled long videos using optical flow and LSTM neural network. In International Conference on Computer Analysis of Images and Patterns (pp. 622-634). Springer-Verlag.

[8] Doersch, C., Gupta, A., & Efros, A. A. (2015, December). Unsupervised visual representation learning by context prediction. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV) (pp. 1422-1430). IEEE.

[9] Eisenbarth, H., & Alpers, G. W. (2011). Happy mouth and sad eyes: Scanning emotional facial expressions. Emotion, 11(4), 860-865. https://doi.org/10.1037/a0022758

[10] Ekman, P. (2003). Emotions revealed. St. Martin’s Griffin, New York.

[11] Ekman, P., & Friesen, W. V. (1969). Nonverbal leakage and clues to deception. Psychiatry, 32(1), 88-106. https://doi.org/10.1080/00332747.1969.11023575

[12] Fernando, B., Bilen, H., Gavves, E., & Gould, S. (2017, July). Self-supervised video representation learning with odd-one-out networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 3636-3645). IEEE.

[13] Fu, J., Liu, J., Tian, H., Li, Y., Bao, Y., Fang, Z., & Lu, H. (2019, June). Dual attention network for scene segmentation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 3146-3154). IEEE.

[14] Gidaris, S., Singh, P., & Komodakis, N. (2018). Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728.

[15] Haggard, E. A., & Isaacs, K. S. (1966). Micromomentary facial expressions as indicators of ego mechanisms in psychotherapy. In Methods of research in psychotherapy (pp. 154-165). Springer, Boston, MA.

[16] Jing, L., & Tian, Y. (2020). Self-supervised visual feature learning with deep neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11), 4037-4058. https://doi.org/10.1109/TPAMI.2020.2992393

[17] Larsson, G., Maire, M., & Shakhnarovich, G. (2017). Colorization as a proxy task for visual understanding. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 6874-6883). IEEE.

[18] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444. https://doi.org/10.1038/nature14539

[19] Li, H., Xiong, P., An, J., & Wang, L. (2018). Pyramid attention network for semantic segmentation. arXiv preprint arXiv:1805.10180.

[20] Li, J., Soladie, C., & Seguier, R. (2020). Local temporal pattern and data augmentation for micro-expression spotting. IEEE Transactions on Affective Computing. Advance online publication. https://doi.org/10.1109/TAFFC.2020.3023821

[21] Li, J., Soladie, C., Seguier, R., Wang, S. J., & Yap, M. H. (2019, May). Spotting micro-expressions on long videos sequences. In 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019) (pp. 1-5). IEEE.

[22] Li, X., Hong, X., Moilanen, A., Huang, X., Pfister, T., Zhao, G., & Pietikäinen, M. (2017). Towards reading hidden emotions: A comparative study of spontaneous micro-expression spotting and recognition methods. IEEE Transactions on Affective Computing, 9(4), 563-577. https://doi.org/10.1109/TAFFC.2017.2667642

[23] Li, X., Liu, S., de Mello, S., Wang, X., Kautz, J., & Yang, M.-H. (2019). Joint-task self-supervised learning for temporal correspondence. Advances in Neural Information Processing Systems, 32.

[24] Li, Y., Huang, X., & Zhao, G. (2021). Micro-expression action unit detection with spatial and channel attention. Neurocomputing, 436, 221-231. https://doi.org/10.1016/j.neucom.2021.01.032

[25] Liong, S.-T., See, J., Wong, K., Le Ngo, A. C., Oh, Y. H., & Phan, R. (2015, November). Automatic apex frame spotting in micro-expression database. In 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR) (pp. 665-669). IEEE.

[26] Luo, C., Zhang, J., Yu, J., Chen, C. W., & Wang, S. (2019). Real-time head pose estimation and face modeling from a depth image. IEEE Transactions on Multimedia, 21(10), 2473-2481. https://doi.org/10.1109/TMM.2019.2903724

[27] Ma, J., Zhang, H., & She, W. (2017, June). Research on robust face recognition based on depth image sets. In 2017 2nd International Conference on Image, Vision and Computing (ICIVC) (pp. 223-227). IEEE.

[28] Mnih, V., Heess, N., & Graves, A. (2014). Recurrent models of visual attention. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, & K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems (Vol. 27). Curran Associates, Inc.

[29] Moilanen, A., Zhao, G., & Pietikäinen, M. (2014, August). Spotting rapid facial movements from videos using appearance-based feature difference analysis. In 2014 22nd International Conference on Pattern Recognition (pp. 1722-1727). IEEE.

[30] Owayjan, M., Kashour, A., Al Haddad, N., Fadel, M., & Al Souki, G. (2012, December). The design and development of a lie detection system using facial micro-expressions. In 2012 2nd International Conference on Advances in Computational Tools for Engineering Applications (ACTEA) (pp. 33-38). IEEE.

[31] Pan, H., Xie, L., & Wang, Z. (2020, November). Local bilinear convolutional neural network for spotting macro- and micro-expression intervals in long video sequences. In 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020) (pp. 749-753). IEEE.

[32] Pathak, D., Krähenbühl, P., Donahue, J., Darrell, T., & Efros, A. A. (2016, June). Context encoders: Feature learning by inpainting. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 2536-2544). IEEE.

[33] Qu, F., Wang, S.-J., Yan, W.-J., Li, H., Wu, S., & Fu, X. (2018). CAS(ME)²: A database for spontaneous macro-expression and micro-expression spotting and recognition. IEEE Transactions on Affective Computing, 9(4), 424-436. https://doi.org/10.1109/TAFFC.2017.2654440

[34] See, J., Yap, M. H., Li, J., Hong, X., & Wang, S. J. (2019, May). MEGC 2019 - The second facial micro-expressions grand challenge. In 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019) (pp. 1-5). IEEE.

[35] Tran, T.-K., Hong, X., & Zhao, G. (2017). Sliding window based micro-expression spotting: A benchmark. In J. Blanc-Talon, R. Penne, W. Philips, D. Popescu, & P. Scheunders (Eds.), Advanced Concepts for Intelligent Vision Systems (pp. 542-553). Springer International Publishing.

[36] Verburg, M., & Menkovski, V. (2019, May). Micro-expression detection in long videos using optical flow and recurrent neural networks. In 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019) (pp. 1-6). IEEE.

[37] Wang, S.-J., He, Y., Li, J., & Fu, X. (2021). MESNet: A convolutional neural network for spotting multi-scale micro-expression intervals in long videos. IEEE Transactions on Image Processing, 30, 3956-3969. https://doi.org/10.1109/TIP.2021.3064258

[38] Wang, S. J., Wu, S., & Fu, X. (2016, November). A main directional maximal difference analysis for spotting micro-expressions. In C.-S. Chen, J. Lu, & K.-K. Ma (Eds.), Computer Vision - ACCV 2016 Workshops. ACCV 2016. Lecture Notes in Computer Science (Vol. 10117, pp. 449-461). Springer, Cham.

[39] Wen, J., Yang, W., Wang, L., Wei, W., Tan, S., & Wu, Y. (2020, December). Cross-database micro expression recognition based on apex frame optical flow and multi-head self-attention. In International Symposium on Parallel Architectures, Algorithms and Programming (pp. 128-139). Springer, Singapore.

[40] Yan, W.-J., Li, X., Wang, S.-J., Zhao, G., Liu, Y.-J., Chen, Y.-H., & Fu, X. (2014). CASME II: An improved spontaneous micro-expression database and the baseline evaluation. PLoS ONE, 9(1), e86041.

[41] Yan, W.-J., Wu, Q., Liu, Y.-J., Wang, S.-J., & Fu, X. (2013, April). CASME database: A dataset of spontaneous micro-expressions collected from neutralized faces. In 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG) (pp. 1-7). IEEE.