Expression-based grouping in multiple identity tracking
LEI Huanyu1; WEI Liuqing1; LYU Chuang1; ZHANG Xuemin1,2,3; YAN Xiaoqian1,4
(1 School of Psychology, Beijing Normal University, Beijing 100875, China)
(2 National Key Lab of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing 100875, China)
(3 Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing, China)
(4 Department of Psychology, University of York, York YO10 5DD, UK)
The multiple-object tracking (MOT) task, proposed by Pylyshyn and Storm (1988), requires participants to simultaneously track the positions of several moving visual objects among identical distractors. Yantis (1992) found that participants used a perceptual grouping strategy when tracking multiple moving objects, suggesting that moving objects with distinct identities can facilitate the use of such a strategy. In the present study, we used facial expressions of emotion as object identities to investigate grouping strategies during multiple-identity tracking (MIT). Because facial expressions of emotion play an important role in daily life, understanding whether emotion processing affects visual object tracking is of both theoretical and practical importance, compared with studies that use physical properties as object identities.
The present study comprised two experiments; the only difference between them was that the schematic faces in Experiment 2 had no eyebrows, allowing us to examine whether eyebrows affected facial expression processing. We recruited 29 undergraduates (11 males) for Experiment 1 and 16 (7 males) for Experiment 2 from universities in Beijing. All participants gave informed consent and completed the Self-rating Depression Scale and the State-Trait Anxiety Inventory before the experiments. In each trial, eight objects appeared on the screen as blank squares, and four of them were cued as targets by red frames for 1.5 seconds. All objects then turned into expression pictures and moved randomly and independently for 5 to 6 seconds, after which they stopped and turned back into blank squares. Participants first selected the four targets and then reported the facial expression of each selected target. There were three conditions: (1) grouping (Target Grouping, TG); (2) pairing (Target-Distractor Grouping, TDG); and (3) homogeneous, in which all objects always had the same expression (positive, negative, or neutral). The TG condition included four subcategories: (1) positive targets with negative distractors; (2) positive targets with neutral distractors; (3) negative targets with positive distractors; and (4) negative targets with neutral distractors. The TDG condition included three subcategories, in which both targets and distractors comprised (1) positive and neutral expressions, (2) positive and negative expressions, or (3) negative and neutral expressions. We then conducted analyses to answer three questions: (1) Did a grouping strategy improve tracking performance compared with the homogeneous condition? (2) Did a pairing strategy affect overall tracking performance? (3) Did the eyebrows of the face images affect facial expression processing and, in turn, tracking performance?
We found similar results in the two experiments: (1) grouping significantly improved tracking performance compared with the homogeneous condition; (2) targets with negative expressions significantly improved tracking performance compared with either positive targets or the homogeneous conditions, indicating an attentional bias toward negative expressions; (3) shared identities between targets and distractors impaired tracking performance compared with the homogeneous condition; and (4) the absence of eyebrows in the face images did not affect the processing of negative expressions.
In conclusion, we examined grouping strategies in MIT using facial expressions as object identities. Targets with negative expressions significantly improved tracking performance compared with positive and neutral expressions, indicating an attentional bias toward negative expressions. Beyond the location and physical-property information shown to be effective in previous studies, people can also use more ecologically valid facial expression information during multiple-object tracking. Our study also provides a new way of investigating the perception of facial expressions of emotion in dynamic scenes.