ISSN 0439-755X
CN 11-1911/B

Acta Psychologica Sinica ›› 2018, Vol. 50 ›› Issue (5): 483-493. doi: 10.3724/SP.J.1041.2018.00483


Visual and auditory verbal working memory affects visual attention in semantic matching

 LI Biqin1; LI Ling1; WANG Aijun2; ZHANG Ming2   

1 Lab of Psychology and Cognition Science of Jiangxi, School of Psychology, Jiangxi Normal University, Nanchang 330022, China
2 Department of Psychology, Research Center for Psychological and Behavioral Sciences, Soochow University, Suzhou 215123, China
• Received: 2017-08-17  Online: 2018-03-31  Published: 2018-05-25
• Contact: WANG Aijun, E-mail: ajwang@suda.edu.cn; LI Biqin, E-mail: cyrill_@163.com

Abstract: Previous studies have shown that information held in working memory (WM) can guide or capture attention during visual search in a relatively automatic way, even when it is irrelevant and detrimental to current task performance. Some researchers have proposed that a semantic match between WM contents and distractors can capture attention, just as a perceptual match does. As is well known, verbal WM contents can be stored from both visual and auditory inputs. Although the automatic influence of visual verbal WM on visual attention has been demonstrated, it remains unknown whether auditory verbal WM can automatically capture attention. Therefore, it is necessary to investigate attentional guidance by verbal WM contents. The present study included two experiments to explore these questions. In Experiment 1, the memory item was a visually presented Chinese character denoting a color, such as “红” (red). The participants were instructed to remember the character and to ignore potential distractors. Subsequently, they completed a visual search task to test whether the verbal WM contents could guide attention. The results showed that, compared with the control condition, visual search RTs were longer in the perceptual-matching and semantic-matching conditions, and the same pattern held for the fastest trials. Given that the memory item never matched the target in the search task, we suggest that visually presented verbal WM contents (vis-VWM) can automatically capture attention at both the perceptual and semantic levels. In Experiment 2, the memory item was presented auditorily via headphones (audi-VWM). The results showed that visual search RTs in the semantic-matching condition were shorter than those in the control and perceptual-matching conditions, and there were no significant differences among the other conditions. Meanwhile, when the shortest RTs were compared across conditions, RTs in the semantic-matching condition were longer than those in the control condition, suggesting that aurally presented verbal WM can capture attention at the semantic level in the fastest response trials. In conclusion, the present study demonstrated that visually presented verbal working memory can automatically capture attention at both the perceptual and semantic levels, verifying the hypothesis that the attentional capture effect occurs at early stages of attention. When the contents were aurally presented, however, they always captured attention at the earlier processing stage and could be rejected only at the later processing stage. Owing to modality specificity, attentional resources are distributed across sensory modalities, and the memory-matching distractors could be rejected at the later processing stage because sufficient cognitive resources were available.
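To make the two-experiment design concrete, the following is a minimal simulation sketch of the trial logic described above, not the authors' stimulus code; the condition names follow the abstract, while the baseline RT and effect size are illustrative assumptions.

    import random

    # Condition names follow the abstract: a search distractor may match
    # the remembered color word (e.g., "红", red) perceptually (same
    # visual feature), semantically (same meaning), or not at all.
    CONDITIONS = ["control", "perceptual-matching", "semantic-matching"]

    def simulate_trial(modality: str, condition: str, rng: random.Random) -> float:
        """Return a mock search RT (ms) for one trial.

        The offset only mirrors the direction of the reported Experiment 1
        effect: with visually presented memory items (vis-VWM), both
        matching conditions slow search relative to control. All numeric
        values are invented, not the paper's data.
        """
        rt = rng.gauss(650.0, 40.0)  # assumed baseline search RT
        if modality == "vis-VWM" and condition != "control":
            rt += 35.0  # assumed attention-capture cost
        return rt

    rng = random.Random(0)
    for condition in CONDITIONS:
        rts = [simulate_trial("vis-VWM", condition, rng) for _ in range(200)]
        print(f"vis-VWM {condition:20} mean RT ~ {sum(rts) / len(rts):.0f} ms")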

Key words: attention capture, verbal working memory, visual search, semantic matching
