ISSN 0439-755X
CN 11-1911/B

Acta Psychologica Sinica, 2020, Vol. 52, Issue 2: 139-148. doi: 10.3724/SP.J.1041.2020.00139

• Reports of Empirical Studies •

Comparing the attentional boost effect between classified learning and mixed learning

MENG Yingfang, YE Xiumin, MA Huijiao

  1. School of Psychology, Fujian Normal University, Fuzhou 350117, China
  • Received: 2019-03-25   Online: 2019-12-24   Published: 2020-02-25
  • Contact: Yingfang MENG, E-mail: 175695016@qq.com

Abstract:

Stimuli presented at the same time as to-be-detected targets are later recognized more accurately than stimuli presented with to-be-rejected distractors, a counterintuitive effect labeled the attentional boost effect (ABE). Spataro, Mulligan, Gabrielli and Rossi-Arnaud (2017) proposed the item-specific account, arguing that target detection mainly facilitates the processing of item-specific information rather than relational information. The item-specific account appears to have a broad scope of application. However, Spataro et al. (2017) based this account mainly on the different degrees to which test tasks depend on item-specific and relational information. This raises a question: if target detection mainly promotes the processing of item-specific information about the background stimulus, will the facilitating effect of target detection be reduced, or even disappear, when encoding of the background stimulus depends mainly on relational information? Addressing this issue can provide more direct evidence for the item-specific account of the ABE. In the present study, mixed learning and classified learning were used to emphasize the processing of item-specific information and relational information of background stimuli, respectively. In general, pictures and words carry different perceptual information: memory for pictures relies preferentially on image-based representations, whereas memory for words relies preferentially on semantic representations. Do these processing differences between words and pictures change the effects of classified and mixed learning on the ABE? To answer these questions, the current study conducted two experiments to test whether the ABE is affected by the different types of processing required when words and pictures serve as background stimuli.

Both experiments used a 2 (presentation mode: classified learning, mixed learning) × 3 (stimulus type: target, distractor, baseline) mixed design, with presentation mode as the between-subjects variable and stimulus type as the within-subjects variable. In Experiment 1, under classified learning, category words and category-unrelated words were presented in sequential groups, with the words within each group presented in random order. To enhance the effect of classification, a 3000 ms "blink" cue and a 1000 ms "continue" cue were inserted between every two groups of words. Under mixed learning, category words and category-unrelated words were presented in a single randomly intermixed sequence, with no extra intervals between groups. Sixty students participated in Experiment 1, and 78 students participated in Experiment 2. Participants were instructed to read each word aloud while simultaneously monitoring a small indicator above the word: they pressed the space bar as quickly as possible when the indicator was a "+" (a target) and withheld the response when the indicator was a "-" (a distractor) or when no indicator appeared (baseline). In Experiment 2, pictures (simple line drawings) were used as background stimuli; the task and procedure were otherwise the same as in Experiment 1.
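To make the two presentation modes concrete, the sketch below builds illustrative study lists in Python. It is not the authors' experimental code: the word pools, group size, and block ordering are hypothetical placeholders, and only the indicator mapping (space-bar response to "+", no response to "-" or to a missing indicator) and the 3000 ms / 1000 ms between-group cues of classified learning are taken from the description above.

    import random

    # Hypothetical stimulus pools; the real experiments used category words and
    # category-unrelated words (Experiment 1) or line drawings (Experiment 2).
    CATEGORY_WORDS = [f"cat_word_{i}" for i in range(24)]
    UNRELATED_WORDS = [f"unrel_word_{i}" for i in range(24)]
    INDICATORS = ["+", "-", None]  # target, distractor, baseline (no indicator)

    def make_trials(words):
        """Pair each word with a target, distractor, or no indicator (within-subjects factor)."""
        words = list(words)
        random.shuffle(words)
        trials = [{"word": w, "indicator": INDICATORS[i % 3]} for i, w in enumerate(words)]
        random.shuffle(trials)  # words within a block appear in random order
        return trials

    def classified_list(group_size=8):
        """Classified learning: category and unrelated words in separate sequential groups,
        with a 3000 ms 'blink' cue and a 1000 ms 'continue' cue between groups."""
        sequence = []
        for pool in (CATEGORY_WORDS, UNRELATED_WORDS):
            for start in range(0, len(pool), group_size):
                sequence.extend(make_trials(pool[start:start + group_size]))
                sequence.append({"cue": "blink", "duration_ms": 3000})
                sequence.append({"cue": "continue", "duration_ms": 1000})
        return sequence

    def mixed_list():
        """Mixed learning: category and unrelated words randomly intermixed, no extra intervals."""
        return make_trials(CATEGORY_WORDS + UNRELATED_WORDS)

    if __name__ == "__main__":
        for trial in mixed_list()[:5]:
            print(trial)  # e.g. {'word': 'unrel_word_3', 'indicator': '+'}

Presentation mode (which of the two list builders a participant receives) is the between-subjects factor; the indicator assigned to each word is the within-subjects factor.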

The main results were as follows. In Experiment 1, the ABE was robust only under mixed learning: the recognition rate of target-paired words was significantly higher than that of distractor-paired words (p = 0.004) and even reached the level of full attention (baseline words) (p = 0.95). The recognition rate of distractor-paired words was significantly lower than that of baseline words (p = 0.044), showing a typical distractor-inhibition effect. Moreover, recognition of target-paired words (p = 0.636) and baseline words (p = 0.697) did not differ between the two presentation modes, but the recognition rate of distractor-paired words under classified learning was significantly higher than that under mixed learning (p = 0.008). In Experiment 2, the ABE appeared under both classified and mixed learning, but the ABE under classified learning (10%) was smaller than that under mixed learning (16%). The recognition rate of target-paired pictures was even higher than that of baseline pictures, showing an absolute attentional boost effect. There was no significant difference between the recognition rates of target-paired pictures in the two presentation modes (p = 0.614). However, the recognition rates of distractor-paired pictures (p = 0.043) and baseline pictures (p = 0.036) differed between presentation modes: under classified learning, the recognition rates of distractor-paired pictures and baseline pictures were slightly higher than those under mixed learning.

The results suggest that the ABE is reduced under classified learning compared with mixed learning. Relative to pictures, the ABE for words is more vulnerable to classified learning, which can even make the ABE disappear. This may occur because participants tend to encode relational information during classified learning, which reduces the inhibitory effect of distractor rejection and thus shrinks the difference between target-paired and distractor-paired stimuli. Therefore, the current study provides more direct evidence for the item-specific account of the ABE.

Key words: attentional boost effect, item-specific information, item-relational information, item-specific account
