ISSN 0439-755X
CN 11-1911/B

2011, Vol. 43, Issue (07): 836-850.


Item Replenishing in Cognitive Diagnostic Computerized Adaptive Testing

CHEN Ping; XIN Tao

  1. Institute of Developmental Psychology, Beijing Normal University, Beijing 100875, China
  • Received: 2010-11-18 Published: 2011-07-30 Online: 2011-07-30
  • Contact: XIN Tao

Abstract: Item replenishing is essential for item bank maintenance and development in cognitive diagnostic computerized adaptive testing (CD-CAT). Compared with item replenishing in regular CAT, item replenishing in CD-CAT is more complicated because it requires constructing the Q matrix (Embretson, 1984; Tatsuoka, 1995) corresponding to the new items (denoted as Qnew_item). However, Qnew_item is usually constructed manually by content experts and psychometricians, which raises two issues: first, the attribute identification task costs experts a great deal of time and effort, especially when the number of new items is large; second, the Qnew_item identified by experts is not guaranteed to be entirely correct because experts often disagree with one another. Therefore, this study borrowed the main idea of the joint maximum likelihood estimation (JMLE) method in unidimensional item response theory (IRT) and proposed the joint estimation algorithm (JEA), which relies solely on examinees’ responses to the operational and new items to jointly and automatically estimate Qnew_item and the item parameters of the new items in the context of CD-CAT under the Deterministic Inputs, Noisy “and” Gate (DINA) model.
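The DINA model mentioned above can be illustrated with a minimal sketch (the function name and the three-attribute example below are hypothetical illustrations, not taken from the paper): an examinee answers an item correctly with probability 1 − s when he or she has mastered every attribute required by the item's Q-matrix row, and with probability g otherwise.

```python
import numpy as np

def dina_prob(alpha, q, g, s):
    """P(correct) under the DINA model for one examinee-item pair.

    alpha : 0/1 attribute-mastery vector of the examinee
    q     : 0/1 row of the Q matrix for the item
    g, s  : guessing and slipping parameters
    """
    # eta = 1 iff the examinee masters every attribute the item requires
    eta = int(np.all(alpha >= q))
    return (1 - s) * eta + g * (1 - eta)

# Hypothetical example: the item requires attributes 1 and 2 (of 3).
# An examinee who masters both answers correctly with probability 1 - s;
# an examinee missing a required attribute succeeds only by guessing (g).
p_master = dina_prob(np.array([1, 1, 0]), np.array([1, 1, 0]), g=0.15, s=0.10)
p_nonmaster = dina_prob(np.array([0, 1, 0]), np.array([1, 1, 0]), g=0.15, s=0.10)
```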
A simulation study was conducted to investigate whether the JEA algorithm could accurately and efficiently estimate Qnew_item and the item parameters of the new items under different sample sizes and different ranges of item parameters; the new items were seeded into random positions of examinees’ CD-CAT tests. Four samples (of 100, 300, 1000, and 3000 examinees, respectively) were simulated, and each examinee had a 50% probability of mastering each attribute. In addition, three item banks of 360 items each were simulated; their item parameters were randomly drawn from U(0.05, 0.25), U(0.15, 0.35), and U(0.25, 0.45), respectively, and the three banks shared the same Q matrix. Twenty new items were simulated, the Qnew_item was constructed by randomly selecting 20 rows from the Q matrix, and the item parameters of the new items were randomly drawn from U(0.05, 0.25), U(0.15, 0.35), or U(0.25, 0.45), matching the parameter range of the corresponding operational items. The Shannon Entropy method was employed to select the next available item from the item bank, the Maximum A Posteriori method was used to update the knowledge state estimates of examinees, and a fixed-length stopping rule with a test length of 20 was adopted.
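The Shannon Entropy item selection and Maximum A Posteriori update described above can be sketched as follows. This is a simplified illustration with hypothetical function names, assuming a discrete posterior over knowledge states and a table of P(correct | state) per item; the paper's actual implementation may differ.

```python
import numpy as np

def posterior_update(prior, p_correct, x):
    """Bayes update of the posterior over knowledge states after response x (0/1).

    prior     : current posterior over the K knowledge states
    p_correct : P(correct | state) for the administered item, length K
    """
    like = p_correct if x == 1 else 1 - p_correct
    post = prior * like
    return post / post.sum()

def map_state(posterior):
    """Maximum A Posteriori estimate: index of the most probable knowledge state."""
    return int(np.argmax(posterior))

def shannon_entropy_select(posterior, bank_probs, available):
    """Pick the available item minimizing the expected entropy of the posterior."""
    best, best_h = None, np.inf
    for j in available:
        p = bank_probs[j]                    # P(correct | state) for item j
        p_x1 = float(posterior @ p)          # marginal probability of a correct answer
        h = 0.0
        for x, p_x in ((1, p_x1), (0, 1.0 - p_x1)):
            if p_x <= 0.0:
                continue
            post = posterior_update(posterior, p, x)
            h += p_x * -np.sum(post * np.log(post + 1e-12))
        if h < best_h:
            best, best_h = j, h
    return best
```

Under the fixed-length rule described above, one test amounts to repeating select-administer-update 20 times and reporting the MAP knowledge state at the end.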
The results indicated that the JEA algorithm recovered Qnew_item and the item parameters of the new items accurately, especially when the item parameters were relatively small and the sample sizes were relatively large. As the sample size increased, the estimation accuracy of the attribute vectors increased monotonically under all conditions, and the estimation errors of the guessing and slipping parameters decreased monotonically under most conditions. Sample size, item parameter size, and the initial item parameters all affected the performance of the JEA.
Although the results of the simulation study are encouraging, further investigations are proposed, for example of other cognitive diagnostic models and of different attribute hierarchical structures.

Key words: cognitive diagnostic computerized adaptive testing, item replenishing, on-line calibration, automatic attribute identification, new item