ISSN 0439-755X
CN 11-1911/B

Acta Psychologica Sinica ›› 2022, Vol. 54 ›› Issue (9): 1137-1150. doi: 10.3724/SP.J.1041.2022.01137

• Reports of Empirical Studies •

Nonparametric cognitive diagnostic computerized adaptive testing using multiple-choice option information

SUN Xiaojian1,2,3, GUO Lei3,4   

  1. School of Mathematics and Statistics, Southwest University, Chongqing 400715, China
  2. Basic Education Research Centre, Southwest University, Chongqing 400715, China
  3. Southwest University Branch, Collaborative Innovation Center of Assessment for Basic Education Quality, Chongqing 400715, China
  4. Faculty of Psychology, Southwest University, Chongqing 400715, China
  • Published: 2022-09-25  Online: 2022-07-21

Abstract:

Most existing cognitive diagnostic computerized adaptive testing (CD-CAT) item selection methods ignore the diagnostic information that distractors provide for multiple-choice (MC) items; consequently, useful information is missed and resources are wasted. To overcome this, researchers proposed the Jensen-Shannon divergence (JSD) strategy to select items under the MC-DINA model. However, the JSD strategy requires a large sample to obtain reliable item parameter estimates before the formal test, which could compromise the security of the items in the bank. By contrast, the nonparametric method requires no parameter calibration before the formal test and can therefore be used in small-scale educational programs.
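As a rough illustration (not the paper's implementation), the quantity the JSD strategy maximizes is a weighted Jensen-Shannon divergence among the predicted option-response distributions of an item under the candidate attribute profiles. A minimal sketch, with hypothetical function names and toy two-option distributions:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete probability vector."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def jsd(dists, weights):
    """Weighted Jensen-Shannon divergence among several distributions.

    dists   : option-response probability vectors, one per candidate
              attribute profile (hypothetical inputs for illustration)
    weights : prior/posterior weights over the candidate profiles (sum to 1)
    """
    # Entropy of the weighted mixture minus the weighted entropies.
    mixture = [sum(w * p[i] for w, p in zip(weights, dists))
               for i in range(len(dists[0]))]
    return entropy(mixture) - sum(w * entropy(p) for w, p in zip(weights, dists))

# Two profiles that predict opposite options are maximally informative:
print(jsd([[1.0, 0.0], [0.0, 1.0]], [0.5, 0.5]))  # 1.0 bit
# Two profiles that predict identical options carry no information:
print(jsd([[0.5, 0.5], [0.5, 0.5]], [0.5, 0.5]))  # 0.0
```

In a parametric JSD-based CD-CAT, the item with the largest such divergence over the current posterior would be administered next; computing `dists` reliably is exactly what requires the pre-calibrated item parameters.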
The current study proposes two nonparametric item selection methods (i.e., HDDmc and JDDmc) for CD-CAT with MC items as well as two termination rules (i.e., MR and DR) for variable-length CD-CAT with MC items. Two simulation studies were conducted to examine the performance of these nonparametric item selection methods and termination rules.
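For intuition about the nonparametric approach, a generic classification step of the kind such methods build on — a plain Hamming-distance rule with conjunctive (DINA-type) ideal responses, not the authors' exact HDDmc or JDDmc — can be sketched as follows; all names are our own:

```python
from itertools import product

def ideal_response(alpha, q_row):
    """Conjunctive ideal response: 1 iff the examinee masters every
    attribute the item requires (DINA-type, parameter-free)."""
    return int(all(a >= q for a, q in zip(alpha, q_row)))

def hamming_classify(responses, Q, K):
    """Return the attribute profile whose ideal-response vector is
    closest in Hamming distance to the observed responses.
    No item parameter calibration is needed."""
    best, best_d = None, None
    for alpha in product([0, 1], repeat=K):           # all 2^K profiles
        ideal = [ideal_response(alpha, q) for q in Q]
        d = sum(r != e for r, e in zip(responses, ideal))
        if best_d is None or d < best_d:
            best, best_d = alpha, d
    return best

# Toy example: 3 items, 2 attributes; the examinee answers only the
# item requiring attribute 1 correctly.
Q = [[1, 0], [0, 1], [1, 1]]
print(hamming_classify([1, 0, 0], Q, 2))  # (1, 0)
```

A nonparametric item selection method would then, at each step, pick the item whose ideal responses best separate the attribute profiles still consistent with the observed answers; the HDDmc and JDDmc extend this idea to the full option vector of MC items.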
The first study examined the performance of the HDDmc and JDDmc with fixed-length CD-CAT. Six factors were manipulated: the number of attributes (K = 4 vs. 6), the structure of the Q-matrix (simple vs. complex), the quality of the item bank (high vs. low vs. mixed), the distribution of the attribute profiles (multivariate normal threshold model vs. discrete uniform distribution), the test length (two, three, or four times K), and the item selection method (HDDmc vs. JDDmc vs. JSD). Of these, item selection method was the within-group variable, and the rest were between-group variables. Figure 1 shows that the HDDmc and JDDmc produced higher pattern match ratios (PMRs) than the JSD method under most conditions, and produced similar PMRs to each other under all conditions. Moreover, the HDDmc and JDDmc produced more even item exposure distributions than the JSD method with respect to the test overlap rate, the underused item rate, and the overused item rate.
The second simulation study investigated the performance of the MR and DR with variable-length CD-CAT. Six factors were again manipulated: the settings for the number of attributes, the structure of the Q-matrix, the quality of the item bank, and the distribution of the attribute profiles were the same as in the first study; the other two factors were the termination rule (MR, DR, D1, and D3) and the item selection method (HDDmc and JDDmc). The first four were between-group variables, while the termination rule and item selection method were within-group variables. Table 1 shows that (1) the HDDmc and JDDmc yielded higher PMRs under the MR and DR rules than under the D1 and D3 rules, and (2) they yielded longer test lengths under the MR and DR rules than under the D1 and D3 rules, especially for the JDDmc method.
In sum, both nonparametric item selection methods and the two new termination rules proved appropriate for CD-CAT with MC items, and they can be used to balance the trade-off between measurement accuracy and item exposure.


Key words: cognitive diagnostic computerized adaptive testing, multiple-choice items, nonparametric item selection method, termination rule