2006, Vol. 38, Issue (03): 461-467.
Chen Ping, Ding Shuliang, Lin, Zhou Jie
Abstract: Computerized Adaptive Testing (CAT) is one of the most important testing innovations to result from the advancement of Item Response Theory (IRT). Consequently, many large-scale tests such as the GRE and TOEFL have been transformed from their original paper-and-pencil versions to current CAT versions. However, one limitation of these CATs is their reliance on dichotomous IRT models, which require that each item be scored as either correct or incorrect. Many measurement applications produce polytomous item response data, and a polytomous item provides considerably more information than a dichotomously scored item. Therefore, to improve test quality, it is important to design CATs based on polytomous IRT models. This research is based on the Graded Response Model (GRM). The item selection strategy (ISS) is an important component of CAT: its performance directly affects the security, efficiency, and precision of the test. Thus, the ISS is one of the central issues in CATs based on the GRM. It is well known that the goal of an ISS is to administer the unused item remaining in the item bank that best fits the examinee's current ability estimate. In dichotomous IRT models, every item has only one difficulty parameter, and the item whose difficulty matches the examinee's current ability estimate is considered the best-fitting item. In the GRM, however, each item has more than two ordered categories and no single value represents the item's difficulty. Consequently, some researchers have employed the average or the median of the category difficulty values as the difficulty estimate for the item, which in effect introduces two corresponding ISSs. In this study, we used computer simulation to compare four ISSs based on the GRM. We also examined the effect of a "shadow pool" on the uniformity of item pool usage, as well as the influence of different item parameter distributions and different ability estimation methods on the evaluation criteria of CAT. In the simulation, the Monte Carlo method was adopted to simulate the entire CAT process; 1,000 examinees drawn from the standard normal distribution and four item pools of 1,000 items each, with different item parameter distributions, were simulated. Each polytomous item was assumed to comprise six ordered categories. Ability estimates were obtained with two methods: expected a posteriori (EAP) Bayesian estimation and maximum likelihood estimation (MLE). For MLE, the Newton-Raphson iteration and the Fisher scoring iteration were employed, respectively, to solve the likelihood equation. Moreover, the CAT process was replicated 30 times for each examinee to reduce random error. The ISSs were evaluated by four indices commonly used in CAT, covering four aspects: the accuracy of ability estimation, the stability of the ISS, the usage of the item pool, and test efficiency. Simulation results showed that the ISSs that match the estimate of an examinee's current trait level with the difficulty values across categories performed adequately. Setting up a "shadow pool" in the ISS improved the uniformity of item pool utilization. Finally, different item parameter distributions and different ability estimation methods affected the evaluation indices of CAT.
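To make the matching-based item selection rule and the EAP ability update concrete, the following Python sketch simulates a toy GRM-based CAT. It is a minimal illustration, not the authors' simulation program: the pool size, test length, parameter ranges, quadrature grid, and all function names are assumptions introduced here.

import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities for one item under Samejima's Graded Response Model.
    theta: ability; a: discrimination; b: ordered category thresholds (5 thresholds for 6 categories)."""
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))   # boundary probs P(score >= k)
    p_star = np.concatenate(([1.0], p_star, [0.0]))
    return p_star[:-1] - p_star[1:]                               # category probs P(score == k)

def select_item_median_b(theta_hat, b_pool, used):
    """Matching-based ISS: pick the unused item whose median category difficulty
    is closest to the current ability estimate."""
    distance = np.abs(np.median(b_pool, axis=1) - theta_hat)
    distance[list(used)] = np.inf            # exclude items already administered
    return int(np.argmin(distance))

def eap_estimate(responses, a_admin, b_admin, grid=np.linspace(-4, 4, 41)):
    """Expected a posteriori (EAP) ability estimate with a standard normal prior."""
    prior = np.exp(-0.5 * grid ** 2)
    like = np.ones_like(grid)
    for x, a, b in zip(responses, a_admin, b_admin):
        like *= np.array([grm_category_probs(t, a, b)[x] for t in grid])
    post = like * prior
    return float(np.sum(grid * post) / np.sum(post))

# Toy CAT loop over a simulated six-category item pool (illustrative sizes only).
rng = np.random.default_rng(0)
n_items, n_cats, test_length = 200, 6, 20
a_pool = rng.uniform(0.8, 2.0, n_items)
b_pool = np.sort(rng.normal(0.0, 1.0, (n_items, n_cats - 1)), axis=1)

theta_true, theta_hat = 0.5, 0.0
used, responses, a_admin, b_admin = [], [], [], []
for _ in range(test_length):
    j = select_item_median_b(theta_hat, b_pool, used)
    x = rng.choice(n_cats, p=grm_category_probs(theta_true, a_pool[j], b_pool[j]))
    used.append(j); responses.append(x)
    a_admin.append(a_pool[j]); b_admin.append(b_pool[j])
    theta_hat = eap_estimate(responses, a_admin, b_admin)

print("final EAP estimate:", round(theta_hat, 3))

Replacing np.median with np.mean in the selection rule gives the average-difficulty variant, which yields the second matching-based ISS mentioned above; the EAP update could likewise be swapped for MLE solved by Newton-Raphson or Fisher scoring, and a "shadow pool" constraint could be layered onto the selection step, but those variants are omitted here for brevity.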
Key words: graded response model, computerized adaptive testing, item selection strategy, shadow pool
CLC Number: B841
Chen Ping, Ding Shuliang, Lin, Zhou Jie. (2006). Item Selection Strategies of Computerized Adaptive Testing based on Graded Response Model. Acta Psychologica Sinica, 38(03), 461-467.
URL: https://journal.psych.ac.cn/acps/EN/Y2006/V38/I03/461