ISSN 0439-755X
CN 11-1911/B
Sponsored by: Chinese Psychological Society;
   Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Acta Psychologica Sinica ›› 2006, Vol. 38 ›› Issue (03): 461-467.

Item Selection Strategies of Computerized Adaptive Testing based on Graded Response Model

Chen Ping, Ding Shuliang, Lin Haijing, Zhou Jie

  1. Computer Information Engineering College, Jiangxi Normal University, Nanchang 330027, China
  • Received: 2005-05-09 Online: 2006-05-30 Published: 2006-05-30
  • Contact: Ding Shuliang

Abstract: Computerized Adaptive Testing (CAT) is one of the most important testing innovations resulting from the advancement of Item Response Theory (IRT). Consequently, many large-scale tests such as the GRE and TOEFL have been transformed from their original paper-and-pencil versions to the current CAT versions. However, one limitation of these CAT tests is their reliance on dichotomous IRT models, which require that each item be scored as either correct or incorrect. Many measurement applications produce polytomous item response data. In addition, the information provided by a polytomous item is considerably greater than that provided by a dichotomously scored item. Therefore, for the purpose of improving test quality, it is important to design CATs based on polytomous IRT models. This research is based on the Graded Response Model (GRM).
Item selection strategy (ISS) is an important component of CAT. Its performance directly affects the security, efficiency, and precision of the test. Thus, the ISS becomes one of the central issues in CATs based on the GRM. It is well known that the goal of an ISS is to administer the next unused item remaining in the item bank that best fits the examinee's current ability estimate. In dichotomous IRT models, every item has only one difficulty parameter, and the item whose difficulty matches the examinee's current ability estimate is considered the best-fitting item. However, in the GRM, each item has more than two ordered categories and no single value to represent the item difficulty. Consequently, some researchers have employed the average or the median difficulty value across categories as the difficulty estimate for the item. These two choices in effect introduce two corresponding ISSs.
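As a rough sketch (not the authors' code), the GRM category probabilities and a matching-based ISS of the kind described above can be illustrated in Python. The representation of an item as an `(a, b)` pair and the use of the median boundary difficulty are illustrative assumptions:

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category response probabilities under Samejima's Graded Response Model.

    theta : examinee ability
    a     : item discrimination
    b     : ordered boundary difficulties (length m gives m + 1 categories)
    """
    b = np.asarray(b, dtype=float)
    # Boundary probabilities P*_k = P(score >= k), with P*_0 = 1 and P*_{m+1} = 0.
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    p_star = np.concatenate(([1.0], p_star, [0.0]))
    return p_star[:-1] - p_star[1:]          # P_k = P*_k - P*_{k+1}

def select_item(theta_hat, bank, used):
    """Pick the unused item whose median boundary difficulty is closest
    to the current ability estimate (one matching-based ISS variant)."""
    best, best_dist = None, np.inf
    for j, (a, b) in enumerate(bank):
        if j in used:
            continue
        dist = abs(np.median(b) - theta_hat)
        if dist < best_dist:
            best, best_dist = j, dist
    return best
```

Replacing `np.median(b)` with `np.mean(b)` yields the average-difficulty variant mentioned above; the two functions together show how a polytomous item bank can drive adaptive item selection.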
In this study, we used computer simulation to compare four ISSs based on the GRM. We also discussed the effect of a "shadow pool" on the uniformity of pool usage, as well as the influence of different item parameter distributions and different ability estimation methods on the evaluation criteria of CAT. In the simulation, the Monte Carlo method was adopted to simulate the entire CAT process; 1000 examinees drawn from a standard normal distribution and four 1000-item pools with different item parameter distributions were also simulated. The simulation assumes that each polytomous item comprises six ordered categories. In addition, ability estimates were derived using two methods: expected a posteriori Bayesian estimation (EAP) and maximum likelihood estimation (MLE). In MLE, the Newton-Raphson iteration method and the Fisher scoring method were employed, respectively, to solve the likelihood equation. Moreover, the CAT process was simulated 30 times for each examinee to eliminate random error. The ISSs were evaluated by four indices commonly used in CAT, covering four aspects: the accuracy of ability estimation, the stability of the ISS, item pool usage, and test efficiency. Simulation results showed that the ISS matching the examinee's current trait estimate with the difficulty values across categories received the best overall evaluation. Adding a "shadow pool" to an ISS markedly improved the uniformity of pool utilization. Finally, both the distribution of the item parameters and the ability estimation method affected the evaluation indices of CAT.
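The EAP method mentioned above can be sketched as a posterior mean over a quadrature grid with a standard normal prior. This is a minimal illustration under assumed settings (grid range, `(a, b)` item tuples), not the study's actual implementation:

```python
import numpy as np

def eap_estimate(responses, items, n_quad=61):
    """EAP ability estimate under the GRM with a standard normal prior.

    responses : observed category scores (0-based), one per administered item
    items     : list of (a, b) tuples; b holds the boundary difficulties
    """
    theta = np.linspace(-4.0, 4.0, n_quad)        # quadrature grid
    prior = np.exp(-0.5 * theta**2)               # N(0, 1) density up to a constant
    like = np.ones_like(theta)
    for x, (a, b) in zip(responses, items):
        b = np.asarray(b, dtype=float)
        # Boundary probabilities for every grid point at once (broadcasting).
        p_star = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
        p_star = np.hstack([np.ones((n_quad, 1)), p_star, np.zeros((n_quad, 1))])
        probs = p_star[:, :-1] - p_star[:, 1:]    # category probabilities
        like *= probs[:, x]                       # accumulate the likelihood
    post = prior * like
    return np.sum(theta * post) / np.sum(post)    # posterior mean
```

Unlike the MLE variants solved by Newton-Raphson or Fisher scoring, this quadrature-based EAP needs no iteration and always returns a finite estimate, which is one reason it is popular early in a CAT when few responses are available.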

Key words: graded response model, computerized adaptive testing, item selection strategy, shadow pool

CLC Number: