A STUDY OF SCORE EQUATING IN THE COLLEGE ENGLISH TEST: A NEW APPROACH BASED ON “ANCHOR ITEMS” AND TWO-PARAMETER IRT MODEL
2005, 37 (02):
In China’s College English Test (CET), the Rasch model has been used in the score equating procedure for 15 years, and a large body of score equating data has been accumulated. This paper discusses in detail some demerits of the score equating method based on the Rasch model and introduces a new score equating approach based on “anchor items” and the two-parameter IRT (Item Response Theory) model.
The old score equating method based on the Rasch model rests on three assumptions: 1) The students in the control group give equal attention to both the formal and the control papers. 2) There has been no leakage of the items in either paper. 3) All items have the same discrimination index.
A failure of assumption 1) would usually occur because the students feel that the control paper is an extra burden and often do not give it the same importance as the formal paper. In that case their marks on the control paper would be lower than their true performance warrants. Even if the two papers were, in fact, equally difficult, the students would score lower on the control paper, making it appear harder. This would make the formal paper seem relatively easier, and in the process of equating the students’ marks would be reduced.
If assumption 2) is not true and the control paper has not been kept confidential, the effect would be in the opposite direction. The candidates would do better than they should on the control paper, making their marks on that paper relatively high in comparison with the formal test. The formal test would therefore appear to the equating algorithm to be harder than it really is, and all the students’ marks would be increased. Note that this would hold even if only a few items were leaked. For example, if just one reading passage were leaked together with its associated items, those five items would be scored correct for students who might otherwise have missed at least some of them. Since reading items carry double weight, this could falsely increase the score of weaker students by up to 10 marks. Of course, the effect on the mean score would be smaller, since many students would have answered these items correctly anyway.
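The arithmetic of the leakage effect described above can be sketched as follows. The numbers are hypothetical, but the weighting (five reading items, each worth two marks) follows the example in the text:

```python
# Illustrative sketch (hypothetical numbers): effect of a leaked
# five-item reading passage where each reading item carries double weight.
def leaked_score_inflation(n_leaked_items, weight, p_correct_without_leak):
    """Expected mark inflation per student: each leaked item is answered
    correctly for certain, instead of with probability p_correct_without_leak."""
    return n_leaked_items * weight * (1.0 - p_correct_without_leak)

# A weak student who would otherwise have answered none of the five
# items correctly gains the full 5 items x 2 marks = 10 marks.
worst_case = leaked_score_inflation(5, 2, 0.0)   # → 10.0

# If students would, on average, have answered 60% of those items
# correctly anyway, the inflation of the mean score is smaller.
mean_effect = leaked_score_inflation(5, 2, 0.6)  # → 4.0
```

This makes concrete why the worst-case inflation (10 marks) far exceeds the shift in the mean, exactly as the text notes.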
It might also be argued that, since there is evidence that the items do not all have the same discrimination index, a two- or three-parameter IRT model should be used.
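The difference between the Rasch model and the two-parameter model can be shown with the item characteristic curve. The sketch below uses the plain logistic form without the optional 1.7 scaling constant; the ability and parameter values are illustrative:

```python
import math

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b))).
    The Rasch model is the special case with a = 1 for every item,
    which is exactly the assumption the text questions."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Two items with the same difficulty b = 0 but different discrimination.
# Under the Rasch assumption both curves would be identical.
low_disc  = icc_2pl(1.0, a=0.5, b=0.0)   # ≈ 0.62
high_disc = icc_2pl(1.0, a=2.0, b=0.0)   # ≈ 0.88
# The more discriminating item separates an able examinee (theta = 1)
# from the b = 0 difficulty point much more sharply.
```

When discrimination truly varies across items, forcing a = 1 misattributes that variation to difficulty, which is one source of the equating bias discussed above.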
It has to be accepted that any equating step will increase the standard error of measurement (SEM) of the final score, because the parameters used for equating are themselves estimated with some standard error. However, this increase will usually be small (given the sample sizes of several hundred used for model fitting) and should be more than compensated for by the reduction in the “between-forms” bias, which the equating procedure is designed to correct.
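The paper does not spell out its linking procedure here, but a common way to put two forms on one scale via anchor items under the two-parameter model is mean/sigma linking; the sketch below (with invented anchor difficulties) shows the idea only and is not the authors’ implementation:

```python
import statistics

def mean_sigma_link(b_anchor_new, b_anchor_old):
    """Mean/sigma linking from anchor-item difficulty estimates: find
    constants A, B such that b_old ≈ A * b_new + B. New-form parameters
    are then rescaled as b' = A * b + B and a' = a / A; abilities as
    theta' = A * theta + B."""
    A = statistics.stdev(b_anchor_old) / statistics.stdev(b_anchor_new)
    B = statistics.mean(b_anchor_old) - A * statistics.mean(b_anchor_new)
    return A, B

# Hypothetical anchor difficulties, each estimated on its own form's scale.
b_new = [-1.0, -0.2, 0.4, 1.1]
b_old = [-0.8,  0.0, 0.6, 1.3]
A, B = mean_sigma_link(b_new, b_old)
theta_on_old_scale = A * 0.5 + B  # a new-form ability placed on the old scale
```

The standard errors of A and B are the concrete source of the small SEM increase the text mentions: they shrink as the number of anchor items and the calibration sample grow.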
In this paper, a pilot study using real CET test data is reported, with satisfactory score equating results.