%A LIU Juan, ZHENG Chanjin, LI Yunchuan, LIAN Xu %T IRT-based scoring methods for multidimensional forced choice tests %0 Journal Article %D 2022 %J Advances in Psychological Science %R 10.3724/SP.J.1042.2022.01410 %P 1410-1428 %V 30 %N 6 %U https://journal.psych.ac.cn/xlkxjz/CN/abstract/article_6280.shtml %8 2022-06-15 %X

Forced-choice (FC) tests are widely used in non-cognitive assessment because of their effectiveness in resisting faking and the response biases associated with the traditional Likert format. Traditional scoring of forced-choice tests produces ipsative data, which has been criticized as unsuitable for inter-individual comparisons. In recent years, the development of multiple forced-choice IRT models that allow researchers to obtain normative information from forced-choice tests has reignited the interest of researchers and practitioners in forced-choice testing. The six prevailing forced-choice IRT models in the existing literature can be classified by their decision model and their item response model. In terms of the decision model, the TIRT, RIM, and BRB-IRT models are built on Thurstone's Law of Comparative Judgment, whereas the MUPP framework and its derivatives adopt the Luce Choice Axiom. In terms of the item response model, the MUPP-GGUM and GGUM-RANK models apply to items with an unfolding response process, while the other forced-choice models apply to items with a dominance response process. The models can also be distinguished by their parameter estimation algorithms and procedures. MUPP-GGUM adopts a two-step strategy in which item parameters are pre-calibrated with Likert-type responses, which facilitates subsequent item bank management; the other models estimate parameters jointly. For joint estimation, TIRT relies on traditional algorithms, weighted least squares (WLS) and diagonally weighted least squares (DWLS), which are convenient to run in Mplus and relatively fast, but which suffer from poor convergence and heavy memory demands in high-dimensional settings. The other models use Markov chain Monte Carlo (MCMC) algorithms, which overcome the convergence and memory problems of the traditional algorithms but take considerably longer to run.
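To make the contrast between the two decision models above concrete, the equations below sketch the pairwise-preference probabilities in their commonly cited forms. The notation (statement utility means mu, loadings lambda, uniquenesses psi^2, latent traits eta or theta, single-statement endorsement probabilities P_i(1)) is standard for these model families and is introduced here only for illustration; it is not taken from this abstract.

% Thurstonian decision model (TIRT, RIM, BRB-IRT): statement i is preferred
% to statement k when its latent utility t_i = mu_i + lambda_i' * eta + e_i
% exceeds t_k, which yields a normal-ogive pairwise probability.
\[
P(i \succ k \mid \boldsymbol{\eta})
  = \Phi\!\left(
      \frac{(\mu_i - \mu_k) + (\boldsymbol{\lambda}_i - \boldsymbol{\lambda}_k)'\boldsymbol{\eta}}
           {\sqrt{\psi_i^{2} + \psi_k^{2}}}
    \right)
\]

% Luce Choice Axiom (MUPP framework): the pairwise preference is built from
% the independent single-statement endorsement probabilities P_i(1) and
% P_k(1); in MUPP-GGUM these come from the GGUM (unfolding response process).
\[
P(i \succ k \mid \boldsymbol{\theta})
  = \frac{P_i(1)\,\bigl(1 - P_k(1)\bigr)}
         {P_i(1)\,\bigl(1 - P_k(1)\bigr) + \bigl(1 - P_i(1)\bigr)\,P_k(1)}
\]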
Research on applications of forced-choice IRT models is summarized in three areas: parameter invariance testing, computerized adaptive testing (CAT), and validity studies. Parameter invariance testing can be divided into cross-block invariance and cross-group invariance (the latter also known as differential item functioning, DIF); current research focuses mainly on the latter, and DIF detection methods already exist for TIRT and RIM. Future research should enrich and refine these existing DIF detection methods, and develop methods for the other forced-choice models, so that DIF from multiple sources can be detected more sensitively. Non-cognitive tests are usually high-dimensional, and the test-length problem caused by high dimensionality can be naturally addressed by CAT; studies have already explored suitable item selection strategies for the MUPP-GGUM, GGUM-RANK, and RIM models. Future research can continue to explore item selection strategies for different forced-choice IRT models so that forced-choice CAT can balance measurement precision and test length in high-dimensional contexts. Validity studies examine whether the scores obtained from forced-choice IRT models reflect individuals' true characteristics, since unvalidated tests carry serious risks of misinterpretation. Some studies have compared IRT scores, traditional scores, and Likert-type scores to examine whether IRT scores yield results similar to Likert scores and whether they recover latent traits better than traditional scores. However, using Likert-scale scores as the criterion may introduce response bias as a source of error, and future research should focus on obtaining purer, more convincing criteria.
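As a purely generic illustration of CAT item selection for forced-choice blocks (not the specific strategies studied for MUPP-GGUM, GGUM-RANK, or RIM), the Python sketch below picks the next block by D-optimality, i.e., by maximizing the determinant of the accumulated Fisher information at the interim trait estimate. The function block_information is a hypothetical stand-in for a model-specific routine that returns a block's information matrix; every name here is assumed for illustration rather than drawn from the abstract.

import numpy as np

def d_optimal_block(theta_hat, info_so_far, candidate_blocks, block_information):
    """Select the candidate block whose Fisher information, added to the
    information accumulated so far, maximizes the log-determinant of the
    total information matrix (D-optimality)."""
    best_block, best_logdet = None, -np.inf
    for block in candidate_blocks:
        total_info = info_so_far + block_information(block, theta_hat)
        sign, logdet = np.linalg.slogdet(total_info)
        if sign > 0 and logdet > best_logdet:
            best_block, best_logdet = block, logdet
    return best_block

# Toy usage with 3 latent traits and made-up block information matrices.
rng = np.random.default_rng(0)

def toy_block_information(block, theta_hat):
    a = rng.normal(size=(3, 2))      # pretend loading structure for this block
    return a @ a.T                   # positive semi-definite by construction

prior_info = np.eye(3)               # e.g., information contributed by a prior
next_block = d_optimal_block(np.zeros(3), prior_info, range(5), toy_block_information)
print("next block:", next_block)

D-optimality is only one common multidimensional criterion; in practice, content balancing and exposure control would also need to be layered on top of whatever selection rule a given forced-choice model uses.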