[1] Borghans, L., & Schils, T. (2012). The leaning tower of PISA: Decomposing achievement test scores into cognitive and noncognitive components. Maastricht, The Netherlands: School of Business and Economics, Maastricht University.
[2] Clark, M. E., Gironda, R. J., & Young, R. W. (2003). Detection of back random responding: Effectiveness of MMPI-2 and Personality Assessment Inventory validity indices. Psychological Assessment, 15(2), 223-234.
[3] Feinberg, R., & Jurich, D. (2018, April). Using rapid responses to evaluate test speededness. Paper presented at the meeting of the National Council on Measurement in Education (NCME), New York, NY.
[4] Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7(4), 457-472.
[5] Hong, M., Rebouças, D. A., & Cheng, Y. (2021). Robust estimation for response time modeling. Journal of Educational Measurement, 58(2), 262-280.
[6] Köhler, C., Pohl, S., & Carstensen, C. H. (2017). Dealing with item nonresponse in large-scale cognitive assessments: The impact of missing data methods on estimated explanatory relationships. Journal of Educational Measurement, 54(4), 397-419.
[7] Liu, Y., Cheng, Y., & Liu, H. (2020). Identifying effortful individuals with mixture modeling response accuracy and response time simultaneously to improve item parameter estimation. Educational and Psychological Measurement, 80(4), 775-807.
[8] Liu, Y., & Liu, H. (2021). Detecting noneffortful responses based on a residual method using an iterative purification process. Journal of Educational and Behavioral Statistics, 46(6), 717-752.
[9] Lu, J., Wang, C., Zhang, J., & Tao, J. (2020). A mixture model for responses and response times with a higher-order ability structure to detect rapid guessing behaviour. British Journal of Mathematical and Statistical Psychology, 73(2), 261-288.
[10] Matzke, D., Love, J., & Heathcote, A. (2017). A Bayesian approach for estimating the probability of trigger failures in the stop-signal paradigm. Behavior Research Methods, 49(1), 267-281.
[11] McHugh, M. L. (2013). The chi-square test of independence. Biochemia Medica, 23(2), 143-149.
[12] Molenaar, D., Bolsinova, M., & Vermunt, J. K. (2018). A semi-parametric within-subject mixture approach to the analyses of responses and response times. British Journal of Mathematical and Statistical Psychology, 71(2), 205-228.
[13] Pastor, D. A., Ong, T. Q., & Strickman, S. N. (2019). Patterns of solution behavior across items in low-stakes assessments. Educational Assessment, 24(3), 189-212.
[14] Plummer, M. (2003, March). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. Retrieved from https://www.r-project.org/conferences/DSC-2003/Drafts/Plummer.pdf
[15] Qian, H., Staniewska, D., Reckase, M., & Woo, A. (2016). Using response time to detect item preknowledge in computer-based licensure examinations. Educational Measurement: Issues and Practice, 35(1), 38-47.
[16] Ranger, J., Wolgast, A., & Kuhn, J. T. (2019). Robust estimation of the hierarchical model for responses and response times. British Journal of Mathematical and Statistical Psychology, 72(1), 83-107.
[17] R Development Core Team. (2009). R: A language and environment for statistical computing [Computer software manual]. Vienna, Austria. Retrieved from http://www.R-project.org (ISBN 3-900051-07-0).
[18] Rios, J. A., Guo, H., Mao, L., & Liu, O. L. (2017). Evaluating the impact of careless responding on aggregated-scores: To filter unmotivated examinees or not? International Journal of Testing, 17(1), 74-104.
[19] Rose, N. (2013). Item nonresponses in educational and psychological measurement (Unpublished doctoral dissertation). Friedrich Schiller University, Jena, Germany.
[20] Setzer, J. C., Wise, S. L., van den Heuvel, J. R., & Ling, G. (2013). An investigation of examinee test-taking effort on a large-scale assessment. Applied Measurement in Education, 26(1), 34-49.
[21] Ulitzsch, E., von Davier, M., & Pohl, S. (2020). A hierarchical latent response model for inferences about examinee engagement in terms of guessing and item-level non-response. British Journal of Mathematical and Statistical Psychology, 73(S1), 83-112.
[22] van der Linden, W. J. (2007). A hierarchical framework for modeling speed and accuracy on test items. Psychometrika, 72(3), 287-308.
[23] van der Linden, W. J., & Guo, F. (2008). Bayesian procedures for identifying aberrant response-time patterns in adaptive testing. Psychometrika, 73(3), 365-384.
[24] Wang, C., & Xu, G. (2015). A mixture hierarchical model for response times and response accuracy. British Journal of Mathematical and Statistical Psychology, 68(3), 456-477.
[25] Wang, C., Xu, G., & Shang, Z. (2018). A two-stage approach to differentiating normal and aberrant behavior in computer based testing. Psychometrika, 83(1), 223-254.
[26] Wang, C., Xu, G., Shang, Z., & Kuncel, N. (2018). Detecting aberrant behavior and item preknowledge: A comparison of mixture modeling method and residual method. Journal of Educational and Behavioral Statistics, 43(4), 469-501.
[27] Wise, S. L. (2015). Effort analysis: Individual score validation of achievement test data. Applied Measurement in Education, 28(3), 237-252.
[28] Wise, S. L. (2017). Rapid-guessing behavior: Its identification, interpretation, and implications. Educational Measurement: Issues and Practice, 36(4), 52-61.
[29] Wise, S. L., & DeMars, C. E. (2006). An application of item response time: The effort-moderated IRT model. Journal of Educational Measurement, 43(1), 19-38.
[30] Wise, S. L., & Kingsbury, G. G. (2016). Modeling student test-taking motivation in the context of an adaptive achievement test. Journal of Educational Measurement, 53(1), 86-105.