A general simulation comparison of the predictive validity between bifactor and high-order factor models

WEN Zhonglin^{1}, TANG Dandan^{1}, GU Honglei^{2}

1 Center for Studies of Psychological Application / School of Psychology, South China Normal University, Guangzhou 510631, China
2 School of Education Science, Xinyang Normal University, Xinyang 464000, China

Mathematically, a high-order factor model is nested within a bifactor model, and the two models are equivalent under a set of proportionality constraints on the loadings. In applied studies, they are treated as two alternative models. Xu, Yu, and Li (2017) used a true model that satisfied the proportionality constraints to generate simulation data (so that both the bifactor model and the high-order factor model fitted the true model) and compared structural coefficients based on the two models in terms of goodness-of-fit indexes and the relative bias of the structural coefficient. However, a bifactor model usually does not satisfy the proportionality constraints, and it is very difficult to find a multidimensional construct that is well fitted by a bifactor model with the proportionality constraints; hence their simulation results cannot be extended to general situations. Using both a true model with the proportionality constraints (so that both the bifactor model and the high-order factor model fitted the true model) and a true model without the proportionality constraints (so that the bifactor model fitted the true model, whereas the high-order factor model was misspecified), this Monte Carlo study investigated structural coefficients based on bifactor models and high-order factor models with either a latent or a manifest variable as the criterion. The experimental factors in the simulation design were: (a) the loadings on the general factor, (b) the loadings on the domain-specific factors, (c) the magnitude of the structural coefficient, and (d) the sample size. When the true model did not satisfy the proportionality constraints, only factors (a), (c), and (d) were varied, because the loadings on the domain-specific factors were fixed at different levels (0.4, 0.5, 0.6, 0.7) that ensured the model violated the proportionality constraints. The main findings were as follows.
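The equivalence under proportionality constraints (cf. Yung, Thissen, & McLeod, 1999) can be illustrated with a minimal numpy sketch of the Schmid-Leiman relation. The 3-domain, 3-indicator layout and all loading values below are illustrative assumptions, not the paper's actual simulation design:

```python
import numpy as np

# Hypothetical setup: 9 indicators, 3 domain factors (3 indicators each),
# and one second-order general factor. Loading values are illustrative.
first_order = np.array([[0.7, 0.6, 0.5],   # domain 1 indicator loadings
                        [0.7, 0.6, 0.5],   # domain 2 indicator loadings
                        [0.7, 0.6, 0.5]])  # domain 3 indicator loadings
second_order = np.array([0.8, 0.7, 0.6])   # domain-on-general loadings

# Schmid-Leiman relation: the high-order model is equivalent to a bifactor
# model whose general-factor loadings are lambda_i * gamma_k and whose
# domain-specific loadings are lambda_i * sqrt(1 - gamma_k**2).
general = first_order * second_order[:, None]
specific = first_order * np.sqrt(1 - second_order**2)[:, None]

# Proportionality constraint: within each domain k, the ratio of general to
# specific loadings is the constant gamma_k / sqrt(1 - gamma_k**2).
ratios = general / specific
assert np.allclose(ratios, ratios[:, :1])  # constant within each row (domain)
```

A bifactor model whose general and specific loadings do not share such a constant within-domain ratio cannot be reproduced by any high-order model, which is what makes the unconstrained conditions a misspecification test for the high-order model.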
(1) When the proportionality constraints held, the high-order factor model was preferred, because it yielded smaller relative bias of the structural coefficient and lower Type I error rates (but also lower statistical power, which was not a problem for a large sample). (2) When the proportionality constraints did not hold, however, the bifactor model was better, because it yielded smaller relative bias of the structural coefficient and higher statistical power (but also higher Type I error rates, which was not a problem for a large sample). (3) Bifactor models fitted the simulation data better than high-order factor models in terms of the fit indexes CFI, TLI, RMSEA, and SRMR, whether or not the proportionality constraints held. However, bifactor models fitted worse according to the information criteria (AIC, ABIC) when the proportionality constraints held. (4) The results were similar whether the criterion was a manifest or a latent variable, although the relative bias of the structural coefficient was smaller for a manifest criterion. In conclusion, when the proportionality constraints hold, or a high-order factor model fits well, it can be the first choice for predicting a criterion, for the sake of parsimony; otherwise, a bifactor model is better for studying structural coefficients. Whichever model is employed, the sample size should be large (e.g., 500 or more).

Zhonglin WEN, Dandan TANG, Honglei GU. A general simulation comparison of the predictive validity between bifactor and high-order factor models[J]. Acta Psychologica Sinica, 2019, 51(3): 383-391.

Beaujean, A. A., Parkin, J., & Parker, S. (2014). Comparing Cattell-Horn-Carroll factor models: Differences between bifactor and higher order factor models in predicting language achievement. Psychological Assessment, 26(3), 789-805.

Bradley, J. V. (1978). Robustness? British Journal of Mathematical and Statistical Psychology, 31, 144-152.

Burnham, K. P., & Anderson, D. R. (1998). Model selection and inference: A practical information-theoretic approach. New York, NY: Springer.

Chen, F. F., Hayes, A., Carver, C. S., Laurenceau, J.-P., & Zhang, Z. (2012). Modeling general and specific variance in multifaceted constructs: A comparison of the bifactor model to other approaches. Journal of Personality, 80(1), 219-251.

Chen, F. F., Jing, Y., Hayes, A., & Lee, J. M. (2013). Two concepts or two approaches? A bifactor analysis of psychological and subjective well-being. Journal of Happiness Studies, 14(3), 1033-1068.

Cucina, J., & Byle, K. (2017). The bifactor model fits better than the higher-order model in more than 90% of comparisons for mental abilities test batteries. Journal of Intelligence, 5(3), 27.

DeMars, C. E. (2006). Application of the bi-factor multidimensional item response theory model to testlet-based tests. Journal of Educational Measurement, 43(2), 145-168.

DiStefano, C., Greer, F. W., & Kamphaus, R. W. (2013). Multifactor modeling of emotional and behavioral risk of preschool-age children. Psychological Assessment, 25(2), 467-476.

Gignac, G. E. (2008). Higher-order models versus direct hierarchical models: A superordinate or breadth factor? Psychology Science Quarterly, 50(1), 21-43.

Gu, H., & Wen, Z. (2017). Reporting and interpreting multidimensional test scores: A bi-factor perspective. Psychological Development and Education, 33, 504-512.

Gu, H., Wen, Z., & Fan, X. (2017a). Structural validity of the Machiavellian personality scale: A bifactor exploratory structural equation modeling approach. Personality and Individual Differences, 105, 116-123.

Gu, H., Wen, Z., & Fan, X. (2017b). Examining and controlling for wording effect in a self-report measure: A Monte Carlo simulation study. Structural Equation Modeling: A Multidisciplinary Journal, 24(4), 545-555.

Hoogland, J. J., & Boomsma, A. (1998). Robustness studies in covariance structure modeling: An overview and a meta-analysis. Sociological Methods & Research, 26(3), 329-368.

Howard, J. L., Gagné, M., Morin, A. J. S., & Forest, J. (2018). Using bifactor exploratory structural equation modeling to test for a continuum structure of motivation. Journal of Management, 44(7), 2638-2664.

Hyland, P., Boduszek, D., Dhingra, K., Shevlin, M., & Egan, A. (2014). A bifactor approach to modelling the Rosenberg Self Esteem Scale. Personality and Individual Differences, 66, 188-192.

MacKinnon, D. P., Lockwood, C. M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39(1), 99-128.

Marsh, H. W., Hau, K. T., & Wen, Z. L. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler's (1999) findings. Structural Equation Modeling: A Multidisciplinary Journal, 11(3), 320-341.

Reise, S. P., Scheines, R., Widaman, K. F., & Haviland, M. G. (2013). Multidimensionality and structural coefficient bias in structural equation modeling: A bifactor perspective. Educational and Psychological Measurement, 73(1), 5-26.

Salerno, L., Ingoglia, S., & Lo Coco, G. (2017). Competing factor structures of the Rosenberg Self-Esteem Scale (RSES) and its measurement invariance across clinical and non-clinical samples. Personality and Individual Differences, 113, 13-19.

Wang, M. T., Fredricks, J. A., Ye, F., Hofkens, T. L., & Linn, J. S. (2016). The math and science engagement scales: Scale development, validation, and psychometric properties. Learning and Instruction, 43, 16-26.

Wen, Z., Hau, K. T., & Marsh, H. W. (2004). Structural equation model testing: Cutoff criteria for goodness of fit indices and chi-square test. Acta Psychologica Sinica, 36(2), 186-194.

Wu, Y., Wen, Z., Marsh, H. W., & Hau, K.-T. (2013). A comparison of strategies for forming product indicators for unequal numbers of items in structural equation models of latent interactions. Structural Equation Modeling: A Multidisciplinary Journal, 20(4), 551-567.

Xu, S. X., Yu, Z. H., & Li, Y. M. (2017). Simulated data comparison of the predictive validity between bi-factor and high-order models. Acta Psychologica Sinica, 49(8), 1125-1136.

Yung, Y.-F., Thissen, D., & McLeod, L. D. (1999). On the relationship between the higher-order factor model and the hierarchical factor model. Psychometrika, 64(2), 113-128.