ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Model comparison in cognitive modeling

Guo, Mingqian, Pan, Wanke, Hu, Chuan-Peng

  • Received: 2023-06-25 Revised: 2024-04-19 Accepted: 2024-04-26
  • Contact: Hu, Chuan-Peng

Abstract: Cognitive modeling has gained widespread application in psychological research. Model comparison plays a crucial role in cognitive modeling, as researchers need to select the best model for subsequent analysis or latent-variable inference. Model comparison involves considering not only the models' fit to the data (balancing overfitting against underfitting) but also model complexity, in terms of both the number of parameters and the mathematical form. This article categorizes and introduces three major classes of model comparison metrics commonly used in cognitive modeling: goodness-of-fit metrics (e.g., mean squared error, the coefficient of determination, and ROC curves), cross-validation-based metrics (e.g., AIC and DIC), and marginal-likelihood-based metrics. The computation of each metric and its pros and cons are discussed, along with practical implementations in R using data from the orthogonal Go/No-Go paradigm. On this basis, the article identifies the contexts in which each metric is suitable and discusses newer approaches to model comparison, such as model averaging.

Key words: Cognitive modeling, Computational models, Model comparison, Model selection
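To illustrate the first two classes of metrics named in the abstract, the following is a minimal sketch (not taken from the article, which uses R and Go/No-Go data): it fits two polynomial models of different complexity to simulated data and compares them with mean squared error, the coefficient of determination, and a Gaussian-likelihood AIC. The simulated data, model family, and function names are all illustrative assumptions.

```python
import numpy as np

# Illustrative example (not the article's R code): simulate data from a
# linear model and compare candidate models of different complexity.
rng = np.random.default_rng(42)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, size=x.size)  # assumed toy data

def fit_polynomial(x, y, degree):
    """Least-squares polynomial fit; returns predictions and parameter count."""
    coefs = np.polyfit(x, y, degree)
    return np.polyval(coefs, x), degree + 1

def mse(y, yhat):
    """Mean squared error: a goodness-of-fit metric."""
    return np.mean((y - yhat) ** 2)

def r_squared(y, yhat):
    """Coefficient of determination: proportion of variance explained."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

def aic(y, yhat, k):
    """AIC under a Gaussian likelihood: n*log(MSE) + 2k (constants dropped).

    The 2k term penalizes model complexity, so a more flexible model must
    improve fit enough to justify its extra parameters.
    """
    n = y.size
    return n * np.log(mse(y, yhat)) + 2 * k

for degree in (1, 5):
    yhat, k = fit_polynomial(x, y, degree)
    print(f"degree={degree}: MSE={mse(y, yhat):.4f}, "
          f"R2={r_squared(y, yhat):.3f}, AIC={aic(y, yhat, k):.1f}")
```

The degree-5 model will typically achieve a slightly lower MSE (better raw fit), while AIC's complexity penalty can still favor the simpler, data-generating linear model, which is the trade-off between fit and complexity that the abstract describes.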