ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press


Methods for model comparison in cognitive modeling

Guo, Mingqian, Pan, Wanke, Hu, Chuan-Peng   

  • Received: 2023-06-25 Revised: 2024-04-19 Accepted: 2024-04-26
  • Corresponding author: Hu, Chuan-Peng

Model comparison in cognitive modeling

Guo, Mingqian, Pan, Wanke, Hu, Chuan-Peng   

  • Received: 2023-06-25 Revised: 2024-04-19 Accepted: 2024-04-26
  • Contact: Hu, Chuan-Peng

Abstract: Cognitive modeling has been widely applied in scientific psychology in recent years, and model comparison is a key step in cognitive modeling: researchers must select the best model through model comparison before carrying out subsequent hypothesis testing or latent-variable inference. Model comparison requires considering not only how well a model fits the data (balancing overfitting against underfitting), but also the model's complexity in terms of its parameters and mathematical form. However, the metrics available for model comparison are numerous and can be confusing. This article groups the model comparison metrics commonly used in cognitive modeling into three classes and describes how each is computed along with its strengths and weaknesses: goodness-of-fit metrics (including mean squared error, the coefficient of determination, and ROC curves), cross-validation-based metrics (including AIC and DIC), and marginal-likelihood-based metrics. Using data from the orthogonal Go/No-Go paradigm, it demonstrates how each metric can be computed in R. On this basis, it discusses the situations in which each metric is appropriate and introduces newer approaches to model comparison such as model averaging.
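To give a concrete sense of the first class of metrics, a minimal R sketch of the goodness-of-fit computations might look as follows; the `observed` and `predicted` vectors here are made-up numbers for illustration, not the article's Go/No-Go data.

```r
# Minimal illustrative sketch (hypothetical values, not the article's code):
# goodness-of-fit of model predictions against observed response rates.
observed  <- c(0.82, 0.75, 0.64, 0.91, 0.58)   # hypothetical observed hit rates
predicted <- c(0.80, 0.70, 0.68, 0.88, 0.61)   # hypothetical model predictions

mse <- mean((observed - predicted)^2)            # mean squared error
r2  <- 1 - sum((observed - predicted)^2) /
           sum((observed - mean(observed))^2)    # coefficient of determination
c(MSE = mse, R2 = r2)
```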

Key words: cognitive modeling, computational models, model selection, model comparison

Abstract: Cognitive modeling has gained widespread application in psychological research. Model comparison plays a crucial role in cognitive modeling, as researchers need to select the best model before conducting subsequent hypothesis testing or latent variable inference. Model comparison involves considering not only the fit of the models to the data (balancing overfitting and underfitting) but also the models' complexity in terms of their parameters and mathematical form. This article categorizes and introduces three major classes of model comparison metrics commonly used in cognitive modeling: goodness-of-fit metrics (such as mean squared error, the coefficient of determination, and ROC curves), cross-validation-based metrics (such as AIC and DIC), and marginal likelihood-based metrics. The computation methods and pros and cons of each metric are discussed, along with practical implementations in R using data from the orthogonal Go/No-Go paradigm. On this basis, the article identifies the suitable contexts for each metric and discusses new approaches to model comparison such as model averaging.
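As a flavor of the information-criterion metrics and the model-averaging idea mentioned above, the following is a minimal R sketch comparing two hypothetical fitted models via AIC, BIC, and Akaike weights; the log-likelihoods, parameter counts, and sample size are made-up values, and DIC or full marginal likelihoods (which require posterior samples) are beyond this sketch.

```r
# Minimal illustrative sketch (hypothetical values, not the article's code):
# information criteria and Akaike weights for two fitted models.
logLik1 <- -532.4; k1 <- 4    # maximized log-likelihood and parameter count, model 1
logLik2 <- -529.8; k2 <- 6    # maximized log-likelihood and parameter count, model 2
n <- 240                      # number of observations (trials)

aic <- c(m1 = -2 * logLik1 + 2 * k1,
         m2 = -2 * logLik2 + 2 * k2)          # Akaike information criterion
bic <- c(m1 = -2 * logLik1 + k1 * log(n),
         m2 = -2 * logLik2 + k2 * log(n))     # Bayesian information criterion

delta <- aic - min(aic)                                    # AIC differences
akaike_weights <- exp(-delta / 2) / sum(exp(-delta / 2))   # weights usable for model averaging
rbind(AIC = aic, BIC = bic, weight = akaike_weights)
```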

Key words: Cognitive modeling, Computational models, Model comparison, Model selection