Advances in Psychological Science, 2024, Vol. 32, Issue (10): 1736-1756. doi: 10.3724/SP.J.1042.2024.01736
• Research Method •
GUO Mingqian, PAN Wanke, HU Chuanpeng
Received: 2023-06-25
Online: 2024-10-15
Published: 2024-08-13
GUO Mingqian, PAN Wanke, HU Chuanpeng. Model comparison in cognitive modeling[J]. Advances in Psychological Science, 2024, 32(10): 1736-1756.