Acta Psychologica Sinica ›› 2026, Vol. 58 ›› Issue (3): 416-436. doi: 10.3724/SP.J.1041.2026.0416 cstr: 32110.14.2026.0416
ZHOU Lei1, LI Litong1, WANG Xu1, OU Huafeng1, HU Qianyu1, LI Aimei2, GU Chenyan1
Received: 2025-05-12
Online: 2025-12-26
Published: 2026-03-25
Corresponding authors: LI Aimei, E-mail: tliaim@jnu.edu.cn; GU Chenyan, E-mail: g_cy1989163@163.com
Abstract: Theoretical research on risky decision making has relied mainly on reverse inference from behavioral outcomes and on self-report data; the lack of direct observation of the decision process constrains both the explanation of underlying mechanisms and the development of effective behavioral interventions. Large language models (LLMs) offer a way to overcome these limitations. Across three studies, this paper systematically examined the potential of LLMs to simulate risky choice: DeepSeek-R1 played single and repeated gambles and generated rationales for its decisions, GPT-4o then subjected these rationales to inductive thematic analysis (ITA), and on this basis we constructed a technical pipeline for LLM-generated decision-strategy texts and applied it to decision interventions. The findings were: (1) ChatGPT-3.5/4 reproduced the typical human choice patterns of single gambles (more risk averse) and repeated gambles (more risk seeking); (2) LLMs distinguished the logic of single versus repeated gambles and correctly applied normative and descriptive theories, respectively, to generate corresponding strategies, which were highly endorsed; (3) intervention texts that LLMs generated from different strategies effectively shifted people's entrenched risk preferences in medical, financial, content-creation, and e-commerce marketing scenarios. The research systematically validates LLMs' ability to simulate behavioral preferences and their understanding of decision making, establishes a new generative-AI-based paradigm for decision intervention, and provides a theoretical and practical foundation for AI-assisted high-stakes decision making.
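To make the single- versus repeated-gamble logic in the abstract concrete, here is a worked example in the spirit of Samuelson's classic wager; the payoffs are illustrative assumptions, not the paper's stimuli. A gamble with positive expected value still carries an even chance of loss when played once, but aggregation over independent plays shrinks the probability of an overall loss:

```latex
% Illustrative gamble (assumed): win 200 with p = 1/2, lose 100 with p = 1/2.
% One play: positive expected value, but an even chance of losing.
\mathbb{E}[X] = \tfrac{1}{2}(200) + \tfrac{1}{2}(-100) = 50,
\qquad P(X < 0) = \tfrac{1}{2}.
% n independent plays: the mean grows like n, the spread only like sqrt(n).
\mathbb{E}\Big[\sum_{i=1}^{n} X_i\Big] = 50\,n,
\qquad \mathrm{SD}\Big[\sum_{i=1}^{n} X_i\Big] = 150\sqrt{n}.
% Normal approximation for n = 10 plays:
P\Big(\sum_{i=1}^{10} X_i < 0\Big)
\approx \Phi\!\Big(\tfrac{-500}{150\sqrt{10}}\Big)
\approx \Phi(-1.05) \approx 0.15.
```

On this normative (expected-value) analysis, accepting the repeated version is more defensible than accepting a single play, which is exactly the asymmetry, risk aversion in single gambles and greater risk seeking in repeated ones, that the abstract reports ChatGPT-3.5/4 reproducing.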
ZHOU Lei, LI Litong, WANG Xu, OU Huafeng, HU Qianyu, LI Aimei, GU Chenyan. (2026). Large language models capable of distinguishing between single and repeated gambles: Understanding and intervening in risky choice. Acta Psychologica Sinica, 58(3), 416-436.
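The elicitation step the abstract describes, DeepSeek-R1 playing a gamble and stating its rationale, could be scripted along the following lines. This is a minimal sketch, assuming DeepSeek's OpenAI-compatible endpoint and the `deepseek-reasoner` model identifier for R1; the prompt wording and payoffs are illustrative, not the paper's materials.

```python
# Minimal sketch of gamble elicitation (assumptions: endpoint, model name,
# prompt wording, and payoffs are illustrative, not the paper's materials).
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",               # placeholder
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

def play_gamble(mode: str) -> str:
    """Ask the model to accept/reject a gamble, framed as one play or 100 plays."""
    frame = "once" if mode == "single" else "100 times in a row"
    prompt = (
        "You are offered a gamble: a 50% chance to win 200 yuan and a 50% "
        f"chance to lose 100 yuan, played {frame}. "
        "Do you accept or reject? Answer 'accept' or 'reject', then briefly "
        "explain the reasoning behind your decision."
    )
    response = client.chat.completions.create(
        model="deepseek-reasoner",  # DeepSeek-R1
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for mode in ("single", "repeated"):
    print(mode, "->", play_gamble(mode))
```

In the pipeline the abstract outlines, rationales collected this way are then passed to GPT-4o for inductive thematic analysis (ITA), and the resulting strategy themes seed the intervention texts tested in the later studies.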