ISSN 0439-755X
CN 11-1911/B

Acta Psychologica Sinica ›› 2026, Vol. 58 ›› Issue (3): 416-436. doi: 10.3724/SP.J.1041.2026.0416


Large language models capable of distinguishing between single and repeated gambles: Understanding and intervening in risky choice

ZHOU Lei1, LI Litong1, WANG Xu1, OU Huafeng1, HU Qianyu1, LI Aimei2, GU Chenyan1   

  1. School of Management, Guangdong University of Technology, Guangzhou 510520, China;
    2. School of Management, Jinan University, Guangzhou 510632, China
  • Received: 2025-05-12   Published: 2026-03-25   Online: 2025-12-26

Abstract: Risky choice (RC) is a common and important form of decision making in daily life. Its theoretical development has primarily followed two major traditions: normative theory and descriptive theory. The paradigm of single- versus repeated-play gambles provides an effective framework for distinguishing between them. However, prior research lacks direct observations of the decision-making process, which limits deeper understanding of individual behaviour and hinders the development of effective behavioural interventions. In recent years, large language models (LLMs) have demonstrated highly human-like characteristics, not only simulating human preferences in behavioural performance but also exhibiting similar reasoning pathways, which offers a promising solution to these limitations. Grounded in the classic RC paradigm of single versus repeated gambles, this study investigates the capability of LLMs to simulate and understand risk preferences and decision-making processes. Specifically, it explores whether LLMs’ understanding of decision strategies can be used to generate intervention texts and evaluates the effectiveness of these texts in influencing human decisions.
This work comprises three studies. In Study 1, GPT-3.5 and GPT-4 were employed to simulate human responses to gambling decisions under nine probability conditions (with constant expected value), generating a total of 3,600 responses across single- and repeated-gamble scenarios. In Study 2, LLM-generated strategies were constructed through a three-stage process (decision-rationale extraction, strategy generation and quality evaluation), and human participants then completed decision-making tasks in two experiments: Experiment 1 replicated the medical/financial scenarios of Sun et al. (2014) (N = 349; 174 male; mean age = 21.79 years) in a 2 (context: medical vs. financial) × 2 (application frequency: single vs. repeated) within-subjects design, and Experiment 2 examined digital contexts with a 2 (context: content creation vs. e-commerce marketing) × 2 (frequency: single vs. repeated) mixed design (context as a between-subjects factor). Subsequently, DeepSeek-R1 was used to perform the same tasks and to generate strategy texts through the three-stage process. Finally, participants were asked to rate their acceptance of the LLM-generated strategies. Study 3 extended the Study 2 methodology to determine whether the LLM-generated intervention texts could reverse participants’ classic choice preferences across the single- versus repeated-gamble scenarios. The Study 2 experimental contexts were mirrored in Study 3 (Experiment 1: medical vs. financial, N = 460, 205 male, mean age = 21.80 years; Experiment 2: content creation vs. e-commerce marketing, N = 240, 106 male, mean age = 29.12 years), and strategically designed intervention texts were presented during the decision-making tasks to test their capacity to modify participants’ inherent risk preferences between the single- and repeated-gamble conditions and to evaluate the persuasive efficacy of LLM-generated strategies on human decision biases.
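To make the Study 1 design concrete, the following is a minimal sketch of how such a simulation could be assembled. The probability grid, payoff scale, 100-play repeated framing, prompt wording and the `query_llm` placeholder are all illustrative assumptions, not the study’s actual materials or API code.

```python
# Illustrative sketch of a Study-1-style simulation (assumed details throughout).

EV = 100  # constant expected value shared by all gambles (assumed scale)
PROBABILITIES = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]  # assumed grid

def make_gamble(p, ev=EV):
    """Two-outcome gamble: win ev/p with probability p, otherwise win nothing."""
    return {"p_win": p, "payout": ev / p}

def build_prompt(gamble, play_mode):
    """Frame the same gamble as a single play or as 100 repeated plays (assumed wording)."""
    times = "once" if play_mode == "single" else "100 times"
    return (
        f"You will choose {times}. Option A: a sure {EV} yuan each time. "
        f"Option B: a {gamble['p_win']:.0%} chance of {gamble['payout']:.0f} yuan, "
        f"otherwise nothing. Reply with A or B."
    )

def query_llm(prompt):
    """Placeholder standing in for a GPT-3.5/GPT-4 chat call; returns 'A' or 'B'."""
    raise NotImplementedError

# 9 probability conditions x 2 play modes, each sampled repeatedly,
# would yield the 3,600 responses reported in the abstract.
trials = [(make_gamble(p), mode)
          for p in PROBABILITIES
          for mode in ("single", "repeated")]
```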
Study 1 shows that the LLMs (GPT-3.5 and GPT-4) successfully replicated the typical human pattern of risk aversion in single-play scenarios and risk seeking in repeated-play scenarios, though both models showed an overall stronger tendency toward risk seeking than the human participants. Study 2 demonstrates that, in both experiments, human participants preferred low-EV certain options in single-play contexts and high-EV risky options in repeated-play contexts. Participants also expressed high agreement with the strategies generated by the LLMs across the different scenarios. Study 3 confirms that the LLM-generated intervention texts significantly influenced participants’ choice tendencies in all four scenarios, with strong intervention effects observed in the single-play contexts. The LLM intervention strategies are characterised by reliance on expected-value computations (normative) when promoting risky choices and by an emphasis on certainty and robustness (descriptive) when promoting safe choices.
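The normative logic behind this pattern can be made concrete with a small worked example; the payoffs and probability below are assumptions for illustration, not the study’s stimuli.

```latex
% Certain option C = 100; gamble G pays 250 with p = 0.4, else 0 (assumed values).
\[
\mathbb{E}[G] = 0.4 \times 250 + 0.6 \times 0 = 100 = C
\]
% Over n independent plays, the mean payoff of G keeps the same expectation,
% but its standard deviation shrinks with n:
\[
\mathbb{E}\!\left[\tfrac{1}{n}\sum_{i=1}^{n} G_i\right] = 100,
\qquad
\mathrm{SD}\!\left[\tfrac{1}{n}\sum_{i=1}^{n} G_i\right]
= \frac{\sqrt{0.4 \times 0.6}\times 250}{\sqrt{n}}
\approx \frac{122.5}{\sqrt{n}}.
\]
% Repeated play thus makes an equal- or higher-EV risky option increasingly
% safe in relative terms: the normative rationale for risk seeking in
% repeated gambles, whereas single plays leave the full variance in force.
```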
In summary, this study demonstrates that (1) LLMs can effectively simulate context-dependent human preferences in RC, particularly the shift from risk aversion in single plays to risk seeking in repeated plays; (2) LLMs can distinguish between the logic underlying single and repeated gambles and apply normative or descriptive reasoning accordingly to externalise decision strategies; and (3) decision strategies extracted from LLM-generated reasoning can be used to construct effective intervention texts that alter human preferences in classic risk-decision tasks, thereby validating the feasibility and effectiveness of an LLM-based cognitive intervention pathway. This study offers a new technological paradigm for AI-assisted decision intervention and expands the application boundary of LLMs to the modelling and regulation of human cognitive processes.

Key words: risk decision-making, single- vs. repeated-play gambles, large language models, decision strategy, intervention
