ISSN 1671-3710
CN 11-4766/R
Sponsored by: Institute of Psychology, Chinese Academy of Sciences
Published by: Science Press

Advances in Psychological Science, 2026, Vol. 34, Issue (6): 1084-1096. doi: 10.3724/SP.J.1042.2026.1084

• Regular Articles •

The moral impact of delegating to artificial intelligence

TANG Wei1, ZHONG Wenrui2, LEI Zhen2, ZHANG Dandan2,3   

  1. Institute of Xi Jinping's Economic Thought, Southwestern University of Finance and Economics, Chengdu 611130, China;
  2. China Center for Behavioral Economics and Finance, Southwestern University of Finance and Economics, Chengdu 611130, China;
  3. Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu 610066, China
  • Received: 2026-02-13  Online: 2026-06-15  Published: 2026-04-17

Abstract: Artificial intelligence (AI) is increasingly deployed as an agent that executes decisions on behalf of human decision-makers. When decision authority is separated from execution authority, both moral psychology and social accountability can shift, making unethical decisions easier to initiate and harder to sanction. Despite rapid growth in this literature, two gaps remain. First, although delegation to agents (e.g., human subordinates or rule-based algorithms) is known to affect moral decision-making, it is unclear how the underlying mechanisms change when the agent is an AI system. Second, research lacks a systematic, delegation-based account of how AI agents shape unethical behavior. Much work concentrates on the moral properties of AI itself (e.g., its ethical compliance or capacities) while paying less attention to how AI, as an executing agent, alters human moral choices. Related studies also tend to examine isolated vantage points, such as decision-makers' perceptions of consequences or affected parties' moral evaluations, without integrating decision-makers, agents, and evaluators into a single framework.
To address these gaps, this article develops a "decision-maker-agent-evaluator" framework for moral decision-making and accountability grounded in delegation theory, and uses it to synthesize and reorganize roughly two decades of empirical and theoretical research. Moral outcomes are treated as jointly produced by three roles: (i) the decision-maker, who issues a directive with ethical consequences and anticipates its outcomes; (ii) the agent, who implements the directive (a human, a rule-based algorithm, or an AI system); and (iii) the evaluator (affected parties and third-party observers), who evaluates the act, infers intent, assigns responsibility, and may sanction.
Within this framework, the article identifies two pathways through which agents can promote unethical behavior. The first is a decision-chain pathway originating from the decision-maker. Delegation increases temporal, spatial, hierarchical, and procedural distance between decision-makers and those affected, making consequences less salient and facilitating moral disengagement. Delegation also expands decision-makers’ room for moral ambivalence, making it easier to justify unethical behavior. Finally, delegation can allow decision-makers to pursue benefits while preserving their moral self-image. The second is a feedback-chain pathway originating from evaluators. When actions are carried out through an agent, evaluators may struggle to pinpoint the actual decision-maker and infer intent. At the same time, the agent becomes an additional target for attribution, shifting and dispersing blame and responsibility. This weakens anticipated blame and punishment along the feedback chain, indirectly increasing the likelihood of unethical behavior.
A further contribution of this article is to specify AI-specific effects and show how they operate along both pathways. On the decision-chain pathway, AI's high compliance raises execution reliability and can reduce perceived exposure risk. Its learning capability and black-box opacity reduce traceability and blur the input-output reasoning chain, making intent and responsibility easier to deny by invoking unforeseeability or lack of control. In addition, low-cost replication and cross-context personalization allow AI agents to diffuse and amplify what would otherwise be localized unethical practices, increasing their frequency and reach and making timely detection harder, thus expanding potential returns. On the feedback-chain pathway, the relative novelty of AI agents can foster greater tolerance of transgressions, and AI mediation can further cloud judgments of decision-maker intention, increasing both the incidence and intensity of unethical behavior.
The article concludes with three directions for future research and governance: (1) test the sequencing, interaction, and relative importance of mechanisms within the framework and identify boundary conditions under which AI-enabled delegation may yield moral enhancement rather than erosion; (2) examine diffusion dynamics beyond the framework—imitation, social transmission, and organizational amplification—through which AI-mediated unethical practices spread and become normalized; and (3) develop and evaluate human-AI collaborative governance strategies, specifying where interventions should enter (decision, delegation, or feedback), in what order, and how responsibilities should be allocated between human oversight and AI-based controls.

Key words: artificial intelligence delegation, moral decision-making, human-AI collaborative governance