Advances in Psychological Science ›› 2026, Vol. 34 ›› Issue (6): 1084-1096. doi: 10.3724/SP.J.1042.2026.1084 cstr: 32111.14.2026.1084
TANG Wei1, ZHONG Wenrui2, LEI Zhen2, ZHANG Dandan2,3
Received: 2026-02-13
Online: 2026-06-15
Published: 2026-04-17
Abstract: Artificial intelligence systems of many kinds are now widely embedded as agents in the decision-making processes of firms, governments, and individuals, with profound consequences for the morality of human decisions and for moral judgment. Although empirical and theoretical research on this topic is growing rapidly, the existing literature lacks a systematic analysis of what distinguishes AI agents from other kinds of agents, as well as a unified analytical framework for characterizing the pathways through which AI agents influence moral decision-making. This article therefore constructs a "decision-maker, agent, feedback-giver" framework of decision-making and responsibility attribution, and uses it to organize and restructure the existing research. We argue that once an agent is inserted into a decision, it lengthens both the decision-maker's decision chain and the feedback chain of feedback-givers (including affected parties and third-party observers), thereby weakening the decision-maker's moral perception and the feedback-givers' attribution of responsibility, which in turn facilitates unethical behavior by the decision-maker. The distinctive features of AI agents (black-box opacity, high compliance, scalability, and instrumentality) further aggravate the execution of unethical instructions along the decision chain, enhance the decision-maker's opportunities for deniability, and expand the reach of unethical behavior; on the feedback chain, these same features increase feedback-givers' moral tolerance of unethical behavior and blur their judgment and attribution of the decision-maker's intentions, further promoting unethical decisions. We suggest that future research should refine the relative contributions of the mechanisms within this framework, examine how unethical behavior diffuses and amplifies at the organizational and societal levels, and explore governance tools and institutional arrangements for human-AI collaboration.
TANG Wei, ZHONG Wenrui, LEI Zhen, ZHANG Dandan. (2026). The moral impact of delegating to artificial intelligence. Advances in Psychological Science, 34(6), 1084-1096.