Acta Psychologica Sinica ›› 2025, Vol. 57 ›› Issue (11): 1973-1987. doi: 10.3724/SP.J.1041.2025.1973 cstr: 32110.14.2025.1973
Received: 2024-05-01
Online: 2025-09-24
Published: 2025-11-25
Corresponding author: YU Feng, E-mail: psychpedia@whu.edu.cn
WEI Xinni1, YU Feng2, PENG Kaiping3
Abstract:
Artificial intelligence can aid environmental governance and the pursuit of sustainable social development, yet it also consumes energy at an astonishing rate, and the resulting carbon emissions affect the natural environment and human survival. However, little research has examined the environmental problems AI creates or how people respond to them. This research therefore investigated how, why, and under what boundary conditions the sustainability of AI shapes willingness to use it in human-AI environmental decision-making contexts. A pilot study, using a questionnaire together with attitude words generated by ChatGPT, examined people's willingness to use AI environmental-protection systems and their attitudes toward them, finding high willingness and positive attitudes. Study 1 manipulated perceived AI sustainability (present vs. absent) across two sub-studies and found that participants in the low-sustainability condition reported lower acceptance of AI and weaker support for national AI research. Study 2 changed the sustainability manipulation (low vs. high), experimentally replicated the results of Study 1, and found that morality, rather than agency, mediated the effect of sustainability on acceptance. Study 3 explored potential boundary conditions, confirming the moderating role of individuals' pro-environmental attitudes. These findings provide a psychological basis for the social governance of AI and offer new insight into the relationship between AI and sustainable development.
WEI Xinni, YU Feng, PENG Kaiping. (2025). Perceived unsustainability decreases acceptance of artificial intelligence. Acta Psychologica Sinica, 57(11), 1973-1987.