Acta Psychologica Sinica ›› 2025, Vol. 57 ›› Issue (11): 1973-1987. doi: 10.3724/SP.J.1041.2025.1973
• Reports of Empirical Studies •
WEI Xinni1, YU Feng2, PENG Kaiping3
Published: 2025-11-25
Online: 2025-09-25
Contact: YU Feng, E-mail: psychpedia@whu.edu.cn; PENG Kaiping, E-mail: pengkp@tsinghua.edu.cn
WEI Xinni, YU Feng, PENG Kaiping. (2025). Perceived unsustainability decreases acceptance of artificial intelligence. Acta Psychologica Sinica, 57(11), 1973-1987.
| [1] | Ahn, M., Kang, J., & Hustvedt, G. (2016). A model of sustainable household technology acceptance. International Journal of Consumer Studies, 40, 83-91. https://doi.org/10.1111/ijcs.12217 |
| [2] | Al-Sharafi, M. A., Al-Emran, M., Arpaci, I., Iahad, N. A., AlQudah, A. A., Iranmanesh, M., & Al-Qaysi, N. (2023). Generation Z use of artificial intelligence products and its impact on environmental sustainability: A cross-cultural comparison. Computers in Human Behavior, 143, 107708. https://doi.org/10.1016/j.chb.2023.107708 |
| [3] | Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15-26. https://doi.org/10.1609/aimag.v28i4.2065 |
| [4] | Averdung, A., & Wagenfuehrer, D. (2011). Consumers’ acceptance, adoption and behavioural intentions regarding environmentally sustainable innovations. E3 Journal of Business Management and Economics, 2(3), 98-106. |
| [5] | Banks, J. (2019). A perceived moral agency scale: Development and validation of a metric for humans and social machines. Computers in Human Behavior, 90, 363-371. https://doi.org/10.1016/j.chb.2018.08.028 |
| [6] | Baudier, P., Ammi, C., & Deboeuf-Rouchon, M. (2020). Smart home: Highly-educated students' acceptance. Technological Forecasting and Social Change, 153, 119355. https://doi.org/10.1016/j.techfore.2018.06.043 |
| [7] | Bigman, Y. E., Wilson, D., Arnestad, M. N., Waytz, A., & Gray, K. (2023). Algorithmic discrimination causes less moral outrage than human discrimination. Journal of Experimental Psychology: General, 152(1), 4-27. https://doi.org/10.1037/xge0001250 |
| [8] | Braun Kohlová, M., & Urban, J. (2020). Buy green, gain prestige and social status. Journal of Environmental Psychology, 69, 101416. https://doi.org/10.1016/j.jenvp.2020.101416 |
| [9] | Bretter, C., Unsworth, K. L., Kaptan, G., & Russell, S. V. (2023). It is just wrong: Moral foundations and food waste. Journal of Environmental Psychology, 88, 102021. https://doi.org/10.1016/j.jenvp.2023.102021 |
| [10] | Camilleri, A. R., Larrick, R., Hossain, S., & Patino-Echeverri, D. (2019). Consumers underestimate the emissions associated with food but are aided by labels. Nature Climate Change, 9, 53-58. https://doi.org/10.1038/s41558-018-0354-z |
| [11] | Chen, S. H., Qiu, H., Xiao, H., He, W., Mou, J., & Siponen, M. T. (2020). Consumption behavior of eco-friendly products and applications of ICT innovation. Journal of Cleaner Production, 287, 125436. https://doi.org/10.1016/j.jclepro.2020.125436 |
| [12] | Constantinescu, M. V., Vică, C., Uszkai, R., & Voinea, C. (2022). Blame it on the AI? On the moral responsibility of artificial moral advisors. Philosophy & Technology, 35(2), 35. https://doi.org/10.1007/s13347-022-00529-z |
| [13] | De Canio, F. (2023). Consumer willingness to pay more for pro-environmental packages: The moderating role of familiarity. Journal of Environmental Management, 339, 117828. https://doi.org/10.1016/j.jenvman.2023.117828 |
| [14] | Deci, E. L., & Ryan, R. M. (1987). The support of autonomy and the control of behavior. Journal of Personality and Social Psychology, 53(6), 1024-1037. https://doi.org/10.1037/0022-3514.53.6.1024 |
| [15] | Dhar, P. (2020). The carbon impact of artificial intelligence. Nature Machine Intelligence, 2, 423-425. |
| [16] | Djeffal, C., Siewert, M. B., & Wurster, S. (2022). Role of the state and responsibility in governing artificial intelligence: A comparative analysis of AI strategies. Journal of European Public Policy, 29(11), 1799-1821. https://doi.org/10.1080/13501763.2022.2094987 |
| [17] | Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of Big Data—Evolution, challenges and research agenda. International Journal of Information Management, 48, 63-71. https://doi.org/10.1016/j.ijinfomgt.2019.01.021 |
| [18] | Dunlap, R. E., Van Liere, K. D., Mertig, A. G., & Emmet Jones, R. (2000). Measuring endorsement of the new ecological paradigm: A revised NEP scale. Journal of Social Issues, 56(3), 425-442. https://doi.org/10.1111/0022-4537.00176 |
| [19] | Farrow, K., Grolleau, G., & Ibanez, L. (2017). Social norms and pro-environmental behavior: A review of the evidence. Ecological Economics, 140, 1-13. https://doi.org/10.1016/j.ecolecon.2017.04.017 |
| [20] | Feinberg, M., & Willer, R. (2013). The moral roots of environmental attitudes. Psychological Science, 24(1), 56-62. https://doi.org/10.1177/0956797612449177 |
| [21] | Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349-379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d |
| [22] | Formosa, P., & Ryan, M. (2021). Making moral machines: Why we need artificial moral agents. AI and Society, 36(3), 839-851. https://doi.org/10.1007/s00146-020-01089-6 |
| [23] | Gansser, O. A., & Reich, C. S. (2021). A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application. Technology in Society, 65, 101535. https://doi.org/10.1016/j.techsoc.2021.101535 |
| [24] | Gifford, R., & Sussman, R. (2012). Environmental attitudes. In S. D. Clayton (Ed.), The Oxford handbook of environmental and conservation psychology (pp. 65-80). Oxford University Press. |
| [25] | Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the moral domain. Journal of Personality and Social Psychology, 101(2), 366-385. https://doi.org/10.1037/a0021847 |
| [26] | Grazzini, L., Acuti, D., & Aiello, G. (2021). Solving the puzzle of sustainable fashion consumption: The role of consumers’ implicit attitudes and perceived warmth. Journal of Cleaner Production, 287, 125579. https://doi.org/10.1016/j.jclepro.2020.125579 |
| [27] | Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. The MIT Press. |
| [28] | Haesevoets, T., De Cremer, D., Dierckx, K., & Van Hiel, A. (2021). Human-machine collaboration in managerial decision making. Computers in Human Behavior, 119, 106730. https://doi.org/10.1016/j.chb.2021.106730 |
| [29] | Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814-834. https://doi.org/10.1037/0033-295x.108.4.814 |
| [30] | Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133(4), 55-66. https://doi.org/10.1162/0011526042365555 |
| [31] | Haikonen, P. O. (2007). Robot brains: Circuits and systems for conscious machines. John Wiley & Sons. |
| [32] | Hernandez, J. M., Wright, S. A., & Ferminiano Rodrigues, F. (2015). Attributes versus benefits: The role of construal levels and appeal type on the persuasiveness of marketing messages. Journal of Advertising, 44, 243-253. https://doi.org/10.1080/00913367.2014.967425 |
| [33] | Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19-29. https://doi.org/10.1007/s10676-008-9167-5 |
| [34] | Jay, C., Yu, Y., Crawford, I., James, P., Gledson, A., Shaddick, G.,... Topping, D. (2024). Prioritize environmental sustainability in use of AI and data science methods. Nature Geoscience, 17(2), 106-108. https://doi.org/10.1038/s41561-023-01369-y |
| [35] | Jia, F., Soucie, K., Alisat, S., Curtin, D., & Pratt, M. (2017). Are environmental issues moral issues? Moral identity in relation to protecting the natural world. Journal of Environmental Psychology, 52, 104-113. https://doi.org/10.1016/j.jenvp.2017.06.004 |
| [36] | Johnson, S. G., & Ahn, J. (2020). Principles of moral accounting: How our intuitive moral sense balances rights and wrongs. Cognition, 206, 104467. https://doi.org/10.1016/j.cognition.2020.104467 |
| [37] | Kelly, S., Kaye, S., & Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 101925. https://doi.org/10.1016/j.tele.2022.101925 |
| [38] | Kneer, M., & Stuart, M. T. (2021). Playing the blame game with robots. In C. Bethel, A. Paiva, E. Broadbent, D. Feil-Seifer, & D. Szafir (Chairs), Companion of the 2021 ACM/IEEE international conference on human-robot interaction (pp. 407-411). Association for Computing Machinery. https://doi.org/10.1145/3434074.3447202 |
| [39] | Krettenauer, T. (2017). Pro-environmental behavior and adolescent moral development. Journal of Research on Adolescence, 27(3), 581-593. https://doi.org/10.1111/jora.12300 |
| [40] | Lakoff, G. (1995). Metaphor, morality, and politics or, why conservatives have left liberals in the dust. Social Research, 62, 177-213. |
| [41] | MacKinnon, D. P., Krull, J. L., & Lockwood, C. M. (2000). Equivalence of the mediation, confounding and suppression effect. Prevention Science, 1(4), 173-181. https://doi.org/10.1023/a:1026595011371 |
| [42] | Maninger, T., & Shank, D. B. (2022). Perceptions of violations by artificial and human actors across moral foundations. Computers in Human Behavior Reports, 5, 100154. https://doi.org/10.1016/j.chbr.2021.100154 |
| [43] | McCright, A. M., & Dunlap, R. E. (2011). The politicization of climate change and polarization in the American public's views of global warming, 2001-2010. The Sociological Quarterly, 52(2), 155-194. https://doi.org/10.1111/j.1533-8525.2011.01198.x |
| [44] | Mert, W., Suschek-Berger, J., & Tritthart, W. (2008). Consumer acceptance of smart appliances. Graz: Inter-University Research Centre on Technology, Work and Culture. |
| [45] | Monroe, A. E., Dillon, K. D., & Malle, B. F. (2014). Bringing free will down to earth: People’s psychological concept of free will and its role in moral judgment. Consciousness and Cognition, 27, 100-108. https://doi.org/10.1016/j.concog.2014.04.011 |
| [46] | Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18-21. https://doi.org/10.1109/MIS.2006.80 |
| [47] | Nishant, R., Kennedy, M., & Corbett, J. (2020). Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. International Journal of Information Management, 53, 102104. https://doi.org/10.1016/j.ijinfomgt.2020.102104 |
| [48] | Pal, N. R. (2020). In search of trustworthy and transparent intelligent systems with human-like cognitive and reasoning capabilities. Frontiers in Robotics and AI, 7, 76. https://doi.org/10.3389/frobt.2020.00076 |
| [49] | Park, E., Hwang, B., Ko, K., & Kim, D. (2017). Consumer acceptance analysis of the home energy management system. Sustainability, 9(12), 2351. https://doi.org/10.3390/su9122351 |
| [50] | Ray, J. L., Mende-Siedlecki, P., Gantman, A., & Van Bavel, J. J. (2021). The role of morality in social cognition. In The neural basis of mentalizing (pp. 555-566). Springer. https://doi.org/10.1007/978-3-030-51890-5.ch28 |
| [51] | Schwartz, D., & Loewenstein, G. (2020). Encouraging pro-environmental behaviour through green identity labelling. Nature Sustainability, 3(9), 746-752. https://doi.org/10.1038/s41893-020-0543-4 |
| [52] | Schwartz, R., Dodge, J., Smith, N. A., & Etzioni, O. (2020). Green AI. Communications of the ACM, 63(12), 54-63. https://doi.org/10.1145/3381831 |
| [53] | Siala, H., & Wang, Y. (2022). SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Social Science & Medicine, 296, 114782. https://doi.org/10.1016/j.socscimed.2022.114782 |
| [54] | Strubell, E., Ganesh, A., & McCallum, A. (2020). Energy and policy considerations for modern deep learning research. Proceedings of the AAAI Conference on Artificial Intelligence, 34(9), 13693-13696. https://doi.org/10.1609/aaai.v34i09.7123 |
| [55] | Sullins, J. P. (2006). When is a robot a moral agent? International Review of Information Ethics, 6, 23-30. |
| [56] | Swanepoel, D. (2021). Does artificial intelligence have agency? In R. W. Clowes, K. Gärtner, & I. Hipólito (Eds.), The mind-technology problem (Studies in brain and mind, Vol. 18, pp. 88-104). Springer. |
| [57] | Tetlock, P. E. (2002). Social functionalist frameworks for judgment and choice: Intuitive politicians, theologians, and prosecutors. Psychological Review, 109(3), 451-471. https://doi.org/10.1037/0033-295X.109.3.451 |
| [58] | Tetlock, P. E. (2003). Thinking the unthinkable: Sacred values and taboo cognitions. Trends in Cognitive Sciences, 7(7), 320-324. https://doi.org/10.1016/S1364-6613(03)00135-9 |
| [59] | Urban, J., Bahník, Š., & Kohlová, M. B. (2023). Pro-environmental behavior triggers moral inference, not licensing by observers. Environment and Behavior, 55(1-2), 74-98. https://doi.org/10.1177/00139165231163547 |
| [60] | van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethics, 1, 213-218. https://doi.org/10.1007/s43681-021-00043-6 |
| [61] | van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25(3), 719-735. https://doi.org/10.1007/s11948-018-0030-8 |
| [62] | Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478. https://doi.org/10.2307/30036540 |
| [63] | Verdecchia, R., Sallou, J., & Cruz, L. (2023). A systematic review of Green AI. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 13(4), e1507. https://doi.org/10.1002/widm.1507 |
| [64] | Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S.,... Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1), 1-10. https://doi.org/10.1038/s41467-019-14108-y |
| [65] | Wilson, A., Stefanik, C., & Shank, D. B. (2022). How do people judge the immorality of artificial intelligence versus humans committing moral wrongs in real-world situations? Computers in Human Behavior Reports, 8, 100229. https://doi.org/10.1016/j.chbr.2022.100229 |
| [66] | Wyss, A. M., Knoch, D., & Berger, S. (2022). When and how pro-environmental attitudes turn into behavior: The role of costs, benefits, and self-control. Journal of Environmental Psychology, 79, 101748. https://doi.org/10.1016/j.jenvp.2021.101748 |
| [67] | Yu, F. (2020). On AI and human beings. Frontiers, (1), 30-36. |
| [68] | Yu, F., & Xu, L. (2018). How to make an ethical intelligence? Answer from a psychological perspective. Global Journal of Media Studies, 5(4), 24-42. |
| [69] | Złotowski, J., Yogeeswaran, K., & Bartneck, C. (2017). Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. International Journal of Human-Computer Studies, 100, 48-54. https://doi.org/10.1016/j.ijhcs.2016.12.008 |