[1] Ames, D. R. (2004). Strategies for social inference: A similarity contingency model of projection and stereotyping in attribute prevalence estimates. Journal of Personality and Social Psychology, 87(5), 573-585.
[2] Ames, D. R., Weber, E. U., & Zou, X. (2012). Mind-reading in strategic interaction: The impact of perceived similarity on projection and stereotyping. Organizational Behavior and Human Decision Processes, 117(1), 96-110.
[3] Ashrafian, H. (2015). AIonAI: A humanitarian law of artificial intelligence and robotics. Science and Engineering Ethics, 21(1), 29-40.
[4] Asimov, I. (1942). Runaround. In I, Robot (p. 40). Doubleday.
[5] Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., ... Rahwan, I. (2018). The moral machine experiment. Nature, 563(7729), 59-64.
[6] Babel, F., Kraus, J., Miller, L., Kraus, M., Wagner, N., Minker, W., & Baumann, M. (2021). Small talk with a robot? The impact of dialog content, talk initiative, and gaze behavior of a social robot on trust, acceptance, and proximity. International Journal of Social Robotics, 13(6), 1485-1498.
[7] Bago, B., & De Neys, W. (2019). The intuitive greater good: Testing the corrective dual process model of moral cognition. Journal of Experimental Psychology: General, 148(10), 1782-1801.
[8] Banks, J. (2021). Good robots, bad robots: Morally valenced behavior effects on perceived mind, morality, and trust. International Journal of Social Robotics, 13(8), 2021-2038.
[9] Bartneck, C., Kanda, T., Mubin, O., & Al Mahmud, A. (2009). Does the design of a robot influence its animacy and perceived intelligence? International Journal of Social Robotics, 1(2), 195-204.
[10] Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34.
[11] Bonezzi, A., Ostinelli, M., & Melzner, J. (2022). The human black-box: The illusion of understanding human better than algorithmic decision-making. Journal of Experimental Psychology: General, 151(9), 2250-2258.
[12] Brendel, A. B., Mirbabaie, M., Lembcke, T.-B., & Hofeditz, L. (2021). Ethical management of artificial intelligence. Sustainability, 13(4), 1974.
[13] Cameron, D., de Saille, S., Collins, E. C., Aitken, J. M., Cheung, H., Chua, A., ... Law, J. (2021). The effect of social-cognitive recovery strategies on likability, capability and trust in social robots. Computers in Human Behavior, 114, 106561.
[14] Clarke, R. (1994). Asimov's laws of robotics: Implications for information technology. Computer, 26(12), 53-61.
[15] Cominelli, L., Feri, F., Garofalo, R., Giannetti, C., Meléndez-Jiménez, M. A., Greco, A., ... Kirchkamp, O. (2021). Promises and trust in human-robot interaction. Scientific Reports, 11, 9687.
[16] Etemad-Sajadi, R., Soussan, A., & Schöpfer, T. (2022). How ethical issues raised by human-robot interaction can impact the intention to use the robot? International Journal of Social Robotics, 14, 1103-1115.
[17] Fan, L., Scheutz, M., Lohani, M., McCoy, M., & Stokes, C. (2017). Do we need emotionally intelligent artificial agents? First results of human perceptions of emotional intelligence in humans compared to robots. In J. Beskow, C. Peters, G. Castellano, C. O'Sullivan, I. Leite, & S. Kopp (Eds.), Lecture Notes in Computer Science: Vol. 10498. Intelligent virtual agents (pp. 129-141). Springer.
[18] Fu, C., Zhang, Z., He, J. Z., Huang, S. L., Qiu, J. Y., & Wang, Y. W. (2018). Brain dynamics of decision-making in the generalized trust game: Evidence from ERPs and EEG time-frequency analysis. Acta Psychologica Sinica, 50(3), 317-326. [付超, 张振, 何金洲, 黄四林, 仇剑崟, 王益文. (2018). 普遍信任博弈决策的动态过程——来自脑电时频分析的证据. 心理学报, 50(3), 317-326.]
[19] Gamez, P., Shank, D. B., Arnold, C., & North, M. (2020). Artificial virtue: The machine question and perceptions of moral character in artificial moral agents. AI & Society, 35(4), 795-809.
[20] Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619.
[21] Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125-130.
[22] Haring, K. S., Matsumoto, Y., & Watanabe, K. (2013, October). How do people perceive and trust a lifelike robot? In 2013 World Congress on Engineering and Computer Science (pp. 425-430). San Francisco, California, United States.
[23] Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review, 10(3), 252-264.
[24] IEEE. (2019). Ethically aligned design: A vision for prioritizing human well-being with artificial intelligence and autonomous systems. Retrieved May 20, 2022, from https://ieeexplore.ieee.org/document/8058187
[25] Johnson, A. M., & Axinn, S. (2013). The morality of autonomous robots. Journal of Military Ethics, 12(2), 129-141.
[26] Judd, C. M., Kenny, D. A., & McClelland, G. H. (2001). Estimating and testing mediation and moderation in within-subject designs. Psychological Methods, 6, 115-134.
[27] Kaminka, G. A., Spokoini-Stern, R., Amir, Y., Agmon, N., & Bachelet, I. (2017). Molecular robots obeying Asimov's three laws of robotics. Artificial Life, 23(3), 343-350.
[28] Khavas, Z. R., Ahmadzadeh, S. R., & Robinette, P. (2020). Modeling trust in human-robot interaction: A survey. In A. R. Wagner, D. Feil-Seifer, K. S. Haring, S. Rossi, T. Williams, H. He, & S. Sam Ge (Eds.), Lecture Notes in Computer Science: Vol. 12483. Social robotics (pp. 529-541). Springer.
[29] Krueger, J. (2000). The projective perception of the social world. In J. Suls & L. Wheeler (Eds.), The Springer series in social clinical psychology: Handbook of social comparison (pp. 323-351). Springer.
[30] Laakasuo, M., Palomäki, J., & Köbis, N. (2021). Moral uncanny valley: A robot's appearance moderates how its decisions are judged. International Journal of Social Robotics, 13(7), 1679-1688.
[31] Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80.
[32] Madhavan, P., & Wiegmann, D. A. (2004). A new look at the dynamics of human-automation trust: Is trust in humans comparable to trust in machines? Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 48(3), 581-585.
[33] Malle, B. F., & Ullman, D. (2021). A multidimensional conception and measure of human-robot trust. In C. S. Nam & J. B. Lyons (Eds.), Trust in human-robot interaction (pp. 3-25). Academic Press.
[34] Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015, March). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (pp. 117-124). Portland, Oregon, United States.
[35] Malle, B. F., Scheutz, M., Forlizzi, J., & Voiklis, J. (2016, March). Which robot am I thinking about? The impact of action and appearance on people's evaluations of a moral robot. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (pp. 125-132). Christchurch, New Zealand.
[36] Maninger, T., & Shank, D. B. (2022). Perceptions of violations by artificial and human actors across moral foundations. Computers in Human Behavior Reports, 5, 100154.
[37] Milli, S., Hadfield-Menell, D., Dragan, A., & Russell, S. (2017, August). Should robots be obedient? In Proceedings of the 26th International Joint Conference on Artificial Intelligence (pp. 4754-4760). Melbourne, Australia.
[38] Montoya, A. K., & Hayes, A. F. (2017). Two-condition within-participant statistical mediation analysis: A path-analytic framework. Psychological Methods, 22(1), 6-27.
[39] Mor, S., Toma, C., Schweinsberg, M., & Ames, D. (2019). Pathways to intercultural accuracy: Social projection processes and core cultural values. European Journal of Social Psychology, 49(1), 47-62.
[40] Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230-253.
[41] Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
[42] Schein, C., & Gray, K. (2018). The theory of dyadic morality: Reinventing moral judgment by redefining harm. Personality and Social Psychology Review, 22(1), 32-70.
[43] Shank, D. B., DeSanti, A., & Maninger, T. (2019). When are artificial intelligence versus human agents faulted for wrongdoing? Moral attributions after individual and joint decisions. Information, Communication & Society, 22(5), 648-663.
[44] Vanderelst, D., & Winfield, A. (2018). An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research, 48, 56-66.
[45] Wang, Y. W., Fu, C., Ren, X. F., Lin, Y. Z., Guo, F. B., Zhang, Z., ... Zheng, Y. W. (2017). Narcissistic personality modulates outcome evaluation in the trust game. Acta Psychologica Sinica, 49(8), 1080-1088. [王益文, 付超, 任相峰, 林羽中, 郭丰波, 张振, ... 郑玉玮. (2017). 自恋人格调节信任博弈的结果评价. 心理学报, 49(8), 1080-1088.]
[46] Waytz, A. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113-117.
[47] Zhao, T. Y. (2015). The forking paths for the trolley problem. Philosophical Research, 5, 96-102. [赵汀阳. (2015). 有轨电车的道德分叉. 哲学研究, 5, 96-102.]
[48] Zhu, J. (2013). Do experimental studies in cognitive science show that deontological ethics is wrong? A comment on Joshua Greene's attack on Kantian ethics. Academic Monthly, 45(1), 56-62. [朱菁. (2013). 认知科学的实验研究表明道义论哲学是错误的吗?——评加西华·格林对康德伦理学的攻击. 学术月刊, 45(1), 56-62.]