ISSN 0439-755X
CN 11-1911/B

Archive

    Special Issue on Ethical Dimensions of the Digital and Intelligence Era
    Ethical challenges in the digital and intelligence era (preface)
    ZHOU Xinyue, LIU Huijie
    2024, 56 (2):  143-145. 
    Multicultural experiences enhance human altruism toward robots: The mediating role of mind perception
    TENG Yue, ZHANG Haotian, ZHAO Siqi, PENG Kaiping, HU Xiaomeng
    2024, 56 (2):  146-160.  doi: 10.3724/SP.J.1041.2024.00146

    Artificial intelligence is developing rapidly, and the future of human beings is closely bound up with it. How humans and robots can better work together has become a pressing concern for social psychologists. Human-robot interaction is a two-way process, and existing research has largely explored how robots can better serve humans. Whether humans extend the same moral concern to robots, or even act altruistically toward them, is equally critical, as it will feed into technological advances and the stability of human society. Few studies have examined which cultural and psychological factors shape people's willingness to accord robots moral status (i.e., to believe they deserve the same rights and benefits as humans) and to perform altruistic acts toward them. Through two cross-sectional sub-studies and one experimental study, the present work explores whether individuals' multicultural experiences enhance altruistic behavior toward robots and whether mind perception of robots plays a mediating role.

    Study 1a was a cross-sectional study in which 217 valid responses (mean age 25.64 years) were collected in China through the Questionnaire Star platform, measuring participants' multicultural experiences, altruistic behavior toward robots, mind perception, and demographic information. To test the cross-cultural generalizability of the model in which multicultural experiences enhance altruistic behavior toward robots, Study 1b replicated the procedure of Study 1a on MTurk with 313 valid participants (mean age 33.94 years), using an English version of the Study 1a questionnaire. Finally, to establish a causal relationship between multicultural experiences and altruistic behavior toward robots, Study 2 recruited Chinese participants with six months or more of overseas experience and primed multicultural experiences through reading and writing tasks. A total of 249 valid responses were collected in Study 2 (mean age 25.96 years), and participants were randomly assigned to a multicultural experience priming group, a hometown experience priming group, or a control group. After priming, participants completed the manipulation check scale, the Mind Perception Scale, and the Altruistic Behavior Toward Robots Questionnaire, with the three measures presented in random order. Finally, participants reported a number of demographic variables.

    Study 1a found that individuals' multicultural experiences positively predicted altruistic behavior toward robots, with mind perception playing a partially mediating role (Tables 1 and 2). Study 1b found that this mediation chain was consistent across Chinese and Western participants, with no cultural differences between the two samples (Tables 3 and 4); we therefore infer that the effect has some degree of cultural generalizability. Study 2 found that multicultural experiences were manipulated successfully, F(2, 246) = 3.65, p = 0.032, η²p = 0.029, but the main effect of multicultural experiences on altruistic behavior toward robots did not reach significance, F(2, 246) = 2.18, p = 0.120.
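    For reference, the partial eta squared for the manipulation check can be recovered from the reported F statistic and its degrees of freedom. Assuming the standard one-way ANOVA relation (the abstract does not report sums of squares), the value works out as:

        \eta_p^2 = \frac{F \cdot df_1}{F \cdot df_1 + df_2} = \frac{3.65 \times 2}{3.65 \times 2 + 246} \approx 0.029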

    The current work reveals that individuals' multicultural experiences increase altruistic behavior toward robots. Specifically, the richer individuals' multicultural experiences, the more likely they are to perceive robots as possessing minds, which in turn makes altruistic behavior toward robots more likely. This effect is, to some extent, cross-culturally generalizable. The results enrich the theoretical predictions of multicultural experience research, identify possible "downstream effects" of multicultural experiences, and make an original contribution to the study of which cultural factors enhance human altruistic behavior toward robots.

    The influence of cultural differences between China and the West on moral responsibility judgments of virtual humans
    YAN Xiao, MO Tiantian, ZHOU Xinyue
    2024, 56 (2):  161-178.  doi: 10.3724/SP.J.1041.2024.00161

    Virtual humans are digital characters created in computer graphics software that take a first-person view of the world and have a social media presence. Compared with real humans, are people likely to attribute moral responsibility differently to virtual humans when they do something morally wrong? This important empirical question remains unanswered. We addressed it using mind perception theory, exploring the influence and mechanism of cultural differences between China and the West on individuals' moral responsibility judgments of virtual humans versus real humans. Findings revealed that, when virtual humans engaged in immoral behaviors, irrespective of whether real humans or artificial intelligence (AI) controlled them, people in China (vs. the West) attributed more moral responsibility to virtual humans but equal moral responsibility to real humans (Studies 1a-1c). Perceived mental capacity, especially perceived experience, mediated this cultural difference (Study 2). Furthermore, compared with Westerners, Chinese people were more likely to punish virtual (vs. real) humans, for example by no longer following their social media accounts (Study 3). The research provides evidence for cultural differences between Chinese people and Westerners in moral responsibility judgments of virtual humans and contributes to the literature on cultural differences and to theory about moral judgments of non-human entities.

    “Win-win” vs. “sacrifice”: Impact of framing of ethical consumption on trust in algorithmic recommendation
    XU Lan, CHEN Quan, CUI Nan, GU Hong
    2024, 56 (2):  179-193.  doi: 10.3724/SP.J.1041.2024.00179
    In making ethical consumption decisions, consumers need to weigh the instrumental and ethical attributes of products and make tradeoffs. This complexity is a critical barrier between consumers' ethical consumption intentions and their actual ethical consumption behavior. With the development of artificial intelligence, algorithms are increasingly used to provide advice to consumers, which can reduce the difficulty of decision-making. However, studies have found that people are reluctant to adopt algorithms for decisions involving ethical trade-offs.
    This study aims to provide a potential solution for increasing consumers' trust in algorithmic recommendations in ethical consumption contexts. In particular, it proposes that marketers can influence consumers' trust in algorithmic recommendations by changing the narrative framing of ethical consumption. When ethical consumption is described as a "win-win" (vs. a "sacrifice"), consumers are more likely to trust the algorithm's recommendations for ethical consumption decisions. The sacrifice framing emphasizes that consumers must make personal sacrifices to help the environment or society; the win-win framing emphasizes that there is no conflict of utilitarian interests between consumers and other stakeholders and encourages consumers to see ethical consumption as serving the greater good of everyone. Win-win (vs. sacrifice) framing therefore enhances consumers' belief that achieving moral values is compatible with optimizing the overall utilitarian value of the community in ethical consumption situations, making consumers more likely to trust algorithmic tools that maximize utilitarian value to solve ethical consumption decision problems.
    By portraying ethical consumption as a win-win, consumers come to perceive themselves as part of a community of common interest with relevant stakeholders, which helps reinforce utilitarian moral beliefs. Utilitarianism holds that pursuing the maximization of common interest is a rational way to achieve moral correctness. Accordingly, activating the win-win (vs. sacrifice) framing encourages consumers to identify more with utilitarianism in ethical consumption decisions, thereby increasing their trust in algorithmic recommendations in ethical contexts.
    Through three experiments, this study examines the impact of ethical consumption narrative framing strategies (win-win vs. sacrifice) on trust in algorithmic consumption recommendations. Experiment 1 tests how activating the win-win (vs. sacrifice) framing affects consumers' preference for the source of consumption recommendations; results show that the win-win framing significantly increases consumers' inclination to choose algorithmic recommendations. Experiment 2 examines the influence of the framing on trust in consumption recommendations and the mediating role of utilitarian moral values; results indicate that the win-win (vs. sacrifice) framing enhances consumers' trust in algorithmic recommendations but does not significantly affect trust in recommendations from human experts, and that consumers' acceptance of utilitarianism in ethical decisions mediates this positive effect. Lastly, Experiment 3 examines the boundary conditions of the win-win (vs. sacrifice) framing strategy and finds that it works only for recommendations from substitutive algorithms and has no effect on algorithm-augmented recommendations from human experts.
    Do robots that abide by ethical principles promote human-robot trust? The reverse effect of decision types and the human-robot projection hypothesis
    WANG Chen, CHEN Weicong, HUANG Liang, HOU Suyu, WANG Yiwen
    2024, 56 (2):  194-209.  doi: 10.3724/SP.J.1041.2024.00194
    Asimov's Three Laws of Robotics are the basic ethical principles for artificially intelligent robots. Robot ethics is a significant factor influencing people's trust in human-robot interaction, yet how it affects that trust is poorly understood. In this article, we present a new hypothesis for interpreting the effect of robots' ethics on human-robot trust, which we call the human-robot projection hypothesis (HRP hypothesis). Under this hypothesis, people draw on their own intelligence (e.g., intelligence for cognition, emotion, and action) to understand robots' intelligence and interact with them. We propose that, compared with robots that violate ethical principles, people project more mind energy (i.e., a higher level of human-like mental capacity) onto robots that abide by ethical principles, thus promoting human-robot trust.
    In this study, we conducted three experiments to explore how presenting scenarios in which a robot abided by or violated Asimov's principles affected people's trust in the robot. Each experiment corresponds to one of Asimov's principles, allowing us to explore the interaction with the type of robot decision. Specifically, all three experiments used 2 × 2 within-subjects designs. The first factor was whether the robot abided by the Asimov principle whose core element is "no harm". The second factor was the type of robot decision, which differed across experiments according to the principle involved (Experiment 1: whether the robot takes action or not; Experiment 2: whether the robot obeys a human's order or not; Experiment 3: whether the robot protects itself or not). We assessed human-robot trust using the trust game paradigm.
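    As an illustration of how the trust game paradigm quantifies trust, the sketch below simulates a single investment trial with a robot trustee. It is a minimal Python sketch; the endowment, multiplier, return rate, and function name are illustrative assumptions, not the authors' experimental materials.

        # A single investment trial: the participant (investor) chooses how much of an
        # endowment to entrust to the robot trustee; the entrusted amount is multiplied
        # and the trustee may return a share. The amount entrusted indexes trust.
        # Endowment, multiplier, and return rate are illustrative assumptions.
        def trust_game_trial(amount_sent, endowment=10.0, multiplier=3.0, return_rate=0.5):
            """Return (trust_index, investor_payoff) for one trust-game trial."""
            assert 0 <= amount_sent <= endowment, "investment must lie within the endowment"
            received_by_robot = amount_sent * multiplier
            returned = received_by_robot * return_rate
            investor_payoff = endowment - amount_sent + returned
            trust_index = amount_sent / endowment  # normalized trust measure in [0, 1]
            return trust_index, investor_payoff

        # Example: a participant entrusts 7 of 10 units to a rule-abiding robot.
        print(trust_game_trial(7.0))

    In such a paradigm, the amount a participant is willing to entrust to the robot (here normalized by the endowment) serves as the behavioral index of human-robot trust.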
    Experiments 1-3 consistently showed that people were more willing to trust robots that abided by ethical principles than robots that violated them. We also found that human-robot projection played a mediating role, which supports the HRP hypothesis. In addition, significant interactions between the type of robot decision and whether the robot abided by or violated Asimov's principles emerged in all three experiments. Experiment 1 showed that action robots received more trust than inaction robots when they abided by the first principle, whereas inaction robots received more trust than action robots when they violated it. Experiment 2 showed that disobedient robots received less trust than obedient robots, and this detrimental effect was greater when robots violated the second principle than when they abided by it. Experiment 3 showed that the trust-promoting effect of self-protection over self-destruction was stronger among robots that abided by the third principle than among those that violated it. These results indicate that the reverse effect of decision type existed in Experiments 1 and 3. Finally, cross-experimental analyses showed that: (1) when robots abided by ethical principles, their inaction and disobedience still compromised human-robot trust, whereas when robots violated ethical principles, obedience incurred the smallest loss of human-robot trust and action and disobedience incurred relatively severe losses; and (2) when the ethical requirements of different robotic laws conflicted, not harming humans and obeying human orders were equally important for human-robot trust, and both were more important than robots protecting themselves.
    This study helps clarify the impact of robotic ethical decision-making on human-robot trust and the important role of human-robot projection, with implications for future research on human-robot interaction.
    The influence of perceived robot threat on workplace objectification
    XU Liying, WANG Xuehui, YU Feng, PENG Kaiping
    2024, 56 (2):  210-225.  doi: 10.3724/SP.J.1041.2024.00210

    With buzzwords such as "tool man", "laborer", and "corporate slave" sweeping the workplace, workplace objectification has become an urgent topic. As artificial intelligence, and especially robots, is increasingly used in the workplace, the workplace effects produced by robots also deserve attention. The present paper therefore explores whether people's perception of the threat robots pose to them produces or aggravates workplace objectification. Drawing on prior research on workplace objectification and the robot workforce, and on intergroup threat theory, the paper elaborates the realistic threat that the robot workforce poses to human employment and safety, as well as the identity threat it poses to human identity and uniqueness. From the perspective of compensatory control theory, it proposes the mechanism and boundary conditions by which perceived robot threat reduces people's sense of control, thereby triggering compensatory control processes that in turn lead to workplace objectification.

    This research comprises eight studies. The first study includes two sub-studies that investigate the relationship between perceived robot threat and workplace objectification through a questionnaire and an online experiment, seeking both a positive correlation and a causal association. As predicted, workplace objectification was positively correlated with perceived robot realistic threat (r = 0.15, p < 0.001) and perceived robot identity threat (r = 0.18, p < 0.001) (Study 1a). In Study 1b, workplace objectification was significantly higher in the high perceived robot threat condition (M = 3.54, SD = 1.01) than in the low perceived robot threat condition (M = 3.32, SD = 0.92), F(1, 399) = 4.94, p = 0.027, η²p = 0.01.

    The second study comprises three sub-studies that explore why perceived robot threat increases workplace objectification, aiming to verify the mediating effect of control compensation (i.e., sense of control), to explain the psychological mechanism behind the effect, and to replicate it with different methods. In Study 2a, workplace objectification was positively correlated with perceived robot realistic threat (r = 0.12, p = 0.017) and perceived robot identity threat (r = 0.18, p < 0.001). A bootstrapping mediation analysis (model 4, 5000 iterations) showed that the effect of perceived robot identity threat on workplace objectification was mediated by sense of control, b = 0.02, 95% CI = [0.002, 0.038]. In Study 2b, workplace objectification was significantly higher in the high perceived robot threat condition (M = 2.85, SD = 0.90) than in the low perceived robot threat condition (M = 2.64, SD = 0.65), F(1, 295) = 5.49, p = 0.020, η²p = 0.02; a bootstrapping mediation analysis (model 4, 5000 iterations) again showed mediation by sense of control, b = 0.11, 95% CI = [0.020, 0.228]. In Study 2c, a one-way ANOVA revealed that perceived robot threat influenced workplace objectification, F(2, 346) = 3.68, p = 0.026, η²p = 0.02. Post-hoc pairwise comparisons with Bonferroni correction showed that workplace objectification was significantly higher in the perceived robot identity threat condition (M = 3.11, SD = 0.82) than in the control condition (M = 2.85, SD = 0.72), p = 0.028. A bootstrapping mediation analysis (model 4, 5000 iterations) again showed that the effect of perceived robot identity threat on workplace objectification was mediated by sense of control, b = 0.116, 95% CI = [0.027, 0.215].
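    For readers unfamiliar with the bootstrapping mediation procedure cited above (model 4 with 5000 resamples), the following Python sketch shows a bare-bones percentile-bootstrap estimate of the indirect effect a × b in a simple mediation model. The variable names and simulated effect sizes are illustrative assumptions, not the authors' data or analysis code.

        import numpy as np

        def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
            """Percentile-bootstrap CI for the indirect effect of x on y through m."""
            rng = np.random.default_rng(seed)
            n = len(x)
            estimates = np.empty(n_boot)
            for i in range(n_boot):
                idx = rng.integers(0, n, n)           # resample cases with replacement
                xb, mb, yb = x[idx], m[idx], y[idx]
                a = np.polyfit(xb, mb, 1)[0]          # path a: x -> m
                design = np.column_stack([np.ones(n), xb, mb])
                b = np.linalg.lstsq(design, yb, rcond=None)[0][2]  # path b: m -> y, controlling for x
                estimates[i] = a * b                  # indirect effect for this resample
            lower, upper = np.percentile(estimates, [2.5, 97.5])
            return estimates.mean(), (lower, upper)

        # Illustrative use with simulated data (hypothetical effect sizes):
        rng = np.random.default_rng(1)
        threat = rng.normal(size=300)                            # perceived robot identity threat
        control = -0.3 * threat + rng.normal(size=300)           # sense of control drops with threat
        objectification = -0.4 * control + rng.normal(size=300)  # objectification rises as control drops
        print(bootstrap_indirect_effect(threat, control, objectification))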

    The third study also consists of three sub-studies. Based on the three compensatory control strategies proposed by compensatory control theory, beyond affirming nonspecific structure, this study further explores the moderating effects of personal agency, external agency, and specific structure. As predicted, personal agency moderated the effect of perceived robot identity threat on workplace objectification: in the low personal agency condition, perceived robot identity threat had a significant effect on workplace objectification (b = 0.57, SE = 0.17, t = 3.30, p = 0.001), while the effect was not significant in the high personal agency condition (b = −0.10, SE = 0.16, t = −0.62, p = 0.536) (Study 3a). External agency also significantly moderated the relationship: in the low external agency condition, perceived robot identity threat had a significant effect on workplace objectification (b = 0.18, SE = 0.06, t = 2.63, p = 0.004), while the effect was not significant in the high external agency condition (b = 0.01, SE = 0.06, t = 1.10, p = 0.920) (Study 3b). Similarly, Study 3c revealed that specific structure significantly moderated the relationship: in the low specific structure condition, perceived robot identity threat had a significant effect on workplace objectification (b = 0.24, SE = 0.07, t = 3.64, p < 0.001), while the effect was not significant in the high specific structure condition (b = −0.02, SE = 0.07, t = −0.27, p = 0.784).

    The main findings can be summarized as follows. First, perceived robot threat, especially identity threat, increases workplace objectification. Second, sense of control mediates the effect of perceived robot threat (mainly identity threat) on workplace objectification: the higher the perceived robot identity threat, the lower the sense of control, and the more severe the workplace objectification. Third, the other three strategies proposed by compensatory control theory, namely strengthening personal agency, supporting external agency, and affirming specific structure, moderate the effect of perceived robot threat on workplace objectification.

    The main theoretical contributions are as follows. First, the paper reveals the negative influence of robots on interpersonal relationships and its psychological mechanism. Second, it extends compensatory control theory to the field of artificial intelligence by proposing and verifying that perceived robot threat increases workplace objectification through compensatory control. Third, it discusses the relationships among different compensatory control strategies and proposes and verifies a moderation model of how perceived robot threat affects workplace objectification. The practical contributions are twofold: the findings offer insights into the anthropomorphic design of robots, and they help us better understand, anticipate, and mitigate the negative social impact of robots.

    The interactive effect of processing fluency and information credibility on donation in the digital philanthropy context
    ZHENG Xiaoying, HAN Runlei, LIU Ruhan, XU Jing
    2024, 56 (2):  226-238.  doi: 10.3724/SP.J.1041.2024.00226

    With the development of digital technology, internet-based fundraising platforms for charitable causes have played an increasingly vital role and provided convenience for both donors and recipients. However, because interpersonal contact and communication are lacking in the digital philanthropy context, information becomes the most important factor shaping individuals' donation decisions. Prior research has focused mainly on how information content affects donation behaviors, with insufficient attention paid to the role of information format in nudging donation decisions. Given this gap, we explored the impact of processing fluency on donation decisions, its underlying mechanisms, and its boundary conditions in the digital philanthropy context. Based on the feelings-as-information theory and conceptual metaphor theory, we proposed that processing fluency and credibility cues interact to affect individuals' donation decisions. Specifically, in the absence of credibility cues, processing fluency positively affects individuals' donation intentions by enhancing the perceived credibility of the help-seeking information; in the presence of credibility cues, processing disfluency positively influences donation intentions by increasing the perceived hardship the help-seeker is suffering.

    We conducted four experiments to examine our hypotheses. Experiments 1a (N = 207) and 1b (N = 103) used a one-way between-subjects design (processing fluency: high vs. low) to test the causal link between processing fluency and donation behaviors and the mediating role of perceived information credibility. Experiments 2a (N = 300) and 2b (N = 406) employed a 2 (processing fluency: high vs. low) × 2 (credibility cues: present vs. absent) between-subjects design. In these two experiments we introduced credibility cues as a moderator and, in addition to the mediating role of perceived information credibility, examined the mediating effect of perceived hardship of the help-seeker on the relationship between processing fluency and donation behaviors. To ensure validity, we manipulated processing fluency in several ways (e.g., changing text fonts and transparency), recruited participants with different cultural backgrounds from various sources (university students and respondents on online survey platforms such as Prolific and Credamo), used different types of donation appeals (e.g., disabled veterans or sick children), and adopted multiple outcome measures (e.g., donation intention, donation amount, and information-sharing intention).

    The key findings are as follows. (1) In the absence of credibility cues, there is a significant positive relationship between processing fluency and donation intentions, mediated by perceived information credibility: when the authenticity of the information has not been verified, people use the fluency of information processing as a cue to infer its credibility; the more fluently they process the help-seeking information, the higher its perceived credibility, and the greater their willingness to donate. (2) In the presence of credibility cues, there is a significant negative relationship between processing fluency and donation decisions, mediated by perceived hardship of the help-seeker: when the authenticity of the information has been verified, people use processing fluency as a cue to infer the hardship experienced by the help-seeker; the more disfluent the processing, the greater the perceived hardship, and the higher the intention to donate.

    Given these findings, we bridge the literature on processing fluency and individual moral behaviors. Our findings also provide practical implications for relevant stakeholders (e.g., platforms, charities, policymakers, and individual help-seekers): digital philanthropy platforms should consider information format, not just information content, when presenting help-seeking information to stimulate prosocial behavior.

    The application of artificial intelligence methods in examining elementary school students’ academic cheating on homework and its key predictors
    ZHAO Li, ZHENG Yi, ZHAO Junbang, ZHANG Rui, FANG Fang, FU Genyue, LEE Kang
    2024, 56 (2):  239-254.  doi: 10.3724/SP.J.1041.2024.00239

    Background. Academic cheating has been a challenging problem for educators for centuries. It is well established that students often cheat not only on exams but also on homework. Despite recent changes in educational policy and practice, homework remains one of the most important academic tasks for elementary school students in China. However, most existing studies of academic cheating over the last century have focused almost exclusively on college and secondary school students, with few addressing the crucial elementary school period when academic integrity begins to form and develop. Further, most research has focused on cheating on exams, with little on homework cheating. The present research aimed to bridge this significant gap. We used advanced artificial intelligence methods to investigate the development of homework cheating in elementary school children and its key contributing factors, so as to provide a scientific basis for early interventions that promote academic integrity and reduce cheating.

    Method. We surveyed elementary school students from Grades 2 to 6 and obtained a valid sample of 2,098. The questionnaire included students' self-reported cheating on homework (the dependent variable). The predictor variables included children's ratings of (1) their perceptions of the severity of consequences for being caught cheating, (2) the extent to which they found cheating acceptable and the extent to which they thought their peers considered cheating acceptable, (3) their perceptions of the effectiveness of various strategies adults use to reduce cheating, and (4) how frequently they observed their peers engaging in cheating, as well as (5) several demographic variables. We used ensemble machine learning (an emerging artificial intelligence methodology) to capture the complex relations between homework cheating and the predictor variables, and used Shapley importance values to identify the factors that contribute most to children's decisions to cheat on homework.
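    The sketch below illustrates one way an ensemble-plus-Shapley analysis of this kind could be set up in Python with scikit-learn and the shap package. The feature names, simulated data, and choice of gradient boosting are assumptions made for illustration, not the authors' actual pipeline or variables.

        import numpy as np
        import pandas as pd
        import shap
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import cross_val_score

        # Hypothetical predictor names standing in for the survey variables.
        features = ["own_acceptability", "peer_cheating_frequency", "achievement_level",
                    "perceived_consequence_severity", "peer_acceptability", "grade", "gender"]

        # Simulated data for illustration only; the outcome is driven mainly by
        # the child's own acceptance of cheating and observed peer cheating.
        rng = np.random.default_rng(0)
        n = 2000
        X = pd.DataFrame(rng.normal(size=(n, len(features))), columns=features)
        logits = (1.2 * X["own_acceptability"] + 0.6 * X["peer_cheating_frequency"]
                  - 0.4 * X["achievement_level"])
        y = ((logits + rng.normal(size=n)) > 0).astype(int)

        # Cross-validated AUC, analogous to the mean AUC reported for the best models.
        model = GradientBoostingClassifier(random_state=0)
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(f"Mean cross-validated AUC: {auc:.2%}")

        # Shapley values quantify each predictor's contribution to the model's predictions.
        model.fit(X, y)
        shap_values = shap.TreeExplainer(model).shap_values(X)
        importance = np.abs(shap_values).mean(axis=0)  # mean |SHAP| per feature
        for name, value in sorted(zip(features, importance), key=lambda t: -t[1]):
            print(f"{name}: {value:.3f}")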

    Results. Overall, 33% of elementary school students reported having cheated on homework, and the rate of self-reported cheating increased with grade (see Figure 1). The best ensemble machine learning models accurately predicted students' homework cheating, with a mean Area Under the Curve (AUC) of 80.46% (see Figure 2). The Shapley importance values showed that all predictors contributed significantly to the high performance of the computational models, but their importance varied substantially. Children's cheating was most strongly predicted by their own beliefs about the acceptability of cheating (10.49%), how commonly and frequently they had observed their peers engaging in academic cheating (3.83%), and their achievement level (3.26%). Other predictors (1%-2%), such as children's beliefs about the severity of the possible consequences of cheating (e.g., being punished by one's teacher), whether cheating is considered acceptable by peers in general, and demographic characteristics, though significant, were not important predictors of elementary school children's homework cheating (see Figure 3 for details).

    Conclusion. This study is the first to examine elementary school students' homework cheating and to use ensemble machine learning algorithms to systematically investigate its key contributing factors. The results showed that homework cheating already exists in the elementary school period and increases with grade. The machine learning analyses revealed that elementary school students' homework cheating depends largely on their acceptance of cheating, their peers' homework cheating, and their own academic achievement level. These findings advance our theoretical understanding of the early development of academic integrity and dishonesty and form a scientific basis for developing early interventions to reduce academic cheating. In addition, this study shows that machine learning, as a core method of artificial intelligence, can be used effectively to analyze developmental data.
