ISSN 0439-755X
CN 11-1911/B

Acta Psychologica Sinica ›› 2022, Vol. 54 ›› Issue (9): 1076-1092. doi: 10.3724/SP.J.1041.2022.01076

• Reports of Empirical Studies •

Algorithmic discrimination causes less desire for moral punishment than human discrimination

XU Liying1, YU Feng2, PENG Kaiping3   

  1. School of Marxism, Tsinghua University, Beijing 100084, China
  2. Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430072, China
  3. Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
  • Published: 2022-09-25  Online: 2022-07-21

Abstract:

The application of algorithms is believed to help reduce discrimination in human decision-making, yet algorithmic discrimination still exists in real life. Is there, then, a difference between folk responses to human discrimination and algorithmic discrimination? Previous research has found that people's moral outrage at algorithmic discrimination is less than that at human discrimination. Few studies, however, have investigated people's behavioral tendencies toward algorithmic and human discrimination, especially whether there is a difference in their desire for moral punishment. Therefore, the present study aimed to compare people's desire to punish algorithmic discrimination and human discrimination, and to identify the underlying mechanism and boundary conditions behind the possible difference.

To achieve the research objectives, six experiments were conducted, involving various kinds of discrimination in daily life, including gender discrimination, educational background discrimination, ethnic discrimination, and age discrimination. In Experiments 1 and 2, participants were randomly assigned to two conditions (discrimination: algorithm vs. human), and their desire for moral punishment was measured. Additionally, the mediating role of free will belief was tested in Experiment 2. To demonstrate the robustness of our findings, the underlying mechanism (i.e., free will belief) was further examined in Experiments 3 and 4. Experiment 3 was a 2 (agent: algorithm vs. human) × 2 (free will belief: high vs. low) between-subjects design, and Experiment 4 was a single-factor (agent: human vs. algorithm with free will vs. algorithm without free will) between-subjects design. Experiments 5 and 6 were conducted to test the moderating role of anthropomorphism. Specifically, participants' tendency to anthropomorphize was measured in Experiment 5, and the anthropomorphism of the algorithm was manipulated in Experiment 6.

As predicted, the present research found that, compared with human discrimination, people had less desire to punish algorithmic discrimination. The robustness of this result was demonstrated by the diversity of our stimuli and samples. In addition, we found that free will belief mediated the effect of discrimination type (algorithm vs. human) on the desire to punish. That is, the reason people had less desire to punish algorithmic discrimination was that they thought algorithms had less free will than humans. Finally, the results also demonstrated the moderating effect of anthropomorphism.

Specifically, the main statistics were as follows. In Experiment 1, an independent-samples t-test revealed that the desire for moral punishment in the human condition (M = 5.29, SD = 0.99) was marginally significantly greater than in the algorithm condition (M = 4.97, SD = 1.34), t(170) = 1.82, p = 0.073, Cohen's d = 0.27. In Experiment 2, an independent-samples t-test revealed that the desire for moral punishment in the human condition (M = 5.11, SD = 1.14) was significantly greater than in the algorithm condition (M = 4.60, SD = 1.54), t(170) = 2.44, p = 0.016, Cohen's d = 0.38. In addition, a bootstrapping mediation analysis (model 4, 5000 iterations) showed that the effect of agent on the desire for moral punishment was mediated by free will belief, b = -0.56, 95% CI = [-0.95, -0.21]. In Experiment 3, a 2 (agent: algorithm vs. human) × 2 (free will belief: high vs. low) between-subjects ANOVA revealed a significant main effect of agent, F(1, 201) = 4.01, p = 0.047, ηp² = 0.02, such that the desire for moral punishment in the human condition (M = 4.59, SD = 1.46) was greater than in the algorithm condition (M = 4.17, SD = 1.51). We also found a marginally significant main effect of free will belief, F(1, 201) = 3.83, p = 0.052, ηp² = 0.02, such that the desire for moral punishment in the high free will belief condition (M = 4.61, SD = 1.26) was greater than in the low free will belief condition (M = 4.17, SD = 1.67). Importantly, the interaction between agent and free will belief was also significant: the difference between the human and algorithm conditions was significant only when free will belief was high, F(1, 201) = 8.19, p = 0.005, ηp² = 0.04. In Experiment 4, a one-way ANOVA revealed that condition influenced the desire for moral punishment, F(2, 207) = 9.03, p < 0.001, ηp² = 0.08. Follow-up planned contrasts showed that the desire for moral punishment in the algorithm-without-free-will condition (M = 3.94, SD = 1.45) was less than in the algorithm-with-free-will condition (M = 4.56, SD = 1.62) and the human condition (M = 4.98, SD = 1.35), ps < 0.05. In Experiment 5, we found a significant interaction between agent and anthropomorphism, b = 0.16, SE = 0.06, t = 2.70, p = 0.008. A follow-up analysis revealed that there was no difference between conditions for people with a higher tendency to anthropomorphize (p = 0.295), whereas there was a significant difference for people with a lower tendency to anthropomorphize (b = -0.57, SE = 0.12, t = -4.82, p < 0.001). In Experiment 6, a one-way ANOVA revealed that condition influenced the desire for moral punishment, F(2, 204) = 12.60, p < 0.001, ηp² = 0.11. Follow-up planned contrasts showed that the desire for moral punishment in the human condition (M = 5.52, SD = 1.19) was greater than in the anthropomorphic algorithm condition (M = 4.97, SD = 1.27) and the non-anthropomorphic algorithm condition (M = 4.43, SD = 1.35), and the desire for moral punishment in the anthropomorphic algorithm condition was also greater than in the non-anthropomorphic algorithm condition, ps < 0.05.
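The two core analyses above (the independent-samples t-test with Cohen's d, and the percentile-bootstrap test of the indirect effect, analogous to PROCESS model 4 with 5000 iterations) can be sketched as follows. This is an illustrative sketch, not the authors' analysis script: the data are simulated to match the reported group sizes, means, and SDs of Experiment 2, and the mediator model (`free_will`) is entirely made up for demonstration.

```python
# Illustrative sketch (simulated data, not the study's data): an
# independent-samples t-test with pooled-SD Cohen's d, plus a
# percentile-bootstrap estimate of an indirect (mediation) effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated punishment-desire ratings (1-7 scale), human vs. algorithm,
# roughly matching Experiment 2 (n = 86 per cell is an assumption).
human = rng.normal(5.11, 1.14, 86)
algorithm = rng.normal(4.60, 1.54, 86)

# Independent-samples t-test.
t, p = stats.ttest_ind(human, algorithm)

# Cohen's d using the pooled standard deviation.
n1, n2 = len(human), len(algorithm)
pooled_sd = np.sqrt(((n1 - 1) * human.var(ddof=1) +
                     (n2 - 1) * algorithm.var(ddof=1)) / (n1 + n2 - 2))
d = (human.mean() - algorithm.mean()) / pooled_sd

# Percentile bootstrap of the indirect effect a*b
# (agent -> free will belief -> punishment), 5000 resamples.
agent = np.r_[np.ones(n1), np.zeros(n2)]        # 1 = human, 0 = algorithm
free_will = 3.5 + 0.8 * agent + rng.normal(0, 1, n1 + n2)  # made-up mediator
punish = np.r_[human, algorithm]

def indirect(idx):
    # a-path: agent -> mediator (slope of a simple regression)
    a = np.polyfit(agent[idx], free_will[idx], 1)[0]
    # b-path: mediator -> outcome, controlling for agent (OLS via lstsq)
    X = np.c_[np.ones(len(idx)), agent[idx], free_will[idx]]
    b = np.linalg.lstsq(X, punish[idx], rcond=None)[0][2]
    return a * b

draws = [indirect(rng.integers(0, n1 + n2, n1 + n2)) for _ in range(5000)]
ci_low, ci_high = np.percentile(draws, [2.5, 97.5])
```

The indirect effect is deemed significant when the bootstrap confidence interval `[ci_low, ci_high]` excludes zero, which is the same decision rule reported for the paper's model-4 mediation analysis.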

These results enrich the literature on algorithmic discrimination as well as moral punishment from the perspective of social psychology. First, this research explored people's behavioral tendency toward algorithmic discrimination by focusing on the desire for moral punishment, which contributes to a better understanding of people's responses to algorithmic discrimination. Second, the results are consistent with previous studies on people's mind perception of artificial intelligence. Third, the findings add evidence that free will belief has a significant impact on moral punishment.

Key words: algorithm, algorithmic discrimination, moral punishment, free will belief, anthropomorphism