心理科学进展 ›› 2022, Vol. 30 ›› Issue (5): 1078-1092. doi: 10.3724/SP.J.1042.2022.01078
JIANG Luyuan1, CAO Limei1, QIN Xin1, TAN Ling2, CHEN Chen1, PENG Xiaofei1
Received: 2021-07-13
Online: 2022-05-15
Published: 2022-03-24
Contact: QIN Xin, E-mail: qinxin@sysu.edu.cn
Abstract:
Inequality is a foremost challenge for global social and economic development and a core obstacle to achieving the global Sustainable Development Goals. Artificial intelligence (AI) offers a new avenue for alleviating inequality and promoting social fairness. However, recent research has found that even when AI decisions are objectively fair and accurate, individuals may still perceive them as low in fairness. Accordingly, a growing body of research in recent years has examined the factors that shape fairness perceptions of AI decision-making. Yet this research remains fragmented, marked by inconsistent paradigms, unclear theory, and unspecified mechanisms, which hinders cross-disciplinary dialogue and prevents researchers and practitioners from forming a systematic understanding of fairness perceptions of AI decision-making. Through a systematic review, existing research can be divided into two categories: (1) research on fairness perceptions of single AI decision-making, which focuses on how AI characteristics and individual characteristics influence individuals' fairness perceptions of AI decisions; and (2) research on fairness perceptions of AI-human dyadic decision-making, which focuses on comparing individuals' fairness perceptions of AI decisions with their perceptions of human decisions. Building on this review, future research could further explore directions such as the emotional mechanisms underlying fairness perceptions of AI decision-making.
JIANG Luyuan, CAO Limei, QIN Xin, TAN Ling, CHEN Chen, PENG Xiaofei. (2022). Fairness perceptions of artificial intelligence decision-making. Advances in Psychological Science, 30(5), 1078-1092.
Table 1. Summary of research on fairness perceptions of single AI decision-making

| Category | Number of studies | Subcategory | Mechanism | Authors and year |
|---|---|---|---|---|
| AI characteristics | 8 | Transparency | Understandability / need satisfaction | Binns et al., |
| | 3 | Controllability | Need satisfaction | Lee et al., |
| | 4 | Rule adherence | Need satisfaction | Chang et al., |
| | 5 | Appropriateness | Moral intuition | Harrison et al., |
| Individual characteristics | 6 | Demographic characteristics | Moral intuition / understandability | Grgić-Hlača et al., |
| | 6 | Personality and values | Moral intuition | Araujo et al., |
Table 2. Summary of research on fairness perceptions of AI-human dyadic decision-making

| Category | Number of studies | Mechanism category | Mechanism | Authors, year |
|---|---|---|---|---|
| Mechanical vs. social attributes | 11 | Affect | Emotion / human touch / benevolence | Helberger et al., |
| | | Interaction | Interactivity / interpersonal contact / respect | Acikgoz et al., |
| Simplified vs. complex attributes | 5 | Decontextualization | Decontextualization / quantification / tacit knowledge / reductionism | Höddinghaus et al., |
| Objective vs. subjective attributes | 6 | Consistency | Consistency | Howard et al., |
| | | Neutrality | Neutrality | Marcinkowski et al., |
| | | Responsibility attribution | Intentionality attribution | 宋晓兵, 何夏楠, |
[1] 曹培杰. (2020). 人工智能教育变革的三重境界. 教育研究, 481, 143-150.
[2] 陈晨, 秦昕, 谭玲, 卢海陵, 周汉森, 宋博迪. (2020). 授权型领导-下属自我领导匹配对下属情绪衰竭和工作绩效的影响. 管理世界, 36(12), 145-162.
[3] 陈晨, 张昕, 孙利平, 秦昕, 邓惠如. (2020). 信任以稀为贵?下属感知被信任如何以及何时导致反生产行为. 心理学报, 52(3), 329-344.
[4] 房鑫, 刘欣. (2019). 论人工智能时代人力资源管理面临的机遇和挑战. 山东行政学院学报, 167, 104-109.
[5] 郭秀艳, 郑丽, 程雪梅, 刘映杰, 李林. (2017). 不公平感及相关决策的认知神经机制. 心理科学进展, 25(6), 903-911.
[6] 李超平, 时勘. (2003). 分配公平与程序公平对工作倦怠的影响. 心理学报, 35(5), 677-684.
[7] 李晔, 龙立荣, 刘亚. (2002). 组织公平感的形成机制研究进展. 人类工效学, 8(1), 38-41.
[8] 秦昕, 薛伟, 陈晨, 刘四维, 邓惠如. (2019). 为什么领导做出公平行为: 综述与未来研究方向. 管理学季刊, 4(4), 39-62.
[9] 宋晓兵, 何夏楠. (2020). 人工智能定价对消费者价格公平感知的影响. 管理科学, 33(5), 3-16.
[10] 王芹, 白学军, 郭龙健, 沈德立. (2012). 负性情绪抑制对社会决策行为的影响. 心理学报, 44(5), 690-697.
[11] 吴燕, 周晓林. (2012). 公平加工的情境依赖性: 来自ERP的证据. 心理学报, 44(6), 797-806.
[12] 谢洪明, 陈亮, 杨英楠. (2019). 如何认识人工智能的伦理冲突?--研究回顾与展望. 外国经济与管理, 41(10), 109-124.
[13] 谢小云, 左玉涵, 胡琼晶. (2021). 数字化时代的人力资源管理: 基于人与技术交互的视角. 管理世界, 37(1), 200-216+13.
[14] 徐鹏, 徐向艺. (2020). 人工智能时代企业管理变革的逻辑与分析框架. 管理世界, 36(1), 122-129.
[15] 杨文琪, 金盛华, 何苏日那, 张潇雪, 范谦. (2015). 非人化研究: 理论比较及其应用. 心理科学进展, 23(7), 1267-1279.
[16] 张志学, 赵曙明, 施俊琦, 秦昕, 贺伟, 赵新元, … 吴刚. (2021). 数字经济下组织管理研究的关键科学问题--第254期“双清论坛”学术综述. 中国科学基金, 35(5), 774-781.
[17] 郑功成. (2009). 中国社会公平状况分析--价值判断、权益失衡与制度保障. 中国人民大学学报, 23(2), 2-11.
[18] 周浩, 龙立荣. (2007). 公平敏感性研究述评. 心理科学进展, 15(4), 702-707.
[19] Acikgoz Y., Davison K. H., Compagnone M., & Laske M. (2020). Justice perceptions of artificial intelligence in selection. International Journal of Selection and Assessment, 28(4), 399-416. doi: 10.1111/ijsa.12306
[20] Adams J. S. (1965). Inequity in social exchange. Advances in Experimental Social Psychology, 2, 267-299.
[21] Araujo T., Helberger N., Kruikemeier S., & de Vreese C. H. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society, 35(3), 611-623.
[22] Balasubramanian N., Ye Y., & Xu M. (in press). Substituting human decision-making with machine learning: Implications for organizational learning. Academy of Management Review. Advance online publication. https://doi.org/10.5465/amr.2019.0470
[23] Barabas C., Doyle C., Rubinovitz J., & Dinakar K. (2020, January). Studying up: Reorienting the study of algorithmic fairness around issues of power. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain.
[24] Bigman Y. E., & Gray K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34. doi: 10.1016/j.cognition.2018.08.003
[25] Bigman Y. E., Yam K. C., Marciano D., Reynolds S. J., & Gray K. (in press). Threat of racial and economic inequality increases preference for algorithm decision-making. Computers in Human Behavior. Advance online publication. https://dx.doi.org/10.1016/j.chb.2021.106859
[26] Binns R., van Kleek M., Veale M., Lyngs U., Zhao J., & Shadbolt N. (2018, April). ‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montréal, Canada.
[27] Brockner J., Fishman A. Y., Reb J., Goldman B., Spiegel S., & Garden C. (2007). Procedural fairness, outcome favorability, and judgments of an authority's responsibility. Journal of Applied Psychology, 92(6), 1657-1671. pmid: 18020803
[28] Burton J. W., Stein M. K., & Jensen T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220-239. doi: 10.1002/bdm.2155
[29] Chang M. L., Pope Z., Short E. S., & Thomaz A. L. (2020, August). Defining fairness in human-robot teams. Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication, Virtual Conference.
[30] Cheng H. F., Stapleton L., Wang R., Bullock P., Chouldechova A., Wu Z. S. S., & Zhu H. (2021, May). Soliciting stakeholders’ fairness notions in child maltreatment predictive systems. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan.
[31] Choi I., Koo M., & Choi J. A. (2007). Individual differences in analytic versus holistic thinking. Personality and Social Psychology Bulletin, 33(5), 691-705. doi: 10.1177/0146167206298568
[32] Colquitt J. A., & Zipay K. P. (2015). Justice, fairness, and employee reactions. Annual Review of Organizational Psychology and Organizational Behavior, 2(1), 75-99. doi: 10.1146/annurev-orgpsych-032414-111457
[33] Dalenberg D. J. (2018). Preventing discrimination in the automated targeting of job advertisements. Computer Law & Security Review, 34(3), 615-627.
[34] Deutsch M. (1975). Equity, equality, and need: What determines which value will be used as the basis of distributive justice? Journal of Social Issues, 31(3), 137-149.
[35] Dodge J., Vera Liao Q., & Bellamy R. K. E. (2019, March). Explaining models: An empirical study of how explanations impact fairness judgment. Proceedings of the International Conference on Intelligent User Interfaces, Marina del Rey, CA.
[36] Fischhoff B., & Broomell S. B. (2020). Judgment and decision making. Annual Review of Psychology, 71, 331-355. doi: 10.1146/annurev-psych-010419-050747
[37] Glikson E., & Woolley A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660. doi: 10.5465/annals.2018.0057
[38] Graham J., Haidt J., Koleva S., Motyl M., Iyer R., Wojcik S. P., & Ditto P. H. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. In P. Devine & A. Plant (Eds.), Advances in experimental social psychology (Vol. 47, pp. 55-130). New York: Academic Press.
[39] Graham J., Nosek B. A., Haidt J., Iyer R., Koleva S., & Ditto P. H. (2011). Mapping the moral domain. Journal of Personality and Social Psychology, 101, 366-385. doi: 10.1037/a0021847
[40] Gray H. M., Gray K., & Wegner D. M. (2007). Dimensions of mind perception. Science, 315, 619. doi: 10.1126/science.1134475
[41] Grgić-Hlača N., Redmiles E. M., Gummadi K. P., & Weller A. (2018, April). Human perceptions of fairness in algorithmic decision making: A case study of criminal risk prediction. Proceedings of the 2018 World Wide Web Conference on World Wide Web, Lyon, France.
[42] Grgić-Hlača N., Weller A., & Redmiles E. M. (2020, November). Dimensions of diversity in human perceptions of algorithmic fairness. Proceedings of the CSCW 2019 Workshop on Team and Group Diversity, Austin, Texas.
[43] Grgić-Hlača N., Zafar M. B., Gummadi K. P., & Weller A. (2018, February). Beyond distributive fairness in algorithmic decision making: Feature selection for procedurally fair learning. Proceedings of the 32nd AAAI Conference on Artificial Intelligence, New Orleans, Louisiana.
[44] Haidt J. (2001). The emotional dog and its rationalist tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814-834. doi: 10.1037/0033-295x.108.4.814
[45] Harrison G., Hanson J., Jacinto C., Ramirez J., & Ur B. (2020, January). An empirical study on the perceived fairness of realistic, imperfect machine learning models. Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency, Barcelona, Spain.
[46] Helberger N., Araujo T., & de Vreese C. H. (2020). Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making. Computer Law & Security Review, 39, Article 105456. https://doi.org/10.1016/j.clsr.2020.105456
[47] Höddinghaus M., Sondern D., & Hertel G. (2021). The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior, 116, Article 106635. https://doi.org/10.1016/j.chb.2020.106635
[48] Howard F. M., Gao C. A., & Sankey C. (2020). Implementation of an automated scheduling tool improves schedule quality and resident satisfaction. PLoS ONE, 15(8), Article e0236952. https://doi.org/10.1371/journal.pone.0236952
[49] Htun N. N., Lecluse E., & Verbert K. (2021, April). Perception of fairness in group music recommender systems. Proceedings of the 26th International Conference on Intelligent User Interfaces, College Station, TX, USA.
[50] Hutchinson B., & Mitchell M. (2019, January). 50 years of test (un)fairness: Lessons for machine learning. Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA.
[51] Kaibel C., Koch-Bayram I., Biemann T., & Mühlenbock M. (2019, July). Applicant perceptions of hiring algorithms - uniqueness and discrimination experiences as moderators. Proceedings of the Academy of Management Annual Meeting, Briarcliff Manor, NY.
[52] Karam E. P., Hu J., Davison R. B., Juravich M., Nahrgang J. D., Humphrey S. E., & Scott DeRue D. (2019). Illuminating the ‘face’ of justice: A meta-analytic examination of leadership and organizational justice. Journal of Management Studies, 56(1), 134-171. doi: 10.1111/joms.12402
[53] Kasinidou M., Kleanthous S., Barlas P., & Otterbacher J. (2021, March). I agree with the decision, but they didn't deserve this: Future developers' perception of fairness in algorithmic decisions. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada.
[54] Langer M., König C. J., Back C., & Hemsing V. (in press). Trust in artificial intelligence: Comparing trust processes between human and automated trustees in light of unfair bias. PsyArXiv Preprints. https://doi.org/10.31234/osf.io/r9y3t
[55] Langer M., König C. J., & Fitili A. (2018). Information as a double-edged sword: The role of computer experience and information on applicant reactions towards novel technologies for personnel selection. Computers in Human Behavior, 81, 19-30. doi: 10.1016/j.chb.2017.11.036
[56] Langer M., König C. J., & Papathanasiou M. (2019). Highly automated job interviews: Acceptance under the influence of stakes. International Journal of Selection and Assessment, 27(3), 217-234. doi: 10.1111/ijsa.12246
[57] Langer M., König C. J., Sanchez D. R. P., & Samadi S. (2019). Highly automated interviews: Applicant reactions and the organizational context. Journal of Managerial Psychology, 35(4), 301-314. doi: 10.1108/JMP-09-2018-0402
[58] Lee M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), Article 2053951718756684. https://doi.org/10.1177/2053951718756684
[59] Lee M. K., & Baykal S. (2017, February). Algorithmic mediation in group decisions: Fairness perceptions of algorithmically mediated vs. discussion-based social division. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, Portland, OR.
[60] Lee M. K., Jain A., Cha H. J., Ojha S., & Kusbit D. (2019). Procedural justice in algorithmic fairness: Leveraging transparency and outcome control for fair algorithmic mediation. Proceedings of the ACM on Human-Computer Interaction, 3, 182-208.
[61] Lee M. K., & Rich K. (2021, May). Who is included in human perceptions of AI? Trust and perceived fairness around healthcare AI and cultural mistrust. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan.
[62] Leventhal G. S. (1976). The distribution of rewards and resources in groups and organizations. In L. Berkowitz & E. Walster (Eds.), Advances in experimental social psychology (Vol. 9, pp. 91-131). New York: Academic Press.
[63] Lind E. A. (2001). Fairness heuristic theory: Justice judgments as pivotal cognitions in organizational relations. In J. Greenberg & R. Cropanzano (Eds.), Advances in organizational justice (Vol. 1, pp. 56-88). Stanford, CA: Stanford University Press.
[64] Lindebaum D., & Ashraf M. (in press). The ghost in the machine, or the ghost in organizational theory? A complementary view on the use of machine learning. Academy of Management Review. https://doi.org/10.5465/amr.2021.0036
[65] Lindebaum D., Vesa M., & den Hond F. (2020). Insights from “The Machine Stops” to better understand rational assumptions in algorithmic decision making and its implications for organizations. Academy of Management Review, 45(1), 247-263. doi: 10.5465/amr.2018.0181
[66] Loehr A. Big data for HR: Can predictive analytics help decrease discrimination in the workplace? The Huffington Post. Retrieved March 23, 2015, from https://www.huffingtonpost.
[67] Longoni C., Bonezzi A., & Morewedge C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629-650. doi: 10.1093/jcr/ucz013
[68] Marcinkowski F., Kieslich K., Starke C., & Lünich M. (2020, January). Implications of AI (un-)fairness in higher education admissions: The effects of perceived AI (un-)fairness on exit, voice and organizational reputation. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain.
[69] Martínez-Miranda J., & Aldea A. (2005). Emotions in human and artificial intelligence. Computers in Human Behavior, 21(2), 323-341. doi: 10.1016/j.chb.2004.02.010
[70] Miller S. M., & Keiser L. R. (2021). Representative bureaucracy and attitudes toward automated decision making. Journal of Public Administration Research and Theory, 31(1), 150-165. doi: 10.1093/jopart/muaa019
[71] Nagtegaal R. (2021). The impact of using algorithms for managerial decisions on public employees' procedural justice. Government Information Quarterly, 38(1), Article 101536. https://doi.org/10.1016/j.giq.2020.101536
[72] Nass C., & Moon Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103. doi: 10.1111/0022-4537.00153
[73] Newman D. T., Fast N. J., & Harmon D. J. (2020). When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes, 160, 149-167. doi: 10.1016/j.obhdp.2020.03.008
[74] Nisbett R. E., Peng K., Choi I., & Norenzayan A. (2001). Culture and systems of thought: Holistic versus analytic cognition. Psychological Review, 108(2), 291-310. pmid: 11381831
[75] Noble S. M., Foster L. L., & Craig S. B. (2021). The procedural and interpersonal justice of automated application and resume screening. International Journal of Selection and Assessment. Advance online publication. https://doi.org/10.1111/ijsa.12320
[76] Nørskov S., Damholdt M. F., Ulhøi J. P., Jensen M. B., Ess C., & Seibt J. (2020). Applicant fairness perceptions of a robot-mediated job interview: A video vignette-based experimental survey. Frontiers in Robotics and AI, 7, Article 586263. https://doi.org/10.3389/frobt.2020.586263
[77] Nyarko J., Goel S., & Sommers R. (2020, October). Breaking taboos in fair machine learning: An experimental study. (Unpublished doctoral dissertation). Stanford University.
[78] Ötting S. K., & Maier G. W. (2018). The importance of procedural justice in human-machine interactions: Intelligent systems as new decision agents in organizations. Computers in Human Behavior, 89, 27-39. doi: 10.1016/j.chb.2018.07.022
[79] Pierson E. (2017). Gender differences in beliefs about algorithmic fairness. arXiv preprint. http://arxiv.org/abs/1712.09124v2
[80] Pierson E. (2018). Demographics and discussion influence views on algorithmic fairness. arXiv preprint. http://arxiv.org/abs/1712.09124
[81] Plane A. C., Redmiles E. M., Mazurek M. L., Tschantz M. C., & Assoc U. (2018, August). Exploring user perceptions of discrimination in online targeted advertising. Proceedings of the 26th USENIX Security Symposium, Vancouver, BC.
[82] Qin X., Chen C., Yam K. C., Cao L., Li W., Guan J., Zhao P., Dong X., & Lin Y. (2022). Adults still can’t resist: A social robot can induce normative conformity. Computers in Human Behavior, 127, Article 107041. https://doi.org/10.1016/j.chb.2021.107041
[83] Qin X., Huang M., Johnson R. E., Hu Q., & Ju D. (2018). The short-lived benefits of abusive supervisory behavior for actors: An investigation of recovery and work engagement. Academy of Management Journal, 61(5), 1951-1975. doi: 10.5465/amj.2016.1325
[84] Qin X., Ren R., Zhang Z., & Johnson R. E. (2015). Fairness heuristics and substitutability effects: Inferring the fairness of outcomes, procedures, and interpersonal treatment when employees lack clear information. Journal of Applied Psychology, 100(3), 749-766. doi: 10.1037/a0038084
[85] Qin X., Ren R., Zhang Z., & Johnson R. E. (2018). Considering self-interests and symbolism together: How instrumental and value-expressive motives interact to influence supervisors’ justice behavior. Personnel Psychology, 71(2), 225-253. doi: 10.1111/peps.12253
[86] Qin X., Yam K. C., Chen C., & Li W. (2021). Revisiting social robots and their impacts on conformity: Practical and ethical considerations. Science Robotics, eLetters. Retrieved October 25, 2021,
[87] Rupp D. E., & Cropanzano R. (2002). The mediating effects of social exchange relationships in predicting workplace outcomes from multifoci organizational justice. Organizational Behavior and Human Decision Processes, 89(1), 925-946. doi: 10.1016/S0749-5978(02)00036-5
[88] Saha D., Schumann C., Mcelfresh D., Dickerson J., Mazurek M., & Tschantz M. (2020, November). Measuring non-expert comprehension of machine learning fairness metrics. Proceedings of the International Conference on Machine Learning, Online Conference.
[89] Saxena N. A., Huang K., DeFilippis E., Radanovic G., Parkes D. C., & Liu Y. (2020). How do fairness definitions fare? Examining public attitudes towards algorithmic definitions of fairness. Artificial Intelligence, 283, Article 103238. https://doi.org/10.1145/3306618.3314248
[90] Schein C., & Gray K. (2018). The theory of dyadic morality: Reinventing moral judgment by redefining harm. Personality and Social Psychology Review, 22(1), 32-70. doi: 10.1177/1088868317698288
[91] Schlicker N., Langer M., Ötting S., Baum K., König C. J., & Wallach D. (in press). What to expect from opening up ‘Black Boxes’? Comparing perceptions of justice between human and automated agents. Computers in Human Behavior. Advance online publication. https://doi.org/10.1016/j.chb.2021.106837
[92] Schoeffer J., Machowski Y., & Kuehl N. (2021, April). A study on fairness and trust perceptions in automated decision making. Proceedings of the ACM IUI 2021 Workshops, College Station, USA.
[93] Shin D. (2010). The effects of trust, security and privacy in social networking: A security-based approach to understand the pattern of adoption. Interacting with Computers, 22(5), 428-438. doi: 10.1016/j.intcom.2010.05.001
[94] Shin D. (2020). User perceptions of algorithmic decisions in the personalized AI system: Perceptual evaluation of fairness, accountability, transparency, and explainability. Journal of Broadcasting & Electronic Media, 64(4), 541-565.
[95] Shin D. (2021a). A cross-national study on the perception of algorithm news in the East and the West. Journal of Global Information Management, 29(2), 77-101. doi: 10.4018/JGIM.2021030105
[96] Shin D. (2021b). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, Article 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
[97] Shin D. (in press). The perception of humanness in conversational journalism: An algorithmic information-processing perspective. New Media & Society. Advance online publication. https://doi.org/10.1177/1461444821993801
[98] Shin D., & Park Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277-284. doi: 10.1016/j.chb.2019.04.019
[99] Smith Y. N. (2020). The African American perception of body-worn cameras on police performance and fairness (Unpublished doctoral dissertation). Capella University, Minneapolis.
[100] Srivastava M., Heidari H., & Krause A. (2019, August). Mathematical notions vs. human perception of fairness: A descriptive approach to fairness for machine learning. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Anchorage, AK, USA.
[101] Suen H. Y., Chen Y. C., & Lu S. H. (2019). Does the use of synchrony and artificial intelligence in video interviews affect interview ratings and applicant attitudes? Computers in Human Behavior, 98, 93-101. doi: 10.1016/j.chb.2019.04.012
[102] Sundar S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 73-100). Cambridge, MA: MIT Press.
[103] Tene O., & Polonetsky J. (2015). A theory of creepy: Technology, privacy, and shifting social norms. Yale Journal of Law and Technology, 16(1), 59-102.
[104] Uhde A., Schlicker N., Wallach D. P., & Hassenzahl M. (2020, April). Fairness and decision-making in collaborative shift scheduling systems. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI.
[105] van Berkel N., Goncalves J., Hettiachchi D., Wijenayake S., Kelly R. M., & Kostakos V. (2019). Crowdsourcing perceptions of fair predictors for machine learning: A recidivism case study. Proceedings of the ACM on Human-Computer Interaction, 3, 28-46.
[106] van Berkel N., Goncalves J., Russo D., Hosio S., & Skov M. B. (2021, May). Effect of information presentation on fairness perceptions of machine learning predictors. Proceedings of the CHI Conference on Human Factors in Computing Systems, Yokohama, Japan.
[107] Vinuesa R., Azizpour H., Leite I., Balaam M., Dignum V., Domisch S., … Nerini F. F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1), 1-10. doi: 10.1038/s41467-019-13993-7
[108] von Bertalanffy L. (1972). The history and status of general systems theory. Academy of Management Journal, 15, 407-426.
[109] Wang A. J. (2018). Procedural justice and risk-assessment algorithms. SSRN Electronic Journal, Article 3170136. http://dx.doi.org/10.2139/ssrn.3170136
[110] Wang R., Harper F. M., & Zhu H. (2020, April). Factors influencing perceived fairness in algorithmic decision-making: Algorithm outcomes, development procedures, and individual differences. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI.
[111] World Social Report. (2020). Inequality in a rapidly changing world. Retrieved March 23, 2020,