Mirror world or bias-free garden: Exploring ethical goals and practices in the context of algorithmic discrimination
HUANG Yangkun YU Yayun
Author information
Huang Yangkun is a Ph.D. student at the School of Journalism and Communication, Tsinghua University, China. Email: huangyk22@mails.tsinghua.edu.cn.
Yu Yayun (corresponding author) is also a Ph.D. student at the School of Journalism and Communication, Tsinghua University, China. Email: yuyy22@mails.tsinghua.edu.cn.
History
Published
2023-10-23
Issue Date
2024-01-25
Abstract
Algorithmic discrimination poses a danger to individuals’ digital survival. Based on semi-structured interviews with 22 algorithm engineers, this study explores how engineers’ ethical values and judgments become embedded in discriminatory algorithms from the perspective of the social construction of technology. The study finds that algorithmic discrimination can be attributed to engineers’ understanding and pursuit of reality and accuracy: in building the mirror world, they rely on data records, social reality, and statistical causality, ultimately reinforcing discrimination. Moreover, engineers’ goal of algorithmic fairness is constantly squeezed between their ethical goal of seeking reality and accuracy and certain business goals. Engineers are also trained and work in an orientation under which building a bias-free digital garden is not an obligatory target for them. Furthermore, quantifying ethical concepts such as fairness remains a technical obstacle for algorithmic workers. All of this makes “algorithms for good” a flexible choice for engineers, ultimately shaping their debiasing practices. This study provides an empirical window for understanding algorithmic discrimination, as well as detailed, professional insights into the construction of biased technologies in the Chinese context.
HUANG Yangkun, YU Yayun. Mirror world or bias-free garden: Exploring ethical goals and practices in the context of algorithmic discrimination. Chinese Journal of Journalism & Communication. 2023, 45(10): 91-111.
Funding
This research is supported by the Major Project of the National Social Science Fund of China, “Study on Leading Information Values in the Age of Intelligence” (No. 18ZDA307).