International Journal of Communication Publishes a Special Section on Rethinking Artificial Intelligence: Algorithmic Bias and Ethical Issues
Artificial Intelligence (AI) technologies and applications have rapidly penetrated all aspects of social, political, civic, and cultural life. Despite the exponential growth and development of AI, algorithmic bias toward gender, age, sexuality, race/ethnicity, and ideology prevails across digital media platforms, ranging from search engines and news sites to social media platforms and generative AI such as ChatGPT. This omnipresence of algorithmic bias produces systematic and repeated unfairness, discrimination, and inequality, privileging certain groups over others and reinforcing social and cultural biases.
While scholarship in this domain is burgeoning, the causes, components, and consequences of algorithmic bias and ethical issues remain underexplored. Guest-edited by Seungahn Nah and Jungseock Joo, this Special Section on Rethinking Artificial Intelligence: Algorithmic Bias and Ethical Issues addresses longstanding problems of algorithmic bias and ethics affecting individuals, groups, communities, and countries across socioeconomic and ideological spectrums. In doing so, the Special Section emphasizes the inextricably interwoven relationship among data and media bias, model bias, and social bias. That is, data and media bias leads to unbalanced training, which results in model bias. Model bias, in turn, reinforces data and media bias and produces discriminatory impacts on individuals and society. Human and societal bias then produces skewed representation and participation in data and media. This "vicious circle" reinforces bias in the development, use, and application of AI systems.
The Special Section encompasses a wide range of studies that revisit algorithmic bias in relation to gender, race, data, news, and decision-making. It empirically examines related topics such as algorithmic auditing, algorithmic framing, algorithm aversion, and algoactivism. The Special Section further stimulates intellectual dialogue among scholars, policymakers, and practitioners as they reconsider algorithmic bias in the age of AI.
We invite you to read these articles, published in the International Journal of Communication on January 1, 2024. Please log in to ijoc.org to read the papers of interest. We look forward to your feedback!
Mapping Scholarship on Algorithmic Bias: Conceptualization, Empirical Results, and Ethical Concerns
Seungahn Nah, Jun Luo, Jungseock Joo
How Gender and Type of Algorithmic Group Discrimination Influence Ratings of Algorithmic Decision Making
Sonja Utz
The Re-alienation of the Commons: Wikidata and the Ethics of "Free" Data
Zachary J. McDowell, Matthew A. Vetter
Rage Against the Artificial Intelligence? Understanding Contextuality of Algorithm Aversion and Appreciation
Tessa Oomen, João Gonçalves, Anouk Mols
Making Algorithms Public: Reimagining Auditing From Matters of Fact to Matters of Concern
R. Stuart Geiger, Udayan Tandon, Anoolia Gakhokidze, Lian Song, Lilly Irani
How Process Experts Enable and Constrain Fairness in AI-Driven Hiring
Ignacio Fernandez Cruz
Questioning Artificial Intelligence: How Racial Identity Shapes the Perceptions of Algorithmic Bias
Soojong Kim, Joomi Lee, Poong Oh
Algorithmic Bias or Algorithmic Reconstruction? A Comparative Analysis Between AI News and Human News
Seungahn Nah, Jun Luo, Seungbae Kim, Mo Chen, Renee Mitson, Jungseock Joo
___________________________________________________________________________________
Silvio Waisbord, Editor
Kady Bell-Garcia, Managing Editor
Chi Zhang, Managing Editor, Special Sections
Mark Mangoba-Agustin, Webmaster
Seungahn Nah and Jungseock Joo, Guest Editors
Please note that according to the latest Google Scholar statistics, IJoC ranks 7th among all Humanities journals and 8th among all Communication journals in the world, demonstrating the viability of open access scholarly publication at the highest level.