Gender bias in AI: implications for managerial practices

Ayesha Nadeem*, Olivera Marjanovic, Babak Abedin

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

Abstract

Artificial intelligence (AI) applications are now widely employed in almost every industry, affecting individuals and society. As many important decisions are automated by AI applications, fairness is fast becoming a vital concern in AI. Moreover, organizational applications of AI-enabled decision systems have exacerbated this problem by amplifying pre-existing societal biases and creating new types of bias. Notably, the related literature and industry press suggest that AI systems often exhibit gender bias; in particular, AI hiring tools are often biased against women. There is therefore a growing need to reconsider organizational managerial practices for AI-enabled decision systems in order to bring fairness to decision making. Additionally, organizations should develop fair and ethical internal structures, corporate strategies, and governance to manage the gender imbalance in AI-enabled recruitment processes. By systematically reviewing and synthesizing the literature, this paper presents a comprehensive overview of managerial practices adopted in relation to gender bias in AI. Our findings indicate that these practices include: better fairness governance; continuous training on fairness and ethics for all stakeholders; collaborative organizational learning on fairness and demographic characteristics; an interdisciplinary approach to, and understanding of, AI ethical principles; workplace diversity in managerial roles; and designing strategies for incorporating algorithmic transparency and accountability and ensuring a human in the loop. In this paper, we aim to contribute to the emerging IS literature on AI by presenting a consolidated picture and understanding of this phenomenon. Based on our findings, we indicate directions for future research in IS for the better development and use of AI systems.

Original language: English
Title of host publication: Responsible AI and analytics for an ethical and inclusive digitized society
Subtitle of host publication: 20th IFIP WG 6.11 Conference on e-Business, e-Services and e-Society, I3E 2021, Galway, Ireland, September 1–3, 2021, Proceedings
Editors: Denis Dennehy, Anastasia Griva, Nancy Pouloudi, Yogesh K. Dwivedi, Ilias Pappas, Matti Mäntymäki
Place of publication: Cham
Publisher: Springer, Springer Nature
Pages: 259-270
Number of pages: 12
ISBN (Electronic): 9783030854478
ISBN (Print): 9783030854461
Publication status: Published - 2021
Event: 20th IFIP Conference on e-Business, e-Services and e-Society (I3E 2021) - Galway, Ireland
Duration: 1 Sep 2021 – 3 Sep 2021

Publication series

Name: Lecture Notes in Computer Science
Volume: 12896
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 20th IFIP Conference on e-Business, e-Services and e-Society (I3E 2021)
Country: Ireland
City: Galway
Period: 1/09/21 – 3/09/21

Keywords

  • Analytics
  • Artificial Intelligence
  • Fairness
  • Gender
  • Machine learning
