Fairness in federated learning: trends, challenges, and opportunities

Noorain Mukhtiar, Adnan Mahmood, Quan Z. Sheng*

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

Abstract

At the intersection of cutting-edge technologies and privacy concerns, federated learning (FL), with its distributed architecture, stands at the forefront in a bid to facilitate collaborative model training across multiple clients while preserving data privacy. However, the applicability of FL systems is hindered by fairness concerns arising from numerous sources of heterogeneity that can introduce biases and undermine a system's effectiveness, with skewed predictions, reduced accuracy, and inefficient model convergence. This survey thus explores the diverse sources of bias, including, but not limited to, data, client, and model biases, and thoroughly discusses the strengths and limitations inherent in the array of state-of-the-art techniques utilized in the literature to mitigate such disparities in the FL training process. A comprehensive overview of the several notions, theoretical underpinnings, and technical aspects associated with fairness, and their adoption in FL-based multidisciplinary environments, is delineated. Furthermore, the salient evaluation metrics leveraged to measure fairness quantitatively are examined. Finally, exciting open research directions that have the potential to drive future advancements towards fairer FL frameworks, in turn offering a strong foundation for future research in this pivotal area, are envisaged.
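
To make the idea of measuring fairness quantitatively concrete, the sketch below computes two metrics commonly used in the FL fairness literature: the variance of per-client test accuracies (lower means more uniform performance) and Jain's fairness index. This is a minimal illustrative Python example under those assumptions, not an implementation drawn from the article itself, and the sample accuracies are invented for demonstration.

    import numpy as np

    def accuracy_variance(client_accuracies):
        """Variance of per-client test accuracies: lower values
        indicate more uniform performance across clients."""
        acc = np.asarray(client_accuracies, dtype=float)
        return float(np.var(acc))

    def jains_index(client_accuracies):
        """Jain's fairness index over per-client accuracies,
        (sum x)^2 / (n * sum x^2): ranges from 1/n (benefit
        concentrated on one client) to 1.0 (perfectly uniform)."""
        acc = np.asarray(client_accuracies, dtype=float)
        n = acc.size
        return float(acc.sum() ** 2 / (n * np.square(acc).sum()))

    # Hypothetical accuracies of five clients after federated training.
    accs = [0.91, 0.88, 0.62, 0.90, 0.87]
    print(f"variance     : {accuracy_variance(accs):.4f}")  # ~0.0119
    print(f"Jain's index : {jains_index(accs):.4f}")        # ~0.98

Note that Jain's index stays near 0.98 here even though one client lags at 62% accuracy, which is one reason accuracy variance (or worst-client accuracy) is often reported alongside it.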

Original language: English
Article number: 2400836
Number of pages: 21
Journal: Advanced Intelligent Systems
Early online date: 6 Apr 2025
DOIs
Publication status: E-pub ahead of print - 6 Apr 2025

Keywords

  • bias mitigation strategies
  • client selection
  • fairness evaluation
  • fairness-aware federated learning
