The power of bias: optimizing client selection in federated learning with heterogeneous differential privacy

Jiating Ma, Yipeng Zhou, Qi Li, Quan Z. Sheng, Laizhong Cui*, Jiangchuan Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

To preserve data privacy, the federated learning (FL) paradigm has emerged, in which clients expose only model gradients rather than original data during model training. To further protect model gradients in FL, differentially private federated learning (DPFL) incorporates differentially private (DP) noise to obfuscate gradients before they are exposed. Yet an essential but largely overlooked problem in DPFL is the heterogeneity of clients' privacy requirements, which can vary significantly across clients and greatly complicates client selection: both the data quality of a client and the influence of its DP noise must be taken into account when selecting clients. To address this problem, we conduct a convergence analysis of DPFL under heterogeneous privacy budgets, a generic client selection strategy, popular DP mechanisms, and convex loss functions. Based on this analysis, we formulate client selection as the problem of minimizing the loss function value in DPFL with heterogeneous privacy, which turns out to be a convex optimization problem that can be solved efficiently. Accordingly, we propose the DPFL-BCS (biased client selection) algorithm. Extensive experimental results on real datasets, under both convex and non-convex loss functions, indicate that DPFL-BCS remarkably improves model utility compared with state-of-the-art (SOTA) baselines.
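To make the setting concrete, below is a minimal Python sketch of the two ingredients the abstract describes: clients perturbing gradients with Gaussian DP noise calibrated to heterogeneous privacy budgets, and a server biasing client selection by the trade-off between data quality and noise. This is an illustrative assumption on our part, not the paper's DPFL-BCS algorithm or its convex program; the score function, clipping norm, and all parameter names (epsilon_i, clip_norm, n_select) are hypothetical.

```python
# Hypothetical sketch: heterogeneous-privacy DPFL with biased client selection.
# NOT the DPFL-BCS algorithm from the paper; the selection score is a simple
# heuristic that stands in for the paper's convex optimization.
import numpy as np


def privatize_gradient(grad, clip_norm, epsilon, delta, rng):
    """Clip the gradient and add Gaussian noise, using the standard
    Gaussian-mechanism calibration sigma = C * sqrt(2 ln(1.25/delta)) / eps."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=grad.shape)


def select_clients(data_sizes, epsilons, delta, clip_norm, n_select, rng):
    """Bias selection toward clients whose data utility outweighs their
    injected DP noise (assumed score: size / (1 + noise variance))."""
    sigmas = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / np.asarray(epsilons)
    scores = np.asarray(data_sizes, dtype=float) / (1.0 + sigmas ** 2)
    probs = scores / scores.sum()
    return rng.choice(len(data_sizes), size=n_select, replace=False, p=probs)


rng = np.random.default_rng(0)
grads = [rng.normal(size=10) for _ in range(5)]   # per-client local gradients
epsilons = [0.5, 1.0, 2.0, 4.0, 8.0]              # heterogeneous privacy budgets
chosen = select_clients([100, 200, 50, 400, 300], epsilons,
                        delta=1e-5, clip_norm=1.0, n_select=2, rng=rng)
noisy = [privatize_gradient(grads[i], 1.0, epsilons[i], 1e-5, rng) for i in chosen]
update = np.mean(noisy, axis=0)                   # server aggregates noisy gradients
```

Note the design point the abstract stresses: a client with a tight budget (small epsilon) injects high-variance noise, so even high-quality data may be down-weighted in selection.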

Original language: English
Journal: IEEE Transactions on Dependable and Secure Computing
Early online date: 22 May 2025
Publication status: E-pub ahead of print - 22 May 2025

Keywords

  • biased client selection
  • convergence rate
  • differentially private
  • federated learning
