P-FedAvg: parallelizing federated learning with theoretical guarantees

Zhicong Zhong, Yipeng Zhou, Di Wu*, Xu Chen, Min Chen, Chao Li, Quan Z. Sheng

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

35 Citations (Scopus)

Abstract

With the growth of participating clients, the centralized parameter server (PS) seriously limits the scale and efficiency of Federated Learning (FL). A straightforward approach to scaling up an FL system is to construct a Parallel FL (PFL) system with multiple PSes. However, it is unclear whether PFL can actually achieve a faster convergence rate. Even if the answer is yes, it is non-trivial to design a highly efficient parameter-averaging algorithm for a PFL system. In this paper, we propose a completely parallelizable FL algorithm called P-FedAvg under the PFL architecture. P-FedAvg extends the well-known FedAvg algorithm by allowing multiple PSes to cooperatively train a learning model together. In P-FedAvg, each PS is responsible for only a fraction of the total clients, but the PSes mix model parameters in a carefully designed way so that the FL model converges well. Unlike heuristic-based algorithms, P-FedAvg comes with theoretical guarantees. To be rigorous, we conduct a theoretical analysis of the convergence rate of P-FedAvg and derive the optimal weights with which each PS mixes parameters with its neighbors. We also examine how the overlay topology formed by the PSes affects the convergence rate and robustness of a PFL system. Lastly, we perform extensive experiments on real datasets to verify our analysis and demonstrate that P-FedAvg converges significantly faster than traditional FedAvg and other competitive baselines. We believe that our work helps lay a theoretical foundation for building more efficient PFL systems.
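The round structure described in the abstract, where each PS first averages its own clients' updates (an ordinary FedAvg step) and then mixes parameters with neighboring PSes, can be sketched compactly. The Python below is a toy illustration written from the abstract alone, not the paper's implementation; the ring topology, the Metropolis-Hastings mixing weights, and the helper names (metropolis_weights, local_sgd, p_fedavg_round) are assumptions made for the example.

    # Minimal sketch of one P-FedAvg-style round (illustrative only).
    import numpy as np

    def metropolis_weights(adj):
        """Build a symmetric, doubly stochastic mixing matrix from a 0/1
        adjacency matrix using the standard Metropolis-Hastings rule."""
        n = adj.shape[0]
        deg = adj.sum(axis=1)
        W = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i != j and adj[i, j]:
                    W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
            W[i, i] = 1.0 - W[i].sum()  # rows (and columns) sum to 1
        return W

    def local_sgd(model, data, lr=0.1, steps=5):
        # Stand-in for a client's local training; a toy quadratic loss
        # ||model - data||^2 replaces the real objective.
        for _ in range(steps):
            model = model - lr * 2 * (model - data)
        return model

    def p_fedavg_round(ps_models, clients_per_ps, W):
        """One round: each PS runs FedAvg over its own clients, then all
        PSes mix parameters with their neighbors via W."""
        # Step 1: intra-PS FedAvg (each PS averages its clients' updates).
        aggregated = np.stack([
            np.mean([local_sgd(m.copy(), d) for d in data], axis=0)
            for m, data in zip(ps_models, clients_per_ps)
        ])
        # Step 2: inter-PS mixing (gossip step with mixing matrix W).
        return W @ aggregated

    # Toy run: 4 PSes on a ring, each serving 3 clients with scalar "data".
    rng = np.random.default_rng(0)
    adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
    W = metropolis_weights(adj)
    models = np.zeros((4, 1))
    clients = [rng.normal(i, 0.1, size=(3, 1)) for i in range(4)]
    for _ in range(20):
        models = p_fedavg_round(models, clients, W)
    print(models.ravel())  # PS models drift toward a common consensus

Because W is doubly stochastic, the mixing step preserves the average of the PS models while pulling them toward consensus; the paper's contribution, per the abstract, is deriving the optimal such weights and analyzing how the overlay topology behind adj affects convergence.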

Original language: English
Title of host publication: INFOCOM 2021 - IEEE Conference on Computer Communications
Place of publication: Piscataway, NJ
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 10
ISBN (electronic): 9780738112817
DOIs
Publication status: Published - 2021
Event: 40th IEEE Conference on Computer Communications, INFOCOM 2021 - Vancouver, Canada
Duration: 10 May 2021 – 13 May 2021

Publication series

Name: Proceedings - IEEE INFOCOM
ISSN (print): 0743-166X
ISSN (electronic): 2641-9874

Conference

Conference: 40th IEEE Conference on Computer Communications, INFOCOM 2021
Country/Territory: Canada
City: Vancouver
Period: 10/05/21 – 13/05/21

Keywords

  • Parallel Federated Learning
  • Convergence rate
  • Network topology
  • Mixing matrix
