On model transmission strategies in federated learning with lossy communications

Xiaoxin Su, Yipeng Zhou, Laizhong Cui*, Jiangchuan Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

35 Citations (Scopus)
41 Downloads (Pure)

Abstract

Recently, federated learning (FL) has received tremendous attention in both academia and industry. In FL, decentralized clients collaboratively complete model training by exchanging model updates with a parameter server through the Internet. This distributed design makes good use of localized data and preserves clients' privacy, but it also incurs heavy communication overhead. Existing studies on model update transmission have mostly focused on the bandwidth constraint of the communication channels. Today's Internet, however, is highly unreliable, and simply using the Transmission Control Protocol (TCP) leads to low network utilization under frequent losses. In this paper, we closely examine optimal transmission strategies in FL over the realistic lossy Internet. We systematically integrate model compression, forward error correction (FEC), and retransmission into Federated Learning with Lossy Communications (FedLC). We derive the convergence rate of FedLC under non-convex loss with the optimal transmission strategy, then decompose this non-convex problem and present effective practical solutions. We evaluate performance on public datasets with packet loss rates varying from 10% to 50%. Under a fixed training time budget, FedLC improves model accuracy by 3.91% on average, or reduces communication traffic by 34.27%-47.57%, compared with state-of-the-art baselines.
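To give a feel for why combining FEC with retransmission matters at these loss rates, the following sketch (not from the paper; a textbook idealization with hypothetical parameters k and r) compares the expected per-packet sends of pure retransmission (ARQ) against the decoding probability of an idealized MDS-style FEC code under i.i.d. packet loss:

```python
# Hedged illustration, not the FedLC algorithm itself.
# - Pure retransmission (ARQ): a lost packet is resent until received,
#   so the number of sends is geometric with success prob (1 - p).
# - Idealized FEC (MDS-code assumption): send k data + r repair packets;
#   decoding succeeds iff at least k of the k + r packets arrive.
from math import comb

def expected_arq_sends(p: float) -> float:
    """Expected transmissions per packet under loss rate p with ARQ."""
    return 1.0 / (1.0 - p)

def fec_decode_prob(p: float, k: int, r: int) -> float:
    """P(at least k of k+r packets arrive) under i.i.d. loss rate p."""
    n = k + r
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i)
               for i in range(k, n + 1))

# Loss rates matching the paper's evaluation range (10% to 50%).
for p in (0.1, 0.3, 0.5):
    print(f"p={p}: ARQ sends/packet = {expected_arq_sends(p):.2f}, "
          f"FEC(k=10, r=5) decode prob = {fec_decode_prob(p, 10, 5):.3f}")
```

At 50% loss, ARQ doubles traffic per packet while a fixed redundancy budget decodes only rarely, which motivates jointly tuning compression, FEC redundancy, and retransmission as FedLC does.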

Original language: English
Pages (from-to): 1173-1185
Number of pages: 13
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 34
Issue number: 4
Publication status: Published - Apr 2023

