A new look and convergence rate of federated multitask learning with Laplacian regularization

Canh T. Dinh*, Tung T. Vu, Nguyen H. Tran, Minh N. Dao, Hongyu Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Non-independent and identically distributed (non-IID) data distribution among clients is considered a key factor that degrades the performance of federated learning (FL). Several approaches to handling non-IID data, such as personalized FL and federated multitask learning (FMTL), are of great interest to the research community. In this work, we first formulate the FMTL problem using Laplacian regularization to explicitly leverage the relationships among clients' models for multitask learning. We then introduce a new view of the FMTL problem, showing for the first time that the formulated problem can be used for conventional FL and personalized FL. We also propose two algorithms, FedU and decentralized FedU (dFedU), to solve the formulated FMTL problem in communication-centralized and decentralized schemes, respectively. Theoretically, we prove that the convergence rates of both algorithms achieve linear speedup for strongly convex objectives and sublinear speedup of order 1/2 for nonconvex objectives. Experimentally, we show that our algorithms outperform the conventional algorithms FedAvg, FedProx, SCAFFOLD, and AFL in FL settings, MOCHA in FMTL settings, and pFedMe and Per-FedAvg in personalized FL settings.
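
The abstract does not reproduce the formulation itself; as a rough sketch (the symbols below are illustrative and not taken from this page), a Laplacian-regularized FMTL objective over $N$ clients with local losses $F_k$ and local models $w_k$ typically takes the form

$$
\min_{w_1,\dots,w_N} \; \sum_{k=1}^{N} F_k(w_k) \;+\; \frac{\eta}{2} \sum_{k=1}^{N} \sum_{\ell=1}^{N} a_{k\ell}\, \lVert w_k - w_\ell \rVert^2 ,
$$

where $a_{k\ell} \ge 0$ are assumed graph weights encoding pairwise relationships among clients (inducing a graph Laplacian) and $\eta > 0$ is an assumed regularization strength. Under this sketch, letting $\eta \to \infty$ forces all client models to coincide (resembling conventional FL), while small $\eta$ keeps the models distinct but related (resembling personalized FL), which is one plausible reading of the abstract's claim that the single formulation covers both settings.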

Original language: English
Pages (from-to): 8075-8085
Number of pages: 11
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 6
Early online date: 7 Dec 2022
Publication status: Published - Jun 2024
Externally published: Yes
