Abstract
Federated learning (FL) is a distributed machine learning paradigm in which clients collaboratively train models in a privacy-preserving manner. While centralized FL (CFL) suffers from single points of failure and performance bottlenecks, decentralized FL (DFL), which relies on inter-client communication, has emerged to eliminate the need for a central entity. However, without the coordination of a central server, heterogeneous data distributions across clients incline local models in DFL to diverge towards their local objectives, resulting in poor model accuracy. Moreover, each client in DFL must communicate with multiple neighbors, incurring a heavy communication load. To tackle these challenges, we propose a novel DFL framework called DFLStar, which improves DFL from two perspectives. First, to prevent local models from diverging significantly towards local data, DFLStar incorporates self-knowledge distillation, enhancing local training by assimilating knowledge from the aggregated model. Second, clients in DFLStar identify and select only the most informative neighbors (based on last-layer model similarity) for parameter exchange, thereby minimizing communication overhead. Our experimental results on two real datasets demonstrate that DFLStar significantly reduces both communication overhead and training time compared to traditional DFL algorithms when reaching a specified target accuracy. Furthermore, within a fixed training duration, DFLStar consistently achieves the highest model accuracy among the baselines.
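The two mechanisms named in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the choice of cosine similarity, the loss weight `alpha`, and the temperature `T` are assumptions for the sketch, and all function names are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened weight tensors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_neighbors(own_last_layer, neighbor_last_layers, k):
    """Rank neighbors by last-layer weight similarity and keep the top-k.

    neighbor_last_layers: dict mapping neighbor id -> last-layer weights.
    """
    scores = {nid: cosine_similarity(own_last_layer, w)
              for nid, w in neighbor_last_layers.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def softmax(z, T=1.0):
    """Temperature-scaled, numerically stable softmax."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Cross-entropy on labels plus KL divergence towards the aggregated
    ("teacher") model's softened outputs, as in standard knowledge distillation."""
    p_s = softmax(student_logits)
    ce = -np.mean(np.log(p_s[np.arange(len(labels)), labels] + 1e-12))
    p_t = softmax(teacher_logits, T)
    log_ratio = np.log((p_t + 1e-12) / (softmax(student_logits, T) + 1e-12))
    kl = np.mean((p_t * log_ratio).sum(axis=-1))
    return (1 - alpha) * ce + alpha * (T ** 2) * kl
```

In this sketch, each client would call `select_neighbors` once per round to decide which peers to exchange parameters with, then train locally with `distillation_loss`, using the aggregated model's logits as the teacher signal.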
Original language | English |
---|---|
Title of host publication | CIKM '24 |
Subtitle of host publication | proceedings of the 33rd ACM International Conference on Information and Knowledge Management |
Place of Publication | New York |
Publisher | Association for Computing Machinery |
Pages | 2108-2117 |
Number of pages | 10 |
ISBN (Electronic) | 9798400704369 |
DOIs | |
Publication status | Published - 2024 |
Event | 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 - Boise, United States |
Duration | 21 Oct 2024 → 25 Oct 2024 |
Conference
Conference | 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 |
---|---|
Country/Territory | United States |
City | Boise |
Period | 21/10/24 → 25/10/24 |
Bibliographical note
Copyright the Author(s) 2024. Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.
Keywords
- Decentralized Federated Learning
- Participant Selection
- Knowledge Distillation
Projects per year
- DP23: Towards Generalisable and Unbiased Dynamic Recommender Systems
  Sheng, M. & Yao, L.
  1/05/23 → 30/04/26
  Project: Research
- Building Intelligence into Online Video Services by Learning User Interests
  29/06/18 → 28/06/21
  Project: Research