Abstract
Recently, federated learning (FL) has attracted intensive research interest because it preserves data privacy while allowing scattered clients to collaboratively train machine learning models. Decentralized federated learning (DFL) extends FL by allowing clients to aggregate model parameters directly with their neighbours. DFL is particularly suitable for dynamic systems, in which the neighbour set of each client changes over time. However, due to the restrictions of client trajectories and communication distances, individual clients struggle to exchange models sufficiently with others, resulting in poor model accuracy. To address this challenge, we propose DFL-DMS (DFL with Diversified Model Sources), an algorithm that diversifies the sources used for model aggregation and thereby improves model utility. Specifically, the models exchanged between DFL-DMS clients are jointly determined by their staleness scores and the bandwidth constraint. An asynchronous learning mode is adopted so that DFL-DMS clients can temporarily store and relay fresh models collected from different client sources, accelerating the propagation of rare models. Each client maintains a state vector to track the contribution weight of each source to its model aggregation, and an entropy-based metric (EBM) is optimized by clients in a distributed manner. Finally, extensive experiments on the MNIST and CIFAR-10 datasets demonstrate that DFL-DMS accelerates the convergence of DFL and significantly improves model accuracy compared with state-of-the-art baselines.
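The abstract's exact EBM definition and selection rule are given in the paper and are not reproduced here; the sketch below is only a minimal, hypothetical illustration of the general idea of an entropy-based diversity score over a per-source contribution vector guiding which source models a client requests under a bandwidth cap. All function names, the unit-contribution update, and the greedy selection rule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def entropy_of_contributions(state_vector):
    """Shannon entropy of the normalized per-source contribution weights.

    A higher value means the client's aggregation has absorbed updates from
    a more diverse set of sources; a lower value means it is dominated by a
    few neighbours.
    """
    w = np.asarray(state_vector, dtype=float)
    p = w / w.sum()
    p = p[p > 0]                      # ignore sources that contributed nothing
    return float(-(p * np.log(p)).sum())

def pick_models_to_request(state_vector, candidate_sources, bandwidth):
    """Greedily choose up to `bandwidth` candidate sources whose inclusion
    most increases the entropy of the contribution vector (hypothetical rule)."""
    chosen = []
    w = np.asarray(state_vector, dtype=float).copy()
    for _ in range(bandwidth):
        best, best_gain = None, -np.inf
        for s in candidate_sources:
            if s in chosen:
                continue
            trial = w.copy()
            trial[s] += 1.0           # assume one unit of contribution from source s
            gain = entropy_of_contributions(trial) - entropy_of_contributions(w)
            if gain > best_gain:
                best, best_gain = s, gain
        if best is None:
            break
        chosen.append(best)
        w[best] += 1.0
    return chosen

# Example: a client whose aggregation so far is dominated by sources 0 and 1
state = [5.0, 4.0, 0.5, 0.0, 0.0]
print(pick_models_to_request(state, candidate_sources=[1, 2, 3, 4], bandwidth=2))
# -> prefers the rarely seen sources 3 and 4
```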
Original language | English |
---|---|
Number of pages | 14 |
Journal | IEEE Transactions on Services Computing |
DOIs | |
Publication status | E-pub ahead of print - 13 Mar 2024 |
Keywords
- Computational modeling
- Convergence
- Data models
- Data privacy
- Decentralized Federated Learning
- Federated learning
- KL Divergence
- Privacy Protection
- Training
- Vectors
- Vehicular Networks