Federated learning enables distributed machine learning without sharing the private, sensitive data of end devices. However, highly concurrent access to the server increases the transmission delay of model updates, and a local model whose gradient opposes that of the global model may be an unnecessary update, incurring substantial additional communication cost. To this end, we study an edge-based communication-optimization framework that reduces the number of end devices directly connected to the server while avoiding the upload of unnecessary local updates. Specifically, we cluster devices at the same network location and deploy mobile edge nodes at different network locations to serve as hubs for communication between the cloud and end devices, thereby avoiding the latency caused by high server concurrency. Meanwhile, we propose a model-cleaning method based on cosine similarity: if the similarity value is below a preset threshold, the local update is not uploaded to the mobile edge nodes, thus avoiding unnecessary communication. Experimental results show that, compared with traditional federated learning, the proposed scheme reduces the number of local updates by 60% and accelerates the convergence of the regression model by 10.3%.
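The cosine-similarity filter described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`cosine_similarity`, `should_upload`), the flattened-vector representation of an update, and the default threshold of 0.0 are all assumptions for demonstration; the paper only specifies that updates below a preset similarity threshold are not uploaded.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two flattened update vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def should_upload(local_update, global_update, threshold=0.0):
    """Upload only when the local update is sufficiently aligned
    with the global update direction (threshold is hypothetical)."""
    return cosine_similarity(local_update, global_update) >= threshold

# Illustration: an update roughly aligned with the global direction
# passes the filter; an opposite-gradient update is cleaned out.
global_u = np.array([1.0, 2.0, 3.0])
aligned = np.array([0.9, 2.1, 2.8])
opposite = -global_u

print(should_upload(aligned, global_u))   # True
print(should_upload(opposite, global_u))  # False
```

With a threshold of 0, any update whose gradient points in the opposite half-space of the global update (negative cosine similarity) is dropped before it reaches the mobile edge node, which is the behavior the abstract attributes to the model-cleaning step.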
|Journal||IEEE Transactions on Network Science and Engineering|
|Early online date||3 Jun 2021|
|Publication status||E-pub ahead of print - 3 Jun 2021|
Bibliographical note: Publisher Copyright: Copyright 2021 Elsevier B.V., All rights reserved.
- Collaborative work
- Communication optimization
- Computational modeling
- Data models
- Data privacy
- Federated learning
- Mobile edge nodes
- Model filtering