FCER: a federated cloud-edge recommendation framework with cluster-based edge selection

Jiang Wu, Yunchao Yang, Miao Hu*, Yipeng Zhou, Di Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Traditional recommender systems provide web services by modeling user behavior characteristics, which raises the risk of leaking user privacy. To mitigate rising concerns about privacy leakage in recommender systems, federated learning (FL) based recommendation has received tremendous attention, as it preserves data privacy by conducting model training locally on clients. However, the devices (e.g., mobile phones) used by clients in a recommender system may have limited computation and communication capacity, which can severely degrade FL training efficiency. Moreover, offloading local training tasks to the cloud can lead to privacy leakage and place excessive pressure on the cloud. To overcome these deficiencies, we propose FCER, a novel federated cloud-edge recommendation framework that offloads local training tasks to powerful and trusted edge servers. The challenge of FCER lies in the heterogeneity of edge servers, which makes it difficult for the parameter server (PS) deployed in the cloud to judiciously select edge servers for model training. To address this challenge, we divide the FCER framework into two stages. In the first (pre-training) stage, edge servers expose their data statistical features, protected by local differential privacy (LDP), to the PS so that the edge servers can be grouped into clusters. In the second (training) stage, FCER activates a single cluster in each communication round, ensuring that statistically homogeneous edge servers are not repeatedly involved in FL; within each active cluster, the PS selects only a certain number of edge servers with the highest data quality for FL. Effective metrics are proposed to dynamically evaluate the data quality of each edge server, and a convergence rate analysis establishes the convergence of the recommendation algorithms in FCER.
We also perform extensive experiments to demonstrate that FCER remarkably outperforms competitive baselines by $3.85\%-9.14\%$ on HR@10 and $1.46\%-11.77\%$ on NDCG@10.
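The two-stage, cluster-based edge selection described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the Laplace mechanism for LDP, the plain k-means clustering, the round-robin cluster activation, and all function names and parameters are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def ldp_perturb(stats, epsilon, sensitivity=1.0):
    # Illustrative LDP step: Laplace mechanism with noise scale sensitivity/epsilon.
    return stats + rng.laplace(0.0, sensitivity / epsilon, size=stats.shape)

def kmeans(points, k, iters=50):
    # Plain k-means (assumed clustering method) on the perturbed statistics.
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def select_edges(round_idx, labels, quality, k_per_round):
    # Stage 2 (assumed round-robin variant): activate one cluster per round,
    # then pick the top-quality edge servers inside that cluster.
    n_clusters = int(labels.max()) + 1
    active = round_idx % n_clusters
    members = np.flatnonzero(labels == active)
    ranked = members[np.argsort(quality[members])[::-1]]
    return ranked[:k_per_round]

# Toy run: 12 edge servers, each summarized by a 4-dim data statistic.
stats = rng.random((12, 4))
noisy = ldp_perturb(stats, epsilon=1.0)          # stage 1: LDP-protected statistics
labels = kmeans(noisy, k=3)                      # stage 1: group edges into clusters
quality = rng.random(12)                         # stand-in for the data-quality metric
chosen = select_edges(round_idx=0, labels=labels, quality=quality, k_per_round=2)
print("selected edge servers:", chosen)
```

In this sketch, the quality scores are random stand-ins; in FCER they would come from the paper's dynamic data-quality metrics, and the PS would aggregate only the models of the selected edge servers each round.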
Original language: English
Journal: IEEE Transactions on Mobile Computing
Early online date: 21 Oct 2024
Publication status: E-pub ahead of print, 21 Oct 2024

Keywords

  • Federated learning
  • Privacy protection
  • Recommender systems
