TY - JOUR
T1 - FCER: a federated cloud-edge recommendation framework with cluster-based edge selection
AU - Wu, Jiang
AU - Yang, Yunchao
AU - Hu, Miao
AU - Zhou, Yipeng
AU - Wu, Di
PY - 2024/10/21
N2 - Traditional recommendation systems provide web services by modeling
user behavior characteristics, which also carries the risk of leaking
user privacy. To mitigate rising concerns about privacy leakage in
recommender systems, federated learning (FL) based recommendation has
received tremendous attention, as it can preserve data privacy by
conducting local model training on clients. However, devices (e.g.,
mobile phones) used by clients in a recommender system may have limited
computation and communication capacity, which can severely deteriorate
FL training efficiency. Moreover, offloading local training tasks to
the cloud can lead to privacy leakage and excessive pressure on the
cloud. To overcome this deficiency, we propose FCER, a novel federated
cloud-edge recommendation framework that offloads local training tasks
to powerful and trusted edge servers. The challenge of FCER lies in the
heterogeneity of edge servers, which makes it difficult for the
parameter server (PS) deployed in the cloud to judiciously select edge
servers for model training. To address this challenge, we divide the
FCER framework into two stages. In the first, pre-training stage, edge
servers expose their data statistical features, protected by local
differential privacy (LDP), to the PS so that edge servers can be
grouped into clusters. In the second, training stage, FCER activates a
single cluster in each communication round, ensuring that statistically
homogeneous edge servers are not repeatedly involved in FL. The PS
selects only a certain number of edge servers with the highest data
quality in each cluster for FL. Effective metrics are proposed to
dynamically evaluate the data quality of each edge server. Convergence
rate analysis is conducted to show the convergence of recommendation
algorithms in FCER. We also perform extensive experiments to
demonstrate that FCER remarkably outperforms competitive baselines by
3.85%-9.14% on HR@10 and 1.46%-11.77% on NDCG@10.
KW - Federated learning
KW - privacy protections
KW - recommender system
UR - http://www.scopus.com/inward/record.url?scp=85207396344&partnerID=8YFLogxK
DO - 10.1109/TMC.2024.3484493
M3 - Article
AN - SCOPUS:85207396344
SN - 1536-1233
JF - IEEE Transactions on Mobile Computing
ER -