TY - JOUR
T1 - Incentive mechanism design of federated learning for recommendation systems in MEC
AU - Huang, Jiwei
AU - Ma, Bowen
AU - Wang, Ming
AU - Zhou, Xiaokang
AU - Yao, Lina
AU - Wang, Shoujin
AU - Qi, Lianyong
AU - Chen, Ying
PY - 2024/2
Y1 - 2024/2
N2 - With the rapid development of consumer electronics and communication technology, a large amount of data is generated by end users at the edge of the network. Modern recommendation systems take full advantage of such data to train their various artificial intelligence (AI) models. However, traditional centralized model training has to transmit all the data to cloud-based servers, which suffers from privacy leakage and resource shortages. Therefore, mobile edge computing (MEC) combined with federated learning (FL) is considered a promising paradigm to address these issues. Smart devices can provide data and computing resources for FL and transmit their local model parameters to a base station (BS) equipped with edge servers, where they are aggregated into a global model. Nevertheless, due to limited physical resources and the risk of privacy leakage, users (the owners of the devices) are unwilling to participate in FL voluntarily. To address this issue, we leverage game theory to propose an incentive mechanism based on a two-stage Stackelberg game that motivates users to contribute computing resources to FL. We define two utility functions, one for the users and one for the BS, and formulate the utility maximization problem. Through theoretical analysis, we obtain the Nash equilibrium strategy of the users and the Stackelberg equilibrium of the utility maximization problem. Furthermore, we propose a game-based incentive mechanism algorithm (GIMA) to achieve the Stackelberg equilibrium. Finally, simulation results are provided to verify the performance of our GIMA algorithm. The experimental results show that GIMA converges quickly and achieves higher utility values than other incentive methods.
UR - http://www.scopus.com/inward/record.url?scp=85179827442&partnerID=8YFLogxK
U2 - 10.1109/TCE.2023.3342187
DO - 10.1109/TCE.2023.3342187
M3 - Article
AN - SCOPUS:85179827442
SN - 0098-3063
VL - 70
SP - 2596
EP - 2607
JO - IEEE Transactions on Consumer Electronics
JF - IEEE Transactions on Consumer Electronics
IS - 1
ER -