Privacy-preserving federated learning framework based on chained secure multi-party computing

Research output: Contribution to journal › Article


Federated learning is a promising new technology in the field of IoT intelligence. However, exchanging model-related data in federated learning may leak sensitive information about participants. To address this problem, we propose a novel privacy-preserving FL framework based on an innovative chained secure multi-party computing technique, named Chain-PPFL. Our scheme leverages two mechanisms: a Single-Masking mechanism, which protects the information exchanged between participants, and a Chained-Communication mechanism, which enables masked information to be passed between participants along a serial chain. We conduct extensive simulation-based experiments on two public datasets (MNIST and CIFAR-100), comparing both training accuracy and leakage defence against other state-of-the-art schemes. Our experiments cover two data sample distributions (IID and Non-IID) and three training models (CNN, MLP and L-BFGS). The experimental results demonstrate that Chain-PPFL achieves practical privacy preservation (equivalent to differential privacy with ε approaching zero) for federated learning at some communication cost, without impairing the accuracy or convergence speed of the training model.
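The chained masking idea described above can be illustrated with a minimal sketch. This is an assumption about the protocol's shape, not the paper's actual implementation: a leader node seeds the chain with a random mask, each participant adds its local update to the running masked sum and forwards it, and the mask is removed only at the end, so no intermediate node ever sees another participant's raw update. The function name `chain_ppfl_round` and the scalar-update simplification are hypothetical.

```python
import random

def chain_ppfl_round(updates, seed=None):
    """Sketch of one chained-masking aggregation round (hypothetical).

    Each participant adds its local model update to a running masked
    sum and passes it along the serial chain; the leader's random mask
    hides every intermediate partial sum from the nodes that see it.
    """
    rng = random.Random(seed)
    mask = rng.uniform(-1e6, 1e6)    # leader's secret single mask
    running = mask                   # chain starts with the mask alone
    for u in updates:                # node i only ever sees mask + partial sum
        running = running + u
    total = running - mask           # leader/server removes the mask at the end
    return total / len(updates)      # plain federated average of the updates
```

For example, `chain_ppfl_round([1.0, 2.0, 3.0])` recovers the average 2.0 even though every value transmitted along the chain is offset by the unknown mask. In the real scheme the updates are model weight vectors rather than scalars, but the masking algebra is the same element-wise.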
Original language: English
Journal: IEEE Internet of Things Journal
Early online date: 8 Sep 2020
Publication status: E-pub ahead of print - 8 Sep 2020
