Federated learning (FL) is a promising technology in the field of IoT intelligence. However, exchanging model-related data in federated learning may leak sensitive information about the participants. To address this problem, we propose Chain-PPFL, a novel privacy-preserving FL framework based on an innovative chained secure multi-party computing technique. Our scheme mainly leverages two mechanisms: a Single-Masking mechanism, which protects the information exchanged between participants, and a Chained-Communication mechanism, which enables the masked information to be transferred between participants along a serial chain. We conduct extensive simulation-based experiments on two public datasets (MNIST and CIFAR-100), comparing both training accuracy and leakage defence with other state-of-the-art schemes. Our experiments cover two data sample distributions (IID and Non-IID) and three training models (CNN, MLP and L-BFGS). The experimental results demonstrate that the Chain-PPFL scheme can achieve practical privacy preservation (equivalent to differential privacy with the privacy budget approaching zero) for federated learning at some communication cost, without impairing the accuracy or convergence speed of the training model.
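To make the two mechanisms concrete, the following is a minimal sketch of the chained single-masking idea in Python. It is illustrative only, not the authors' reference implementation: the function name `chained_masked_sum`, the scalar updates, and the server-chosen random token are all assumptions made for exposition. The server seeds the chain with a secret random mask, each participant adds its local update to the running value and forwards it to the next participant, and the server finally subtracts the mask to recover the exact sum, so no single transmitted message reveals any individual participant's update.

```python
import random

def chained_masked_sum(updates, mask_range=1e6):
    """Aggregate participants' updates along a serial chain (sketch).

    The server picks a secret random mask (token) and hands it to the
    first participant; each participant adds its own update to the
    received running value and forwards it, so every message on the
    chain is masked by the server's token plus a partial sum.
    """
    mask = random.uniform(-mask_range, mask_range)  # server-side secret token
    running = mask
    for w in updates:            # serial chain: node i adds its local update
        running = running + w    # node i only sees mask + partial sum
    return running - mask        # server removes its mask: exact sum, no noise

# Hypothetical scalar updates from four participants
updates = [0.2, -0.5, 1.1, 0.7]
total = chained_masked_sum(updates)
avg = total / len(updates)       # federated averaging over the chain
```

Because the mask cancels exactly at the server, the aggregate is unperturbed, which is consistent with the claim that accuracy and convergence speed are preserved; in the real protocol the updates would be model-weight vectors rather than scalars, and the chain ordering and token handling would follow the full Chain-PPFL design.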