FedDPG: an adaptive yet efficient prompt-tuning approach in federated learning settings

Ali Shakeri, Wei Emma Zhang*, Amin Beheshti, Weitong Chen, Jian Yang, Lishan Yang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding; Conference proceeding contribution; peer-reviewed

Abstract

Pre-trained Language Models (PLMs) have demonstrated impressive performance on various NLP tasks. However, traditional fine-tuning methods for adapting PLMs to downstream tasks entail significant computational overhead. Prompt-tuning has emerged as an efficient alternative that prepends a small number of trainable parameters to the input sequence and updates only them while the PLM's parameters remain frozen. However, the learned prompts remain fixed for all inputs, limiting the model's flexibility. Federated Learning (FL) has gained attention in recent years as a response to growing concerns around data privacy, yet challenges such as clients' communication and computation constraints still need to be addressed. To mitigate these challenges, this paper introduces the Federated Dynamic Prompt Generator (FedDPG), which incorporates a dynamic prompt generator network that produces context-aware prompts conditioned on the given input, ensuring flexibility and adaptability while preserving data privacy in federated learning settings. Experiments on three NLP benchmark datasets show that FedDPG outperforms state-of-the-art parameter-efficient fine-tuning methods in global model performance while significantly reducing computation time and the number of parameters transmitted over the FL network.
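The core idea in the abstract can be illustrated with a minimal numpy sketch. Note this is an assumption-laden toy, not the paper's actual architecture: the generator layout, shapes, pooling choice, and function names (`generate_prompts`, `fedavg`) are all illustrative. It shows (a) a small generator network mapping a pooled input representation to input-dependent prompt vectors that are prepended to the frozen PLM's token embeddings, and (b) FedAvg-style server aggregation touching only the lightweight generator parameters, which is why little data crosses the FL network.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_prompts = 8, 4            # embedding dim and prompt length (illustrative)

def init_generator():
    # One linear layer: pooled input (d_model,) -> n_prompts * d_model outputs.
    return {"W": rng.normal(0, 0.02, (d_model, n_prompts * d_model)),
            "b": np.zeros(n_prompts * d_model)}

def generate_prompts(params, token_embeds):
    # Condition on the input: mean-pool the token embeddings, then project,
    # so different inputs yield different prompts (unlike static prompt-tuning).
    pooled = token_embeds.mean(axis=0)                   # (d_model,)
    flat = np.tanh(pooled @ params["W"] + params["b"])   # (n_prompts * d_model,)
    return flat.reshape(n_prompts, d_model)

def prepend_prompts(params, token_embeds):
    # The frozen PLM would consume this augmented sequence.
    return np.vstack([generate_prompts(params, token_embeds), token_embeds])

def fedavg(client_params):
    # Server step: average only the generator's weights; the PLM never moves.
    return {k: np.mean([p[k] for p in client_params], axis=0)
            for k in client_params[0]}

# Two clients train local generators; the server averages them into a global one.
clients = [init_generator() for _ in range(2)]
global_params = fedavg(clients)
x = rng.normal(size=(5, d_model))    # 5 input tokens
augmented = prepend_prompts(global_params, x)
print(augmented.shape)               # 4 prompt vectors + 5 tokens -> (9, 8)
```

The design point the sketch captures: only `W` and `b` are trainable and communicated, so per-round traffic is O(`d_model` × `n_prompts` × `d_model`) rather than the size of the PLM.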

Original language: English
Title of host publication: Advances in Knowledge Discovery and Data Mining
Subtitle of host publication: 29th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2025, Sydney, NSW, Australia, June 10–13, 2025, Proceedings, Part V
Editors: Xintao Wu, Myra Spiliopoulou, Can Wang, Vipin Kumar, Longbing Cao, Yanqiu Wu, Yu Yao, Zhangkai Wu
Place of Publication: Singapore
Publisher: Springer, Springer Nature
Pages: 40-51
Number of pages: 12
ISBN (Electronic): 9789819681860
ISBN (Print): 9789819681853
DOIs
Publication status: Published - 2025
Event: 29th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2025 - Sydney, Australia
Duration: 10 Jun 2025 – 13 Jun 2025

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 15874
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 29th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2025
Country/Territory: Australia
City: Sydney
Period: 10/06/25 – 13/06/25

Keywords

  • Prompt-tuning
  • Federated Learning
  • Text Classification
