Explore and Cure: Unveiling Sample Effectiveness with Context-Aware Federated Prompt Tuning

Tao Guo, Song Guo*, Junxiao Wang

*Corresponding author for this work

Research output: Contribution to journal › Journal Article › peer-review

1 Citation (Scopus)

Abstract

Using pre-trained vision-language models like CLIP with federated training of prompts has shown great potential in federated learning (FL), offering significant benefits in computation, communication, and privacy over existing frameworks. However, existing research overlooks the internal mechanisms underlying federated prompt tuning and adheres to the traditional context-unaware tuning mechanism. Our experiments, by contrast, demonstrate that federated prompting is a data-efficient but data-sensitive paradigm; the samples involved in the prompt-tuning process are therefore of significant importance. To address this issue, we propose Context-aware Federated Prompt Tuning (CaFPT), which facilitates the retrieval process by conditioning on the examples capable of activating the most pertinent knowledge inside the pre-trained models, using information theory. Moving in this direction steers the behavior of pre-trained neurons precisely and improves performance on the local task. Informative vectors are built by pruning clients' training data based on their V-usable information. The study shows that these vectors can be updated and combined through operations like FedAVG, and the resulting model's behavior is steered accordingly across multiple clients' tasks. Extensive experiments demonstrate that informative vectors offer promising robustness, making them a simple yet effective way to enhance the performance of federated prompting.
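The pipeline the abstract describes can be sketched in three steps: score each client sample by its pointwise V-usable information (how much the input raises the model's log-likelihood of the label over a null input), prune low-information samples, and aggregate the resulting prompt vectors with FedAVG. The sketch below is illustrative only; the function names, the PVI threshold of zero, and the plain weighted averaging are assumptions, not the paper's actual implementation.

```python
import numpy as np

def pointwise_v_information(p_label_given_input, p_label_given_null):
    # PVI(x -> y) = log2 p(y | x) - log2 p(y | null input);
    # positive values mean the input makes the label easier to predict.
    return np.log2(p_label_given_input) - np.log2(p_label_given_null)

def prune_by_pvi(samples, pvi_scores, threshold=0.0):
    # Keep only samples whose PVI exceeds the threshold
    # (threshold of 0.0 is an assumed default, not from the paper).
    return [s for s, v in zip(samples, pvi_scores) if v > threshold]

def fedavg(client_prompts, client_sizes):
    # Standard FedAVG: average clients' prompt vectors,
    # weighted by the size of each client's (pruned) dataset.
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_prompts)
    return (weights[:, None] * stacked).sum(axis=0)
```

A client would score its local data with `pointwise_v_information`, tune its prompt on the samples kept by `prune_by_pvi`, and the server would combine the clients' prompts with `fedavg`.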

Original language: English
Pages (from-to): 14044-14054
Number of pages: 11
Journal: IEEE Transactions on Mobile Computing
Volume: 23
Issue number: 12
DOIs
Publication status: Published - 2024

Bibliographical note

Publisher Copyright:
© 2002-2012 IEEE.

Keywords

  • Context-aware prompt tuning
  • prompt federated learning
  • vision-language model
