TY - JOUR
T1 - Explore and Cure
T2 - Unveiling Sample Effectiveness with Context-Aware Federated Prompt Tuning
AU - Guo, Tao
AU - Guo, Song
AU - Wang, Junxiao
N1 - Publisher Copyright:
© 2002-2012 IEEE.
PY - 2024
Y1 - 2024
N2 - Using pre-trained vision-language models like CLIP with federated training prompts has shown great potential in federated learning (FL), offering significant benefits in computation, communication, and privacy over existing frameworks. However, existing research overlooks the internal mechanisms underlying federated prompt tuning and adheres to the traditional context-unaware tuning mechanism. Our experiments, in contrast, demonstrate that federated prompting is a data-efficient but data-sensitive paradigm; therefore, the samples involved in the prompt tuning process hold significant importance. To address this issue, we propose Context-aware Federated Prompt Tuning (CaFPT), which facilitates the retrieval process by conditioning on the examples capable of activating the most pertinent knowledge inside the pre-trained models, using information theory. Moving in this direction steers the behavior of pre-trained neurons precisely and improves performance on the local task. Informative vectors are built by pruning clients' training data based on their V-usable information. The study shows that these vectors can be updated and combined through operations like FedAVG, and the resulting model's behavior is steered accordingly on multiple clients' tasks. Extensive experiments demonstrate that informative vectors offer promising robustness, making them a simple yet effective way to enhance the performance of federated prompting.
AB - Using pre-trained vision-language models like CLIP with federated training prompts has shown great potential in federated learning (FL), offering significant benefits in computation, communication, and privacy over existing frameworks. However, existing research overlooks the internal mechanisms underlying federated prompt tuning and adheres to the traditional context-unaware tuning mechanism. Our experiments, in contrast, demonstrate that federated prompting is a data-efficient but data-sensitive paradigm; therefore, the samples involved in the prompt tuning process hold significant importance. To address this issue, we propose Context-aware Federated Prompt Tuning (CaFPT), which facilitates the retrieval process by conditioning on the examples capable of activating the most pertinent knowledge inside the pre-trained models, using information theory. Moving in this direction steers the behavior of pre-trained neurons precisely and improves performance on the local task. Informative vectors are built by pruning clients' training data based on their V-usable information. The study shows that these vectors can be updated and combined through operations like FedAVG, and the resulting model's behavior is steered accordingly on multiple clients' tasks. Extensive experiments demonstrate that informative vectors offer promising robustness, making them a simple yet effective way to enhance the performance of federated prompting.
KW - Context-aware prompt tuning
KW - prompt federated learning
KW - vision-language model
UR - https://www.webofscience.com/wos/woscc/full-record/WOS:001359244600013
UR - https://openalex.org/W4401387181
UR - https://www.scopus.com/pages/publications/85200819851
U2 - 10.1109/TMC.2024.3439864
DO - 10.1109/TMC.2024.3439864
M3 - Journal Article
SN - 1536-1233
VL - 23
SP - 14044
EP - 14054
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
IS - 12
ER -