Abstract
Federated learning (FL) has become a prominent paradigm for collaborative model training while ensuring data privacy. However, in resource-constrained environments, such as the Internet of Things (IoT), FL faces a distinct challenge from Lazybone attackers, who compromise system performance by providing low-quality data or conducting minimal local training to reduce their computational burden. In this article, we propose Fedeval, a novel multi-dimensional evaluation framework designed to defend against Lazybone attacks. Fedeval leverages a server-side base validation dataset and a base model to assess the quality and relevance of client contributions through gradient inversion, and it compares client-uploaded gradients with an honest baseline to detect training inconsistencies. By assigning adaptive importance scores based on client contributions, Fedeval enhances the robustness of FL by mitigating the impact of non-contributing participants. We also provide a theoretical analysis of Fedeval's convergence properties and validate its effectiveness through extensive experiments on four datasets and two attack scenarios. Our results demonstrate that Fedeval significantly accelerates convergence and improves accuracy by up to 13% compared to traditional methods.
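The abstract describes comparing client-uploaded gradients against an honest baseline gradient (computed on the server's base validation dataset) and converting the similarity into adaptive importance scores. The paper's exact scoring rule is not given here; the following is a minimal illustrative sketch, assuming cosine similarity with negative scores clipped to zero and weights normalized to sum to one. All function names (`importance_scores`, `aggregate`) are hypothetical, not from the paper.

```python
import math

def cosine_similarity(u, v):
    # Standard cosine similarity; returns 0.0 for a zero-norm vector,
    # which naturally down-weights a client that uploads an empty update.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def importance_scores(client_grads, baseline_grad):
    # Clip negative similarities so clients whose updates oppose the
    # honest baseline receive zero weight (assumed clipping rule).
    sims = [max(0.0, cosine_similarity(g, baseline_grad)) for g in client_grads]
    total = sum(sims)
    if total == 0.0:
        # Fall back to uniform weights if no client aligns with the baseline.
        return [1.0 / len(client_grads)] * len(client_grads)
    return [s / total for s in sims]

def aggregate(client_grads, weights):
    # Importance-weighted gradient aggregation on the server.
    return [sum(w * g[i] for w, g in zip(weights, client_grads))
            for i in range(len(client_grads[0]))]

baseline = [1.0, 1.0, 0.0]          # honest-baseline gradient (toy values)
grads = [
    [0.9, 1.1, 0.1],                # honest-looking client update
    [0.0, 0.0, 0.0],                # Lazybone client: no real training
    [-1.0, -1.0, 0.0],              # update opposing the baseline
]
weights = importance_scores(grads, baseline)
```

In this toy example the Lazybone client's zero update and the opposing update both receive zero weight, so the aggregate is driven entirely by the honest-looking client.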
| Original language | English |
|---|---|
| Article number | 5 |
| Journal | ACM Transactions on Sensor Networks |
| Volume | 21 |
| Issue number | 1 |
| Publication status | Published - 27 Jan 2025 |
Bibliographical note
Publisher Copyright: © 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.
Keywords
- Federated learning
- client evaluation
- cosine similarity
- federated learning attack
- gradient inversion