TY - JOUR
T1 - LDPGuard
T2 - Defenses Against Data Poisoning Attacks to Local Differential Privacy Protocols
AU - Huang, Kai
AU - Ouyang, Gaoya
AU - Ye, Qingqing
AU - Hu, Haibo
AU - Zheng, Bolong
AU - Zhao, Xi
AU - Zhang, Ruiyuan
AU - Zhou, Xiaofang
N1 - Publisher Copyright:
© 1989-2012 IEEE.
PY - 2024/7/1
Y1 - 2024/7/1
N2 - Protocols that satisfy Local Differential Privacy (LDP) enable untrusted third parties to collect aggregate information about a population without compromising any individual user's privacy. In particular, each user locally encodes and perturbs their private data before sending it to the data collector, who aggregates the collected perturbed values and estimates statistics about the population. Owing to their growing importance, LDP protocols have been widely studied and deployed in real-world systems (e.g., Chrome and Windows). However, attackers can mount data poisoning attacks by injecting many fake users, heavily degrading the utility of the estimated statistics. In this paper, we present a generic and extensible framework called LDPGuard to address this problem. LDPGuard provides effective defenses against data poisoning attacks on LDP protocols for frequency estimation, a basic query underlying most data analytics tasks. In particular, it first precisely estimates the percentage of fake users and then applies adversarial schemes to defend against particular data poisoning attacks. An experimental study on real-world and synthetic datasets demonstrates the superiority of LDPGuard over existing techniques.
AB - Protocols that satisfy Local Differential Privacy (LDP) enable untrusted third parties to collect aggregate information about a population without compromising any individual user's privacy. In particular, each user locally encodes and perturbs their private data before sending it to the data collector, who aggregates the collected perturbed values and estimates statistics about the population. Owing to their growing importance, LDP protocols have been widely studied and deployed in real-world systems (e.g., Chrome and Windows). However, attackers can mount data poisoning attacks by injecting many fake users, heavily degrading the utility of the estimated statistics. In this paper, we present a generic and extensible framework called LDPGuard to address this problem. LDPGuard provides effective defenses against data poisoning attacks on LDP protocols for frequency estimation, a basic query underlying most data analytics tasks. In particular, it first precisely estimates the percentage of fake users and then applies adversarial schemes to defend against particular data poisoning attacks. An experimental study on real-world and synthetic datasets demonstrates the superiority of LDPGuard over existing techniques.
KW - Adversarial schemes
KW - data poisoning attacks
KW - frequency estimation
KW - local differential privacy
UR - https://www.webofscience.com/wos/woscc/full-record/WOS:001245017200024
UR - https://openalex.org/W4391250906
UR - https://www.scopus.com/pages/publications/85183940827
U2 - 10.1109/TKDE.2024.3358909
DO - 10.1109/TKDE.2024.3358909
M3 - Journal Article
SN - 1041-4347
VL - 36
SP - 3195
EP - 3209
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
IS - 7
ER -