Deep learning has advanced rapidly, giving rise to numerous applications and open research questions. This thesis explores two directions within deep learning: data privacy and sparsity.

In the first part, we study privacy-preserving machine learning and introduce NeuroMixGDP, a novel data publication algorithm that satisfies differential privacy (DP) while achieving significantly better utility than directly training classification networks with DPSGD on the CIFAR100 and MiniImagenet datasets, highlighting the advantages of privacy-preserving data release. We then examine membership inference attacks (MIA) in the federated learning setting and present a novel attack algorithm that combines model information from multiple communication rounds and from non-target clients, substantially improving attack effectiveness; we also evaluate existing defense mechanisms against this attack. Finally, we assess MIA on large language models, exploring different models, training methods, and the impact of differential privacy on privacy leakage, and we recommend employing DP as a countermeasure against such attacks.

In the second part, we present an approach to identifying lottery tickets, i.e., sparse sub-networks that offer comparable or superior generalization, using Deep Structure Splitting Linearized Bregman Iteration (DessiLBI). DessiLBI uncovers the structure of winning tickets at an early stage of training.
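The abstract does not detail how NeuroMixGDP achieves DP. As a rough illustrative sketch only (the function name, parameters, and clipping scheme below are assumptions for exposition, not the thesis's exact method), a mixup-then-Gaussian-mechanism data release could look like this: average small groups of norm-clipped feature vectors, then add Gaussian noise calibrated to the reduced per-sample sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)

def release_mixup_gaussian(features, labels, k=8, sigma=1.0):
    """Illustrative DP-style data release (not the thesis's exact algorithm):
    average k clipped feature vectors (mixup), then add Gaussian noise
    (Gaussian mechanism). Clipping each vector to unit L2 norm bounds one
    sample's contribution to a k-average by 1/k, so noise scales as sigma/k."""
    n, d = features.shape
    # Clip each feature vector to at most unit L2 norm.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    clipped = features / np.maximum(norms, 1.0)
    released_x, released_y = [], []
    for batch in np.array_split(rng.permutation(n), n // k):
        x_mix = clipped[batch].mean(axis=0)          # mixup of features
        y_mix = labels[batch].mean(axis=0)           # averaged one-hot labels
        released_x.append(x_mix + rng.normal(0, sigma / k, size=d))
        released_y.append(y_mix + rng.normal(0, sigma / k, size=y_mix.shape))
    return np.array(released_x), np.array(released_y)

# Toy usage: 64 samples with 16-dim features and 4 classes.
x = rng.normal(size=(64, 16))
y = np.eye(4)[rng.integers(0, 4, size=64)]
rx, ry = release_mixup_gaussian(x, y, k=8)
```

The released pairs `(rx, ry)` can then be used to train a downstream classifier without further access to the raw data, which is the data-release setting the abstract contrasts with direct DPSGD training.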
| Date of Award | 2024 |
|---|---|
| Original language | English |
| Awarding Institution | The Hong Kong University of Science and Technology |
| Supervisor | Yuan YAO (Supervisor) |
Studies on privacy-preserving techniques and sparsity in neural networks
LI, D. (Author). 2024
Student thesis: Doctoral thesis