Enhancing Privacy In Federated Learning: A Comprehensive Survey Of Preservation Techniques

7 May

Authors: D Naga Bharghavi, M Deepthi, K Manga Devi, M Aswitha, P Aswitha

Abstract: Federated Learning (FL) enables multiple devices or organizations to collaboratively train machine learning models without sharing raw data, thereby improving privacy. However, FL remains vulnerable to privacy threats such as model inversion, membership inference, and data leakage from shared model updates. To mitigate these risks, several privacy-preserving techniques have been developed, including differential privacy (DP), secure multiparty computation (SMC), homomorphic encryption (HE), and hybrid approaches that combine them. This paper offers a comprehensive analysis of these techniques, evaluating their privacy guarantees, computational costs, and impact on model accuracy. Differential privacy adds calibrated noise to protect individual data but can reduce model performance. SMC allows joint computation without exposing inputs but is computationally intensive. HE enables computation directly on encrypted data with strong security guarantees, though often at the expense of efficiency. Hybrid methods aim to balance these trade-offs by leveraging the strengths of each approach. The survey highlights key challenges, such as scalability and usability, in real-world FL deployments. It also identifies research gaps and proposes future directions focused on adaptive privacy mechanisms and hardware-assisted security, aiming toward more practical and robust privacy-preserving FL systems.
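To make the differential-privacy trade-off described above concrete, the following is a minimal sketch (not from the paper) of how a client's model update might be sanitized before being sent to the FL server, using the standard clip-then-add-Gaussian-noise pattern; the function name and parameters (`clip_norm`, `noise_multiplier`) are illustrative assumptions:

```python
import numpy as np

def dp_sanitize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Illustrative Gaussian-mechanism step for one client's update.

    1. Clip the update so its L2 norm is at most `clip_norm`, bounding
       any single client's influence (its sensitivity).
    2. Add Gaussian noise scaled to that sensitivity; larger
       `noise_multiplier` means stronger privacy but lower accuracy.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Example: a client update with L2 norm 5 is clipped to norm 1, then noised.
raw_update = np.array([3.0, 4.0])
private_update = dp_sanitize_update(raw_update)
```

The clipping step is what makes the noise scale meaningful: without a bound on each client's contribution, no finite amount of noise would yield a formal privacy guarantee.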

DOI: https://doi.org/10.5281/zenodo.20061078