Abstract
This paper explores how Federated Learning (FL) systems can be strengthened through the integration of Differential Privacy (DP). While FL allows multiple clients to collaboratively train a shared model without exposing raw data, the model updates exchanged during training may still leak sensitive information. To address this, DP is applied via gradient clipping and Gaussian noise addition, thereby reducing the risk of privacy breaches. The study employs the FedAvg algorithm in simulation experiments with ten clients under three noise levels (σ = 0.0, 0.5, 1.0), evaluating outcomes in terms of accuracy, log loss, and an illustrative Rényi-DP privacy budget (ε). Results highlight the trade-off between privacy and utility: models without noise achieve the highest accuracy but the weakest privacy, moderate noise provides balanced performance, and stronger noise enhances privacy at the expense of accuracy. The findings emphasize the importance of tuning parameters such as the clipping norm, noise multiplier, number of communication rounds, and participation rate to balance formal privacy protection with model utility. The study concludes by recommending standardized privacy accounting, randomized client participation, and task-specific parameter tuning as essential practices for securely deploying FL in sensitive domains such as healthcare, finance, and the Internet of Things.
Introduction
Federated Learning (FL) is an innovative distributed machine learning approach that keeps raw data on users’ devices, addressing privacy, security, and data ownership concerns common in centralized methods. FL enables multiple clients—such as hospitals, banks, or mobile devices—to collaboratively train a shared model by only exchanging local model updates rather than raw data, preserving sensitive information.
However, FL alone cannot fully guarantee privacy because the exchanged updates can still leak information. Differential Privacy (DP) complements FL by mathematically ensuring that the participation of any individual client cannot be discerned from the shared updates. DP achieves this through two main techniques: gradient clipping (limiting the magnitude of each client's update) and Gaussian noise addition to the aggregated updates, which together balance privacy protection and model performance.
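As a concrete illustration, the following minimal Python sketch (not the paper's actual code; the function names and constants are illustrative) clips each client's update to an L2 norm bound C and adds Gaussian noise with standard deviation σ·C to the aggregate before averaging, which is the standard Gaussian-mechanism recipe for DP-FL.

import numpy as np

def clip_update(update, clip_norm):
    # Scale the update so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_aggregate(client_updates, clip_norm, noise_multiplier, rng):
    # Sum the clipped updates, add Gaussian noise with std sigma * C
    # (the Gaussian mechanism), then average over clients.
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    total = np.sum(clipped, axis=0)
    total += rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return total / len(client_updates)

# Illustrative call with random "updates" from 10 clients.
rng = np.random.default_rng(0)
updates = [rng.normal(size=5) for _ in range(10)]
noisy_mean = dp_aggregate(updates, clip_norm=1.0, noise_multiplier=0.5, rng=rng)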
A privacy accountant tracks cumulative privacy loss across multiple training rounds, expressed via parameters (ε, δ), enabling informed trade-offs between privacy guarantees and model utility.
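A minimal accounting sketch follows. It assumes the RDPAccountant interface from the Opacus library (one step() call per round with the noise multiplier and client sampling rate, then get_epsilon() at a chosen δ); all numbers are placeholders rather than values from the paper, and the exact API should be checked against the installed version.

from opacus.accountants import RDPAccountant  # assumed API; verify for your version

noise_multiplier = 0.5   # sigma of the Gaussian noise
sample_rate = 0.1        # fraction of clients sampled per round
rounds = 100             # FL communication rounds
delta = 1e-5

accountant = RDPAccountant()
for _ in range(rounds):
    # One accounting step per communication round.
    accountant.step(noise_multiplier=noise_multiplier, sample_rate=sample_rate)

epsilon = accountant.get_epsilon(delta=delta)
print(f"After {rounds} rounds: (epsilon={epsilon:.2f}, delta={delta})")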
The study explores how key DP parameters—the noise multiplier (σ) and clipping norm (C)—affect FL’s performance and privacy. Using simulations with logistic regression on synthetic data, it finds that moderate noise (σ = 0.5) preserves accuracy close to a non-private baseline while improving privacy, whereas strong noise (σ = 1.0) enhances privacy but reduces accuracy. The results highlight a critical balance between protecting sensitive data and maintaining useful model outcomes.
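A hypothetical reconstruction of such a simulation is sketched below: FedAvg over 10 clients training a logistic regression model on synthetic data, with per-client update clipping and server-side Gaussian noise. The dataset, local learning rate, round count, and clipping norm are illustrative assumptions, not the paper's settings.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, log_loss

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
client_X, client_y = np.array_split(X, 10), np.array_split(y, 10)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(w, Xc, yc, lr=0.1, epochs=5):
    # A few local gradient steps; return the delta from the global model.
    w_local = w.copy()
    for _ in range(epochs):
        grad = Xc.T @ (sigmoid(Xc @ w_local) - yc) / len(yc)
        w_local -= lr * grad
    return w_local - w

def fedavg_dp(rounds=50, clip_norm=1.0, sigma=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(rounds):
        deltas = []
        for Xc, yc in zip(client_X, client_y):
            d = local_update(w, Xc, yc)
            d *= min(1.0, clip_norm / (np.linalg.norm(d) + 1e-12))  # clip
            deltas.append(d)
        total = np.sum(deltas, axis=0)
        total += rng.normal(0.0, sigma * clip_norm, size=total.shape)  # noise
        w += total / len(deltas)
    return w

for sigma in (0.0, 0.5, 1.0):
    w = fedavg_dp(sigma=sigma)
    p = sigmoid(X @ w)
    print(f"sigma={sigma}: accuracy={accuracy_score(y, (p > 0.5).astype(int)):.3f}, "
          f"log loss={log_loss(y, p):.3f}")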
The research situates itself within current literature, referencing foundational works in FL and DP, and emphasizes practical deployment using tools like TensorFlow Privacy and Opacus. It concludes with actionable insights for tuning privacy parameters, leveraging client subsampling, and managing training rounds to optimize the privacy-utility trade-off, particularly in sensitive sectors such as healthcare and finance.
Conclusion
The conclusion highlights that combining Federated Learning (FL) with Differential Privacy (DP) is a methodical and ethical approach to developing privacy-preserving distributed AI systems. Although FL avoids centralizing raw data, sensitive information can still leak from gradient updates in the absence of DP. By incorporating DP methods, particularly clipping and Gaussian noise, the system ensures that no single client's data can be inferred with high confidence from the shared updates.
According to the simulation results, σ = 0.5 strikes a reasonable balance, providing a meaningful degree of privacy protection while preserving accuracy comparable to the non-DP baseline. In comparison, σ = 1.0 greatly strengthens the privacy guarantees but causes more pronounced performance deterioration. This result demonstrates the intrinsic privacy-utility trade-off in DP-FL systems: stricter privacy budgets come at greater utility cost.
The conclusion emphasizes three key recommendations for practical deployments:
1) Rigorous Privacy Accounting: Monitoring cumulative privacy loss with formal accountants such as Rényi Differential Privacy (RDP) or the moments accountant.
2) Noise Calibration: Choosing σ carefully, under realistic client participation rates, so that training stays within a specified privacy budget (ε); a calibration sketch follows this list.
3) Utility Validation: Verifying that models continue to meet performance standards by testing them on representative datasets.
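As an illustration of the noise-calibration step, the sketch below binary-searches the noise multiplier σ so that the accumulated privacy loss after a fixed number of rounds stays within a target ε. It reuses the assumed Opacus RDPAccountant interface from the accounting sketch above; all constants are placeholders.

from opacus.accountants import RDPAccountant  # assumed API, as above

def epsilon_after(sigma, rounds, sample_rate, delta):
    # Cumulative epsilon after `rounds` subsampled Gaussian steps.
    acc = RDPAccountant()
    for _ in range(rounds):
        acc.step(noise_multiplier=sigma, sample_rate=sample_rate)
    return acc.get_epsilon(delta=delta)

def calibrate_sigma(target_eps, rounds=100, sample_rate=0.1, delta=1e-5,
                    lo=0.3, hi=10.0, iters=30):
    # Epsilon decreases as sigma grows, so binary-search for the smallest
    # sigma whose epsilon stays under the target budget.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if epsilon_after(mid, rounds, sample_rate, delta) > target_eps:
            lo = mid   # too little noise: epsilon still above the budget
        else:
            hi = mid   # enough noise: try a smaller sigma
    return hi

print(calibrate_sigma(target_eps=3.0))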
When paired with complementary security mechanisms such as secure aggregation and robust aggregation rules, DP-FL can become a reliable approach for domains such as healthcare, finance, and the Internet of Things, where safeguarding private user data is essential.