Published 2024-07-30
This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This paper proposes a differential-privacy-enhanced federated learning framework that addresses the twin challenges of privacy protection and robustness. The study first analyzes the limitations of traditional federated learning under parameter aggregation and distribution heterogeneity, noting that relying on distributed modeling alone is insufficient to prevent data leakage or to withstand adversarial risks. In the method design, gradient clipping and noise injection are introduced to enforce differential privacy, and robust aggregation operators are employed to suppress the negative impact of malicious clients and abnormal data distributions. On this basis, the framework is systematically evaluated through comparative and sensitivity experiments along dimensions such as learning rate, client sampling rate, data imbalance, and adversarial noise amplitude, using accuracy, precision, recall, and F1-score as evaluation metrics. The results show that the proposed method maintains high utility while preserving privacy and performs stably in complex environments. This work not only validates the effective integration of differential privacy with robustness-oriented design but also provides a complete technical pathway for building trustworthy intelligent systems in high-risk, sensitive-data scenarios.

Against this background, the integration of differential privacy and federated learning has become a research focus in recent years: studies show that introducing differential privacy into distributed modeling can protect user data while improving system reliability under non-ideal conditions, both resisting external attacks and suppressing interference from malicious clients, thereby enhancing overall robustness. However, most existing work still emphasizes either privacy protection or robustness in isolation and lacks a systematic framework that optimizes both simultaneously. Exploring differential-privacy-enhanced federated learning as a route to more robust AI systems is therefore not only an extension of existing research but also a necessary direction for advancing trustworthy artificial intelligence.
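To make the privacy mechanism concrete, the following minimal Python sketch illustrates the per-client step described above: clipping the gradient to a fixed L2 norm and then injecting Gaussian noise calibrated to that bound, in the spirit of DP-SGD. The function name dp_client_update and the default values of clip_norm and noise_multiplier are illustrative placeholders; the paper does not state its exact parameterization:

import numpy as np

def dp_client_update(gradient, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    # Illustrative sketch: clip the client gradient to an L2 norm bound,
    # then add Gaussian noise calibrated to that bound (Gaussian mechanism).
    # clip_norm and noise_multiplier are placeholders, not the paper's values.
    rng = rng if rng is not None else np.random.default_rng()
    g = np.asarray(gradient, dtype=np.float64)
    # Scale down so that ||g||_2 <= clip_norm (a no-op if already within bound).
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
    # The noise standard deviation is proportional to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=g.shape)
    return g + noise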
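The abstract does not name the robust aggregation operator, so the sketch below substitutes a coordinate-wise trimmed mean, one common choice for suppressing malicious or outlier client updates on the server side; the function trimmed_mean_aggregate and its trim_fraction parameter are assumptions made for illustration only:

import numpy as np

def trimmed_mean_aggregate(client_updates, trim_fraction=0.1):
    # Assumed robust aggregation operator: coordinate-wise trimmed mean.
    # For each model coordinate, drop the k largest and k smallest client
    # values before averaging, limiting any single client's influence.
    updates = np.stack(client_updates)           # (num_clients, num_params)
    k = int(trim_fraction * updates.shape[0])    # clients trimmed per tail
    sorted_updates = np.sort(updates, axis=0)    # sort each coordinate
    if k > 0:
        sorted_updates = sorted_updates[k:-k]
    return sorted_updates.mean(axis=0)

In a full training round, each client would apply a step like dp_client_update locally and the server would combine the noisy updates with an operator of this kind, so that privacy and robustness are enforced at different points of the pipeline.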