Optimizing Distributed Computing Resources with Federated Learning: Task Scheduling and Communication Efficiency
Published 2025-03-30

This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This paper proposes a federated-learning-based method for optimizing distributed computing resources, with the goals of improving resource utilization in distributed systems, reducing communication overhead, and improving global-model performance. Under the federated learning framework, data need not be stored centrally: each node trains a model on its local data, and the global model is optimized by aggregating the nodes' local updates. Experimental results show that the federated-learning-based resource scheduling method outperforms traditional schedulers, such as round-robin and shortest-job-first scheduling, in task completion time, computing-resource utilization, and system stability. In addition, the federated-learning-optimized scheduler effectively reduces communication overhead across varying network conditions, demonstrating its advantage in resource-constrained environments. Through model aggregation, the system copes better with heterogeneity among nodes, further improving the accuracy and robustness of the global model. These results offer a new approach for distributed computing and edge computing and have practical significance. Future research will focus on optimizing the aggregation algorithm, improving model-training efficiency, and addressing the node resource differences and communication bottlenecks encountered in practical deployments, so as to extend federated learning to a wider range of applications.
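The abstract does not specify the aggregation rule; the following is a minimal sketch, assuming a FedAvg-style weighted average in which each node's update is weighted by its local sample count, applied to a toy linear model. The model, node sizes, and learning rate are illustrative assumptions, not the paper's actual setup; the sketch only shows the local-training/aggregation loop in which model weights, never raw data, travel between nodes.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """A few epochs of gradient descent on one node's private data.

    Hypothetical linear model with squared loss; stands in for
    whatever model each node trains locally in the paper's framework.
    """
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(local_weights, sample_counts):
    """FedAvg-style aggregation (assumption): weight each node's
    update by its local sample count, so heterogeneous nodes with
    more data contribute proportionally more to the global model."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

# Simulate three heterogeneous nodes holding unequal amounts of private data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
nodes = []
for n_samples in (50, 120, 30):
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    nodes.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # communication rounds
    # Each node trains locally; only weight vectors are communicated.
    locals_ = [local_update(global_w, X, y) for X, y in nodes]
    global_w = federated_average(locals_, [len(y) for _, y in nodes])

print("recovered weights:", global_w)  # approaches true_w
```

Because each round communicates only a weight vector per node rather than the nodes' datasets, the per-round communication cost is fixed by the model size, which is the source of the communication savings the abstract reports.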