Vol. 3 No. 5 (2024)
Articles

Adaptive Container Migration in Cloud-Native Systems via Deep Q-Learning Optimization

Published 2024-08-30

How to Cite

Zhu, W. (2024). Adaptive Container Migration in Cloud-Native Systems via Deep Q-Learning Optimization. Journal of Computer Technology and Software, 3(5). Retrieved from https://ashpress.org/index.php/jcts/article/view/197

Abstract

This paper proposes a reinforcement learning-based container migration optimization method to address the dynamism, high dimensionality, and multi-objective nature of scheduling in cloud-native environments. The migration process is modeled as a Markov Decision Process, and a Deep Q-Network is used to learn a policy mapping system states to migration actions. A state feature vector that comprehensively represents resource usage, network latency, and container distribution guides the model toward migration strategies with global optimality. A composite reward function is designed to balance multiple objectives, weighing load balancing, migration cost, and service latency so that the model performs well across all scheduling goals. In the experimental section, a public cloud computing dataset is used to validate the model, and the results show superior performance on key metrics such as resource utilization, load balancing, migration efficiency, and service delay. In addition, multiple comparative experiments and parameter sensitivity analyses explore the impact of key hyperparameters, such as learning rate and scheduling frequency, on system performance; the findings further demonstrate the effectiveness and stability of the proposed method in dynamic resource scheduling tasks. Through systematic modeling and policy optimization, this paper provides an adaptive and intelligent solution to the container migration problem, supporting improved resource management and system responsiveness in cloud-native platforms under complex operating conditions.
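To make the abstract's formulation concrete, the sketch below illustrates one plausible shape for the state feature vector and the composite reward described above. The specific feature choices, weights, and reward terms here are illustrative assumptions, not the paper's actual design: the state concatenates per-node resource utilization, network latency, and container distribution, while the reward trades off load balance (low utilization variance) against migration cost and service latency.

```python
import numpy as np

def build_state(cpu_utils, net_latencies, container_counts):
    """Illustrative state feature vector (assumed layout, not from the paper):
    per-node CPU utilization, per-node network latency, and the normalized
    container distribution, concatenated into one flat vector."""
    counts = np.asarray(container_counts, dtype=float)
    distribution = counts / counts.sum() if counts.sum() > 0 else counts
    return np.concatenate([cpu_utils, net_latencies, distribution])

def composite_reward(cpu_utils, migration_cost, service_latency,
                     w_balance=0.4, w_cost=0.3, w_latency=0.3):
    """Illustrative composite reward (weights are assumptions):
    rewards even load (low std of utilization) and penalizes the cost of
    the chosen migration action and the resulting service latency."""
    balance_term = -np.std(cpu_utils)   # lower spread -> better load balance
    cost_term = -migration_cost         # penalize expensive migrations
    latency_term = -service_latency     # penalize slow service response
    return (w_balance * balance_term
            + w_cost * cost_term
            + w_latency * latency_term)
```

Under this shape of reward, a DQN agent choosing migration actions is pushed toward placements that even out node utilization without incurring excessive migration overhead; for example, a cluster with utilizations `[0.5, 0.5, 0.5]` scores higher than one with `[0.1, 0.9, 0.5]` at equal cost and latency.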