Vol. 3 No. 8 (2024)
Articles

Enhancing Neural Network Efficiency through Sparse Training: A Novel Approach to Resource Optimization

Published 2024-11-30

How to Cite

Elwood, F., & Vesper, I. (2024). Enhancing Neural Network Efficiency through Sparse Training: A Novel Approach to Resource Optimization. Journal of Computer Technology and Software, 3(8). Retrieved from https://ashpress.org/index.php/jcts/article/view/101

Abstract

The exponential growth of deep learning applications has led to increased demand for computational and energy resources. This paper explores a novel sparse training framework that leverages adaptive pruning and weight redistribution to optimize neural network efficiency. By systematically eliminating insignificant connections during training and reallocating resources to critical pathways, the proposed method achieves substantial reductions in computational cost and memory usage without compromising model accuracy. Experimental evaluations on benchmark image datasets such as CIFAR-10 and ImageNet, as well as on NLP tasks, demonstrate that the framework outperforms traditional dense training and static pruning methods in terms of efficiency and scalability. The study further discusses implications for deploying deep learning models in resource-constrained environments.
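The abstract does not specify the exact pruning and redistribution rule. As an illustration only, the sketch below shows one prune-and-redistribute step under a common assumption: the smallest-magnitude active weights are dropped and an equal number of connections are regrown elsewhere, keeping the total connection budget (and hence memory and FLOPs) fixed throughout training. The function name, fraction, and re-initialization scheme are hypothetical, not the authors' method.

```python
import numpy as np

def prune_and_redistribute(weights, mask, prune_frac=0.2, rng=None):
    """One sparse-training update: drop the weakest active weights, then
    regrow the same number of connections at new positions.

    Illustrative magnitude-pruning-with-regrowth sketch, not the paper's
    exact adaptive pruning / weight-redistribution rule.
    """
    rng = np.random.default_rng() if rng is None else rng
    flat_w, flat_m = weights.ravel(), mask.ravel()

    active = np.flatnonzero(flat_m)            # indices of live connections
    n_prune = int(prune_frac * active.size)
    if n_prune == 0:
        return mask

    # 1. Prune: remove the n_prune active weights with smallest magnitude.
    order = np.argsort(np.abs(flat_w[active]))
    dropped = active[order[:n_prune]]
    flat_m[dropped] = 0
    flat_w[dropped] = 0.0

    # 2. Redistribute: regrow the same budget at currently inactive positions,
    #    so total sparsity (and resource cost) stays constant.
    inactive = np.flatnonzero(flat_m == 0)
    regrow = rng.choice(inactive, size=n_prune, replace=False)
    flat_m[regrow] = 1
    flat_w[regrow] = rng.normal(0.0, 0.01, size=n_prune)  # small re-init

    return mask

# Example: keep a 784x256 layer at ~90% sparsity across training steps.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.05, size=(784, 256))
mask = (rng.random(W.shape) < 0.10).astype(np.int8)   # ~10% of weights active
W *= mask
mask = prune_and_redistribute(W, mask, prune_frac=0.2, rng=rng)
```

In this sketch regrowth sites are chosen uniformly at random; gradient- or saliency-guided regrowth is an equally plausible reading of "reallocating resources to critical pathways", but the abstract does not say which criterion the framework uses.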