Adaptive Cross-Layer Attention Networks for Efficient Deep Learning on Resource-Constrained Devices
Published 2024-12-30

This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
With the growing demand for deploying deep learning models on edge devices such as smartphones, IoT nodes, and embedded systems, balancing accuracy and computational efficiency has become a critical challenge. This paper proposes an Adaptive Cross-Layer Attention Network (ACLAN) that leverages both spatial and channel attention across multiple convolutional layers to enhance feature representation while maintaining low computational overhead. By dynamically adjusting attention weights based on hardware constraints and task requirements, ACLAN achieves competitive accuracy on standard benchmarks such as CIFAR-100 and ImageNet-Mobile while reducing FLOPs and memory footprint. The proposed architecture is further evaluated through ablation studies, real-time inference benchmarks, and cross-platform deployment analysis, demonstrating its practicality for real-world deployment in smart devices and edge AI.
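To make the abstract's core idea concrete, the sketch below shows one way a block combining channel and spatial attention with a constraint-driven scaling factor could look. This is a minimal illustration assuming a PyTorch-style implementation; the class name `CrossLayerAttentionBlock` and the `budget` parameter are hypothetical and not taken from the paper, whose actual architecture is not reproduced here.

```python
# Minimal sketch (not the paper's code): a block combining channel and
# spatial attention, with a scalar "budget" gate that scales the attention
# contribution, loosely mimicking adaptation to hardware/task constraints.
import torch
import torch.nn as nn


class CrossLayerAttentionBlock(nn.Module):  # hypothetical name
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze-and-excitation style gating.
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Spatial attention: 7x7 conv over pooled channel statistics.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor, budget: float = 1.0) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention weights from global average pooling.
        w_ch = self.channel_fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        # Spatial attention weights from mean- and max-pooled channel maps.
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        w_sp = torch.sigmoid(self.spatial_conv(pooled))
        # 'budget' in [0, 1] blends attended features with the identity path;
        # a real system might set it from measured latency or memory limits.
        attended = x * w_ch * w_sp
        return budget * attended + (1.0 - budget) * x


# Usage: a 32-channel feature map with the attention contribution halved.
feat = torch.randn(2, 32, 56, 56)
block = CrossLayerAttentionBlock(channels=32)
out = block(feat, budget=0.5)
print(out.shape)  # torch.Size([2, 32, 56, 56])
```

In this sketch the budget simply interpolates between the attended and unmodified features; the paper's method of adapting attention weights to hardware constraints may differ substantially.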