Published 2025-07-30

This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
The rise of autonomous driving technologies has prompted intensive research into intelligent decision-making systems capable of operating reliably under real-world conditions. This paper proposes a robust decision-making framework that integrates sensor fusion with deep reinforcement learning (DRL) to improve the performance of autonomous vehicles in complex urban environments. The system processes data from LiDAR, radar, and camera sensors to construct a unified environmental representation, which is then fed into a deep Q-network (DQN) to determine optimal driving actions. Experiments in a high-fidelity simulation environment demonstrate that the proposed framework reduces collision rates, improves route efficiency, and maintains real-time responsiveness, outperforming rule-based and unimodal DRL baselines. Our findings underscore the importance of combining multi-modal perception with learning-based policy optimization for safe and intelligent autonomous navigation.
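To make the described pipeline concrete, the sketch below illustrates one plausible realization of the idea in the abstract: per-sensor feature vectors are fused into a unified state representation and scored by a DQN head over discrete driving actions. It is not the authors' implementation; the feature dimensions, network sizes, and action set are illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's code): fuse LiDAR, radar,
# and camera features into one state vector, then score discrete driving
# actions with a DQN head.
import torch
import torch.nn as nn

class FusionDQN(nn.Module):
    def __init__(self, lidar_dim=64, radar_dim=16, camera_dim=128, n_actions=5):
        super().__init__()
        # Per-modality encoders project each sensor's features to a common size.
        self.lidar_enc = nn.Sequential(nn.Linear(lidar_dim, 64), nn.ReLU())
        self.radar_enc = nn.Sequential(nn.Linear(radar_dim, 64), nn.ReLU())
        self.camera_enc = nn.Sequential(nn.Linear(camera_dim, 64), nn.ReLU())
        # Fusion + Q-value head over an assumed discrete action set
        # (e.g. keep lane, accelerate, brake, turn left, turn right).
        self.q_head = nn.Sequential(
            nn.Linear(64 * 3, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, lidar, radar, camera):
        # Concatenate encoded modalities into the unified environmental representation.
        fused = torch.cat(
            [self.lidar_enc(lidar), self.radar_enc(radar), self.camera_enc(camera)],
            dim=-1,
        )
        return self.q_head(fused)  # one Q-value per driving action

# Greedy action selection from the fused state (random inputs stand in for real sensor features).
net = FusionDQN()
q_values = net(torch.randn(1, 64), torch.randn(1, 16), torch.randn(1, 128))
action = q_values.argmax(dim=-1)
```

In a full DQN training loop, these Q-values would be updated from replayed driving experience; the sketch only shows the fusion-then-policy structure the abstract describes.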