Published 2024-09-30

This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This paper addresses structural rigidity, parameter redundancy, and insufficient semantic adaptation in the fine-tuning of large language models, and proposes a structure-aware fine-tuning mechanism based on modular reconfiguration. The method freezes the backbone parameters of the pretrained model and introduces a set of learnable modules together with a task-aware controller; through structural decoupling and semantic alignment, it enables dynamic reorganization of internal structural paths and the injection of new functionality into the model. The design incorporates a module activation gating strategy and a structural consistency regularization term, which strengthen functional separation among modules, stabilize their combination, and allow the structure to adapt dynamically to different task inputs. To evaluate the approach, sensitivity and robustness experiments are conducted under varying module counts, learning rates, input lengths, and noise levels, assessing structural adaptability, module utilization, and task alignment. Results show that the proposed method substantially improves structural generalization and input robustness while remaining parameter-efficient, and that it exhibits strong multi-task responsiveness and semantic controllability. This study provides a new design perspective and technical foundation for building structurally controllable, task-sensitive fine-tuning frameworks for large language models.
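
To make the described design concrete, the following is a minimal sketch of the core idea as stated in the abstract: a frozen backbone layer, a set of learnable modules, a task-aware controller that gates module activation, and a consistency term over the gate patterns. All class names, shapes, and the choice of bottleneck adapters as the module form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedModuleSet(nn.Module):
    """A frozen backbone layer augmented with a learnable module set.

    A task-aware controller maps a task embedding to per-module gate
    values; the gated module outputs are added to the frozen layer output.
    (Hypothetical sketch of the mechanism described in the abstract.)
    """

    def __init__(self, backbone_layer: nn.Module, hidden_dim: int,
                 num_modules: int = 4, task_dim: int = 64):
        super().__init__()
        self.backbone_layer = backbone_layer
        for p in self.backbone_layer.parameters():
            p.requires_grad = False  # backbone parameters stay frozen

        # Learnable module set: small bottleneck adapters (illustrative choice).
        self.module_set = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_dim, hidden_dim // 4),
                nn.GELU(),
                nn.Linear(hidden_dim // 4, hidden_dim),
            )
            for _ in range(num_modules)
        ])

        # Task-aware controller: task embedding -> module activation gates.
        self.controller = nn.Linear(task_dim, num_modules)

    def forward(self, x: torch.Tensor, task_emb: torch.Tensor):
        base = self.backbone_layer(x)                      # frozen path
        gates = torch.sigmoid(self.controller(task_emb))   # (batch, num_modules)

        injected = torch.zeros_like(base)
        for i, module in enumerate(self.module_set):
            # Broadcast each gate over the sequence and hidden dimensions.
            injected = injected + gates[:, i, None, None] * module(x)

        return base + injected, gates


def structural_consistency_loss(gates_a: torch.Tensor,
                                gates_b: torch.Tensor) -> torch.Tensor:
    """Penalize divergent gate patterns across two views of the same input,
    one plausible reading of the structural consistency regularizer."""
    return F.mse_loss(gates_a, gates_b)


# Usage example with assumed shapes (batch=2, seq_len=16, hidden=768).
layer = GatedModuleSet(nn.Linear(768, 768), hidden_dim=768)
x = torch.randn(2, 16, 768)
task_emb = torch.randn(2, 64)
out, gates = layer(x, task_emb)
```

Keeping the backbone frozen and restricting gradients to the module set and controller is what preserves parameter efficiency in this sketch; the gate vector is the structural quantity that the consistency term regularizes.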