Published 2025-05-30

This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This paper proposes a structural optimization method for large language models based on a perception-representation integration mechanism, with the goal of enhancing semantic construction and contextual consistency in complex language scenarios. The method introduces perceptual feature extraction modules and a perception-guided attention mechanism, enabling dynamic semantic modeling of language input and multi-level interaction between structure and perception, and thereby addressing the disconnect between representation and structure in traditional language models. In implementation, a perception-driven representation update strategy is integrated into the GPT architecture, and a perception graph is constructed to regulate the attention distribution, improving the model's structural expressiveness. Experiments on the WikiText-103 dataset show that the proposed method outperforms mainstream language models on key metrics, including perplexity, BLEU, and semantic consistency. In addition, a series of hyperparameter sensitivity experiments and comparative analyses of perception injection strategies are conducted to evaluate the impact of the structural components on model performance. The results confirm the stability and effectiveness of the proposed mechanism under different training configurations.
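To make the abstract's central idea concrete, the sketch below illustrates one plausible way a perception graph could regulate the attention distribution in a GPT-style block. This is a minimal illustration under assumptions, not the authors' implementation: the module name PerceptionGuidedAttention, the percep_proj projection, and the additive-bias formulation are all hypothetical choices made for this example.

```python
# Minimal sketch (assumed, not the paper's code) of perception-guided attention:
# a perception graph built from token features is added as a bias to the
# attention logits of a causal self-attention layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerceptionGuidedAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Hypothetical: projects token representations into perceptual
        # features used to build a pairwise perception graph.
        self.percep_proj = nn.Linear(d_model, self.d_head)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        # Standard scaled dot-product attention logits.
        logits = q @ k.transpose(-2, -1) / self.d_head ** 0.5

        # Assumed perception graph: pairwise similarity of perceptual
        # features, added as a head-shared bias on the logits.
        p = self.percep_proj(x)                               # (b, t, d_head)
        graph = p @ p.transpose(-2, -1) / self.d_head ** 0.5  # (b, t, t)
        logits = logits + graph.unsqueeze(1)

        # Causal mask, as in a standard GPT decoder block.
        causal = torch.tril(torch.ones(t, t, dtype=torch.bool, device=x.device))
        logits = logits.masked_fill(~causal, float("-inf"))
        attn = F.softmax(logits, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.out(y)
```

Under this reading, the perception graph acts as a structural prior on where attention mass is allowed to concentrate; the specific form of the graph and how it is injected are design choices the paper's later sections would determine.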