Vol. 4 No. 9 (2025)
Articles

Integrating Context Compression and Structural Representation in Large Language Models for Financial Text Generation

Published 2025-09-30

How to Cite

Xue, P., & Yi, Y. (2025). Integrating Context Compression and Structural Representation in Large Language Models for Financial Text Generation. Journal of Computer Technology and Software, 4(9). Retrieved from https://ashpress.org/index.php/jcts/article/view/217

Abstract

This paper addresses key challenges in long-text generation and summarization in the financial domain, including context truncation, redundancy interference, and weak structural understanding. It proposes a large language model approach that integrates context window compression with structure-aware modeling. The method introduces an information selection mechanism that compresses ultra-long input sequences, reducing the information loss caused by limited context windows, and applies structural graph modeling to capture cross-sentence logical connections, strengthening the model's grasp of the multi-level structure and complex semantics of financial texts. During generation, the decoder is conditioned on the compressed, structure-enhanced representations, steering generation and summarization toward outputs that are semantically consistent and linguistically fluent. The study conducts systematic experiments on representative datasets from key financial subdomains and designs multi-dimensional analyses, including sensitivity to context length, redundancy-ratio perturbation, and subdomain variation, to evaluate the model's language quality and generation stability. The results show that the proposed method performs strongly on mainstream evaluation metrics such as ROUGE, demonstrates good stability and generalization, and adapts well to financial documents with different structures and expression styles. With the introduction of smoothing mechanisms and structural regularization, training converges quickly with low variance. These findings confirm the effectiveness and robustness of the proposed method for modeling highly structured financial texts.
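The abstract does not spell out how the information selection mechanism or the structural graph modeling are implemented, so the following is only a minimal sketch under assumed design choices: sentence-level bag-of-words salience stands in for the paper's selection mechanism, and lexical-overlap edges with degree centrality stand in for the structural graph. The function name `compress_context`, the 0.7/0.3 score weighting, and the thresholds are all hypothetical, not taken from the paper.

```python
import re
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words counts as toy sentence features (hypothetical stand-in
    for whatever representation the paper's selection mechanism uses)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def compress_context(sentences, query, budget_tokens=128, edge_threshold=0.2):
    """Select sentences by a combination of query salience and toy graph
    centrality, then restore document order so discourse structure survives."""
    vecs = [bow(s) for s in sentences]
    q = bow(query)
    salience = [cosine(v, q) for v in vecs]
    # Toy structural graph: an edge wherever two sentences overlap lexically.
    n = len(sentences)
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(vecs[i], vecs[j]) >= edge_threshold:
                degree[i] += 1
                degree[j] += 1
    max_deg = max(degree) or 1
    # Assumed weighting of salience vs. structural centrality.
    score = [0.7 * salience[i] + 0.3 * degree[i] / max_deg for i in range(n)]
    kept, used = [], 0
    for i in sorted(range(n), key=lambda i: -score[i]):
        cost = len(sentences[i].split())  # crude token count
        if used + cost <= budget_tokens:
            kept.append(i)
            used += cost
    return [sentences[i] for i in sorted(kept)]

doc = [
    "Quarterly revenue rose 12% on strong fixed-income trading.",
    "The cafeteria menu was updated in March.",
    "Net interest margin narrowed amid rate cuts.",
    "Management reiterated full-year revenue guidance.",
]
print(compress_context(doc, "revenue outlook and margins", budget_tokens=30))
```

In a full system, the compressed, structure-enhanced selection would then be placed in the model's encoder input or prompt before decoding, corresponding to the abstract's description of conditioning the decoder on these representations.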