Published 2024-12-30
This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This paper proposes a time series data mining method based on graph neural networks and the Transformer architecture, aiming to address the challenges of modeling complex dependencies and dynamic features in multivariate time series. By introducing an adaptive adjacency matrix, the model dynamically learns the relationships between variables and uses a graph neural network to capture local dependency characteristics; the multi-head attention mechanism of the Transformer then models the global temporal dependencies of the series. The method is validated on two tasks, regression and classification. Results on power load forecasting and the UCI human activity recognition datasets show that the proposed method outperforms traditional statistical models, machine learning models, and existing deep learning models across multiple metrics (such as MSE, MAE, accuracy, and F1 score), demonstrating its superior performance. Ablation experiments further verify the contribution of each key module to overall performance, and visualization analysis illustrates the model's ability to learn variable relationships and capture key information in the time series. The study shows that the proposed method not only offers strong time-series mining capability but also good generalization and robustness. Future work will explore its potential in label-scarce scenarios and real-time tasks, combining self-supervised learning and model lightweighting techniques to improve its applicability and deployment efficiency across a wider range of domains.
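
To make the described architecture concrete, the sketch below illustrates one plausible reading of the abstract: an adaptive adjacency matrix parameterized by learnable node embeddings, a graph convolution that mixes information across variables, and a Transformer encoder over the time dimension. This is not the authors' code; the class name `AdaptiveGraphTransformer`, the softmax-of-ReLU adjacency formulation, all layer sizes, and the one-step forecasting head are assumptions made for illustration only.

```python
# Minimal sketch (assumed, not the paper's implementation) of a GNN + Transformer
# hybrid for multivariate time series, in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveGraphTransformer(nn.Module):
    def __init__(self, num_nodes, in_dim, hid_dim, num_heads=4, num_layers=2):
        super().__init__()
        # Learnable node embeddings that parameterize the adaptive adjacency matrix.
        self.node_emb = nn.Parameter(torch.randn(num_nodes, hid_dim))
        # Projection applied after mixing features across variables.
        self.gc_proj = nn.Linear(in_dim, hid_dim)
        # Transformer encoder models global temporal dependencies.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=num_nodes * hid_dim, nhead=num_heads, batch_first=True
        )
        self.temporal_encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        # Assumed task head: one-step-ahead forecast for every variable.
        self.head = nn.Linear(num_nodes * hid_dim, num_nodes)

    def adaptive_adjacency(self):
        # A = softmax(ReLU(E E^T)): non-negative, row-normalized variable relations
        # learned directly from data rather than fixed a priori.
        scores = F.relu(self.node_emb @ self.node_emb.t())
        return F.softmax(scores, dim=-1)

    def forward(self, x):
        # x: (batch, time, num_nodes, in_dim)
        b, t, n, d = x.shape
        adj = self.adaptive_adjacency()                    # (n, n)
        # Graph convolution: mix information across variables at each time step.
        x = torch.einsum("ij,btjd->btid", adj, x)
        x = F.relu(self.gc_proj(x))                        # (b, t, n, hid)
        # Flatten variables so the Transformer attends over time steps.
        x = x.reshape(b, t, -1)
        x = self.temporal_encoder(x)                       # (b, t, n * hid)
        return self.head(x[:, -1])                         # (b, n) forecast


if __name__ == "__main__":
    model = AdaptiveGraphTransformer(num_nodes=8, in_dim=1, hid_dim=16)
    dummy = torch.randn(2, 24, 8, 1)  # batch of 2, 24 time steps, 8 variables
    print(model(dummy).shape)         # torch.Size([2, 8])
```

A classification variant of the same skeleton would simply pool the encoder output over time and replace the forecasting head with a class-logit layer; the ablation analysis mentioned in the abstract would correspond to removing the adaptive adjacency (e.g., using an identity matrix) or the temporal encoder and measuring the drop in the reported metrics.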