In this thesis, we propose an interpretable Transformer that can visualize trend and seasonality patterns separately. Our model uses a learned positional embedding to represent the trend patterns and a learned temporal embedding to represent the seasonality patterns, computing the two patterns separately through individual self-attention modules. In previous Transformer-based work, the learned positional embedding and a fixed temporal encoding are concatenated and fed into a single self-attention module. However, this operation may introduce noise, since it mixes heterogeneous vectors carrying different kinds of information. Unlike previous work, we use a learned temporal embedding instead of a fixed temporal encoding to extract temporal information, and we model and visualize the trend and seasonality patterns separately, both for better performance and for practical real-world applications. Experiments on real-world datasets show that our model gives interpretable results while achieving state-of-the-art performance on most benchmarks.
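The separation described above can be sketched as two embedding tables routed through independent self-attention branches. The sketch below is a minimal NumPy illustration, not the thesis's implementation: the embedding tables and attention weights are random stand-ins for trained parameters, and all names (`pos_emb`, `tmp_emb`, the per-branch `Wq`/`Wk`/`Wv` matrices) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

rng = np.random.default_rng(0)
seq_len, d = 8, 16

# Random stand-ins for learned parameters (illustrative only):
pos_emb = rng.normal(size=(seq_len, d))  # learned positional embedding -> trend branch
tmp_emb = rng.normal(size=(seq_len, d))  # learned temporal embedding  -> seasonality branch

# Each branch gets its own attention parameters, rather than
# concatenating the embeddings into a single attention module
Wq_t, Wk_t, Wv_t = (rng.normal(size=(d, d)) for _ in range(3))
Wq_s, Wk_s, Wv_s = (rng.normal(size=(d, d)) for _ in range(3))

trend = self_attention(pos_emb, Wq_t, Wk_t, Wv_t)        # (seq_len, d)
seasonality = self_attention(tmp_emb, Wq_s, Wk_s, Wv_s)  # (seq_len, d)
```

Because the two branches never mix inside attention, `trend` and `seasonality` can be inspected and plotted independently, which is the source of the model's interpretability.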
| Field | Value |
|---|---|
| Date of Award | 2021 |
| Original language | English |
| Awarding Institution | The Hong Kong University of Science and Technology |
| Supervisor | Tong ZHANG (Supervisor) |
Interpretable trend-seasonality pattern of transformer in time series forecasting
HUANG, Y. (Author). 2021
Student thesis: Master's thesis