In the evolving landscape of time series forecasting, TEMPO introduces a paradigm shift by bridging generative pre-trained transformers with the structural essence of temporal data. Presented at ICLR 2024, TEMPO marks a critical step toward realizing foundation models for time series, enabling zero-shot forecasting, interpretability, and multimodal integration.
Traditional time series models often struggle with generalization and lack flexibility across domains. In contrast, language models (e.g., GPT) have demonstrated remarkable adaptability through pre-training and prompt-based tuning. TEMPO asks: Can we bring this power to time series forecasting?
The answer is yes.
TEMPO reimagines the forecasting pipeline by combining:
- a generative pre-trained transformer (GPT-style) backbone,
- STL decomposition of inputs into trend, seasonality, and residual components, and
- component-specific soft prompts that steer the backbone.
This fusion allows the model not only to predict future values but also to interpret the drivers behind them.
TEMPO is built upon two foundational principles:
Decomposition-Informed Representation
Using STL (Seasonal-Trend decomposition using Loess), time series inputs are split into trend, seasonality, and residual components. This disentangles overlapping temporal patterns and simplifies learning for the transformer architecture, which otherwise struggles to separate them.
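The decomposition step can be illustrated with a simplified moving-average version. This is a stand-in for full STL, which uses iterated Loess smoothing (e.g. `statsmodels.tsa.seasonal.STL`); the `period` and sample series below are illustrative, not TEMPO's actual preprocessing.

```python
import numpy as np

def decompose(series: np.ndarray, period: int):
    """Split a series into trend, seasonality, and residual.

    Simplified moving-average decomposition standing in for STL.
    """
    # Trend: moving average over one full period.
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    # Seasonality: mean detrended value at each phase of the period,
    # tiled back out to the full series length.
    detrended = series - trend
    phase_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonality = np.tile(phase_means, len(series) // period + 1)[: len(series)]
    # Residual: whatever the other two components do not explain.
    residual = series - trend - seasonality
    return trend, seasonality, residual

# Example: 120 steps of linear trend plus a period-12 seasonal cycle.
t = np.arange(120)
series = 0.05 * t + np.sin(2 * np.pi * t / 12)
trend, seasonality, residual = decompose(series, period=12)
```

By construction the three components sum back to the original series, which is exactly the disentanglement property the transformer benefits from.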
Prompt-Based Forecasting
Instead of fine-tuning the entire model, TEMPO introduces component-specific soft prompts that guide the model using encoded temporal knowledge. Each input is prepended with learned vectors that capture semantic priors such as "predict the future trend given ...".
Input = Prompt ⊕ Trend ⊕ Seasonality ⊕ Residual
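The input construction above can be sketched in plain NumPy. The shapes (`d_model`, `n_patches`, `prompt_len`) and the random placeholder values are illustrative assumptions; in TEMPO the prompts are trainable parameters and the component tokens come from a patch-embedding layer.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16      # embedding width (illustrative)
n_patches = 8     # embedded patches per component (illustrative)
prompt_len = 4    # soft-prompt tokens per component (illustrative)

# Embedded component tokens, as they would leave a patch-embedding layer.
trend = rng.normal(size=(n_patches, d_model))
seasonality = rng.normal(size=(n_patches, d_model))
residual = rng.normal(size=(n_patches, d_model))

def with_prompt(tokens: np.ndarray) -> np.ndarray:
    # Prepend a component-specific soft prompt; trained vectors in TEMPO,
    # random placeholders here.
    prompt = rng.normal(size=(prompt_len, d_model))
    return np.concatenate([prompt, tokens], axis=0)

# Input = Prompt (+) Trend (+) Seasonality (+) Residual, concatenated
# along the sequence axis before entering the transformer backbone.
model_input = np.concatenate(
    [with_prompt(trend), with_prompt(seasonality), with_prompt(residual)],
    axis=0,
)
```

Only the prompt vectors (and lightweight adapters) need updating per task, which is what keeps adaptation cheap relative to full fine-tuning.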
This modular structure improves adaptability across datasets, even in zero-shot scenarios.