TEMPO: Prompting Time Series Forecasting with Foundation Models

In the evolving landscape of time series forecasting, TEMPO introduces a paradigm shift by bridging generative pre-trained transformers with the structural essence of temporal data. Presented at ICLR 2024, TEMPO marks a critical step toward realizing foundation models for time series, enabling zero-shot forecasting, interpretability, and multimodal integration.


🧠 Why TEMPO?

Traditional time series models often struggle with generalization and lack flexibility across domains. In contrast, language models (e.g., GPT) have demonstrated remarkable adaptability through pre-training and prompt-based tuning. TEMPO asks: Can we bring this power to time series forecasting?

The answer is yes.

TEMPO reimagines the forecasting pipeline by combining:

  - STL decomposition, which splits each series into trend, seasonality, and residual components
  - prompt-based tuning of a generative pre-trained transformer, guided by component-specific soft prompts

This fusion allows the model not only to predict future values but also to understand and interpret the drivers behind them.


🧩 Method Overview

TEMPO is built upon two foundational principles:

  1. Decomposition-Informed Representation

    Using STL decomposition, time series inputs are split into trend, seasonality, and residual components. This enhances signal disentanglement and simplifies learning in the transformer architecture, which is otherwise challenged by overlapping temporal patterns.

  2. Prompt-Based Forecasting

    Instead of fine-tuning the entire model, TEMPO introduces component-specific soft prompts that guide the model using encoded temporal knowledge. Each input is prepended with learned vectors that capture semantic priors like "predict the future trend given...".

Input = Prompt ⊕ Trend ⊕ Seasonality ⊕ Residual
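The decomposition in step 1 can be sketched in plain numpy. This is a deliberately simplified additive decomposition (centered moving-average trend, period-wise mean seasonality) standing in for the full STL procedure; the function name and toy series are illustrative, not TEMPO's actual preprocessing code.

```python
import numpy as np

def decompose(series: np.ndarray, period: int) -> dict:
    """Simplified additive decomposition into trend, seasonality, and residual.

    A stand-in for STL: trend is a centered moving average, seasonality is
    the period-wise mean of the detrended series, residual is what remains.
    """
    # Trend: moving average over one full period.
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")

    # Seasonality: average the detrended values at each phase of the period.
    detrended = series - trend
    pattern = np.array([detrended[phase::period].mean() for phase in range(period)])
    seasonal = np.tile(pattern, len(series) // period + 1)[: len(series)]

    # Residual: whatever trend and seasonality do not explain.
    residual = series - trend - seasonal
    return {"trend": trend, "seasonal": seasonal, "residual": residual}

# Toy monthly-style series: linear trend + sine seasonality + noise.
t = np.arange(120)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) \
    + 0.1 * np.random.default_rng(0).normal(size=120)
parts = decompose(series, period=12)
print(np.allclose(parts["trend"] + parts["seasonal"] + parts["residual"], series))  # True
```

By construction the three components sum back to the original series, which is exactly the property the transformer exploits: each component is a cleaner, disentangled signal to model.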

This modular structure improves adaptability across datasets, even in zero-shot scenarios.
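The Prompt ⊕ Trend ⊕ Seasonality ⊕ Residual construction above might look like the following numpy sketch. The dimensions, the per-component random prompt pools (placeholders for trained parameters), and the patch embeddings are all illustrative assumptions, not TEMPO's actual tensors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d_model embedding width, n_prompt soft-prompt vectors
# per component, n_patches embedded patches per component.
d_model, n_prompt, n_patches = 64, 4, 10
COMPONENTS = ("trend", "seasonal", "residual")

# Component-specific soft prompts: in TEMPO these are learned vectors;
# random placeholders here stand in for trained parameters.
prompts = {c: rng.normal(size=(n_prompt, d_model)) for c in COMPONENTS}

def build_input(embeddings: dict) -> np.ndarray:
    """Prepend each component's soft prompt to its patch embeddings and
    concatenate along the sequence axis: Prompt ⊕ component, per component."""
    rows = []
    for c in COMPONENTS:
        rows.append(prompts[c])       # learned prompt vectors for this component
        rows.append(embeddings[c])    # patch embeddings of the component signal
    return np.concatenate(rows, axis=0)

embeddings = {c: rng.normal(size=(n_patches, d_model)) for c in COMPONENTS}
x = build_input(embeddings)
print(x.shape)  # (3 * (n_prompt + n_patches), d_model) = (42, 64)
```

Only the small prompt pools need updating during adaptation; the backbone transformer consuming `x` can stay frozen, which is what makes the prompt-based setup cheap to transfer across datasets.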


🌍 Applications & Zero-Shot Transfer