
PatchFM: A Foundation Model for Univariate Time Series Forecasting

Project description

A tutorial on how to build a Foundation Model for Univariate Time Series Forecasting

Hugging Face Model Card

A transformer-based forecasting model for univariate time series. The approach mirrors Large Language Model (LLM) practice (next-token → next-patch prediction) while remaining lightweight and practical compared to a classic LLM.

Highlights

  • Next-patch prediction objective (autoregressive, causal)
  • Patch-based representation of time series (tokens ↔ patches)
  • Causal masking self-attention with RoPE (relative positions)
  • RevIN (Reversible Instance Normalization) with causal statistics
  • SwiGLU feed-forward networks
  • Multi-quantile outputs (median + uncertainty bands)
  • Efficient rollout with KV caching

Installation

pip install patchfm

Quick Start

import torch
from patchfm.configs import PatchFMConfig
from patchfm.model import Forecaster

# --- Instantiate model ---
config = PatchFMConfig()
model = Forecaster(config)

# --- Inference ---
forecast_horizon = 64
seq = torch.randn(1, 1024)  # (batch, time)
pred_median, pred_quantiles = model(seq, forecast_horizon=forecast_horizon, quantiles=[0.1, 0.5, 0.9])
# pred_median: (batch, time); pred_quantiles: (batch, time, quantiles)

We provide an extended quick start example in notebooks/tutorial.ipynb. If you don't have suitable hardware, you can also run the extended quick start example in Google Colab:

Open Quick Start In Colab

Method (TL;DR)

  • Patching: Split a context signal of length $w$ into $P_{num} = w / P_{len}$ patches of length $P_{len}$.
  • RevIN: Normalize patches using causal running mean/variance over past patches, and denormalize outputs to the original scale (both steps are sketched after this list).
  • Architecture: Input residual MLP → stacked Transformer blocks (MHA + SwiGLU FFN, pre-norm, residual) → $|\mathcal{Q}|$ output heads mapping back to patch space.
  • Positional encoding: Rotary Position Embeddings (RoPE) applied to queries/keys.
  • Training: Multi-quantile (pinball) loss across positions, elements, and quantiles $\mathcal{Q}$.
  • Inference: Predict next patch; roll out autoregressively with KV caching for long horizons.
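
As a minimal sketch of the first two steps (helper names here are ours, not the package API; each patch is normalized with running statistics through its own end, a detail the reference implementation may handle slightly differently):

import torch

def patchify(seq, patch_len):
    # Split (batch, w) into (batch, w // patch_len, patch_len) patches.
    batch, w = seq.shape
    assert w % patch_len == 0, "context length must be a multiple of the patch length"
    return seq.reshape(batch, w // patch_len, patch_len)

def causal_revin(patches, eps=1e-5):
    # Normalize each patch with causal running mean/variance (no future leakage).
    batch, n, p = patches.shape
    flat = patches.reshape(batch, n * p)
    count = torch.arange(1, n * p + 1, device=flat.device, dtype=flat.dtype)
    mean = (flat.cumsum(1) / count).reshape(batch, n, p)[:, :, -1:]
    var = ((flat ** 2).cumsum(1) / count).reshape(batch, n, p)[:, :, -1:] - mean ** 2
    return (patches - mean) / torch.sqrt(var + eps), (mean, var)  # stats kept for denormalization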

Problem Formulation

Given context patches $x_{p_1}, \ldots, x_{p_n}$, predict the next patch $x_{p_{i+1}}$ for each position $i$ using only past patches (causality). The model outputs quantiles $\{\hat{x}_{p_{i+1}}^{(q)} : q \in \mathcal{Q}\}$, with the median ($q = 0.5$) as the point forecast.

Loss: Multi-Quantile (Pinball)

For residual $u = x - \hat{x}^{(q)}$:

$$\rho_q(u) = \begin{cases} q\,u, & u \ge 0, \\ (q-1)\,u, & u < 0. \end{cases}$$

Aggregate over positions, patch elements, and quantiles $\mathcal{Q}$.
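
This translates almost line for line into PyTorch; a sketch, with the tensor layout (batch, positions, patch elements, quantiles) as our assumption:

import torch

def pinball_loss(target, pred, quantiles):
    # target: (batch, positions, patch_len); pred: (batch, positions, patch_len, |Q|)
    q = torch.tensor(quantiles, device=pred.device).view(1, 1, 1, -1)
    u = target.unsqueeze(-1) - pred                 # residual u = x - x_hat^(q)
    loss = torch.where(u >= 0, q * u, (q - 1) * u)  # pinball penalty per element
    return loss.mean()                              # aggregate over positions, elements, quantiles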

Architecture

  • Input MLP: $\mathbb{R}^{P_{len}} \to \mathbb{R}^{dim}$ residual 2-layer MLP (ReLU)
  • Multi-Head Attention: causal mask, RoPE; queries/keys/values per head
  • FFN: SwiGLU (SiLU-gated), pre-norm + residual (a sketch follows this list)
  • Output heads: |Q| linear maps $\mathbb{R}^{dim} \to \mathbb{R}^{P_{len}}$ (one per quantile)
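
For concreteness, the SiLU-gated feed-forward block can be written as follows (a generic SwiGLU sketch, not the repository's exact module; the hidden width is a free choice):

import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    # SiLU-gated FFN: (SiLU(x W_gate) * (x W_up)) W_down
    def __init__(self, dim, hidden):
        super().__init__()
        self.w_gate = nn.Linear(dim, hidden, bias=False)
        self.w_up = nn.Linear(dim, hidden, bias=False)
        self.w_down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))

Inside each Transformer block this sits behind a pre-norm and a residual connection, i.e. x = x + ffn(norm(x)).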

Model Details

  • Patch size: 32
  • Max context: 32 patches (1024 steps)
  • Forecast horizon: 32 steps per forward pass
  • Quantiles $\mathcal{Q}$: {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}
  • Layers: 6
  • Attention heads: 64 (head dim 32)
  • Model dim: 2048
  • Parameters: ~300M (a rough budget is worked out below)
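
These sizes allow a quick sanity check on the parameter count. A back-of-envelope budget, assuming an FFN hidden width of about 8·dim/3 (a common SwiGLU choice; the actual hidden size is not stated here):

dim, layers, p_len, n_q = 2048, 6, 32, 9
attn = 4 * dim * dim                      # Q, K, V, O projections per layer
ffn_hidden = 8 * dim // 3                 # assumed SwiGLU hidden width
ffn = 3 * dim * ffn_hidden                # gate, up, down projections per layer
heads = n_q * dim * p_len                 # |Q| linear output heads
print(f"~{(layers * (attn + ffn) + heads) / 1e6:.0f}M")  # ≈ 303M, in line with ~300M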

Inference

  • Single step: predict next patch ($P_{len}$ values)
  • Long-horizon: append prediction to context and repeat (optionally drop oldest patch to keep window fixed)
  • KV caching: reuse cached keys/values for past patches; compute new Q/K/V only for the appended patch (see the rollout sketch below)
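
Put together, the long-horizon loop looks roughly like this (illustrative only: model.step is a hypothetical single-patch predictor, while the real package runs this loop internally via Forecaster's forecast_horizon argument):

import torch

@torch.no_grad()
def rollout(model, context, n_patches, patch_len=32):
    # context: (batch, w). Predict a patch, append it, repeat; keep the window fixed.
    preds, cache = [], None
    for _ in range(n_patches):
        # hypothetical API: returns the (batch, patch_len) median patch and an updated KV cache
        next_patch, cache = model.step(context, cache=cache)
        preds.append(next_patch)
        context = torch.cat([context, next_patch], dim=1)[:, patch_len:]  # drop oldest patch
    return torch.cat(preds, dim=1)  # (batch, n_patches * patch_len)

Note that with a fixed window the KV cache also has to evict entries for the dropped patch; we gloss over that bookkeeping here.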

Acknowledgements

We thank the authors of the following repositories for inspiration and code snippets:

Citation

If you use this work, please cite the paper ...

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

patchfm-1.1.1.tar.gz (10.9 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

patchfm-1.1.1-py3-none-any.whl (9.6 kB)

Uploaded Python 3

File details

Details for the file patchfm-1.1.1.tar.gz.

File metadata

  • Download URL: patchfm-1.1.1.tar.gz
  • Upload date:
  • Size: 10.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.10

File hashes

Hashes for patchfm-1.1.1.tar.gz

  • SHA256: cb764245453e696a65894d5fdd1fd5f495db7a66e24bc32aa38dfc93f0171052
  • MD5: 4997f9bac15d870490f5f4d2048b2069
  • BLAKE2b-256: 4fea4e4de77b3b8cb639f33f559bdb5323da0b4b792597ba33c8f18eb3af1e3f


File details

Details for the file patchfm-1.1.1-py3-none-any.whl.

File metadata

  • Download URL: patchfm-1.1.1-py3-none-any.whl
  • Upload date:
  • Size: 9.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.10

File hashes

Hashes for patchfm-1.1.1-py3-none-any.whl

  • SHA256: 30d62cc9ef37bb104ad27c77dcc6bfe73dbef0aabb263217aaf368e936a8bb41
  • MD5: 88f0e5e2b328a1d756460c9cd43c3488
  • BLAKE2b-256: 7e6bc74cbf8a351b4fd49a9acc468e86e1bdb7a5efaea2072a574003559dc81f

