PatchFM: A Foundation Model for Univariate Time Series Forecasting

A tutorial on how to build a Foundation Model for Univariate Time Series Forecasting

Hugging Face Model Card

A transformer-based forecasting model for univariate time series. The approach mirrors Large Language Model (LLM) practice (next-token → next-patch prediction) while remaining lightweight and practical compared to a classic LLM.

Highlights

  • Next-patch prediction objective (autoregressive, causal)
  • Patch-based representation of time series (tokens ↔ patches)
  • Causal masking self-attention with RoPE (relative positions)
  • RevIN (Reversible Instance Normalization) with causal statistics
  • SwiGLU feed-forward networks
  • Multi-quantile outputs (median + uncertainty bands)
  • Efficient rollout with KV caching

Installation

pip install patchfm

Quick Start

import torch
from patchfm import PatchFMConfig, Forecaster

# --- Instantiate model ---
config = PatchFMConfig()
model = Forecaster(config)

# --- Inference ---
forecast_horizon = 64
seq = torch.randn(1, 1024)  # (batch, time)
pred_median, pred_quantiles = model(seq, forecast_horizon=forecast_horizon, quantiles=[0.1, 0.5, 0.9])
# pred_median:    (batch, forecast_horizon)
# pred_quantiles: (batch, forecast_horizon, num_quantiles)

We provide an extended quick-start example in notebooks/tutorial.ipynb. If you don't have suitable hardware, you can also run it in Google Colab:

Open Quick Start In Colab

Method (TL;DR)

  • Patching: Split a context signal of length $w$ into $P_{num} = w / P_{len}$ patches of length $P_{len}$.
  • RevIN: Normalize patches using causal running mean/variance over past patches, and denormalize outputs to the original scale (both steps are sketched after this list).
  • Architecture: Input residual MLP → stacked Transformer blocks (MHA + SwiGLU FFN, pre-norm, residual) → $|\mathcal{Q}|$ output heads mapping back to patch space.
  • Positional encoding: Rotary Position Embeddings (RoPE) applied to queries/keys.
  • Training: Multi-quantile (pinball) loss across positions, elements, and quantiles $\mathcal{Q}$.
  • Inference: Predict next patch; roll out autoregressively with KV caching for long horizons.
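
A minimal sketch of the first two steps, patching plus causal RevIN (illustrative only, not the package's internal code; the function names and eps constant are assumptions, and whether the current patch is included in the running statistics is an implementation detail):

import torch

def patchify(seq, patch_len=32):
    # (batch, time) -> (batch, num_patches, patch_len); assumes time % patch_len == 0
    batch, time = seq.shape
    return seq.reshape(batch, time // patch_len, patch_len)

def causal_revin(patches, eps=1e-5):
    # Normalize each patch with the running mean/variance of all elements up to
    # and including that patch, so no future information leaks into the past.
    b, n, p = patches.shape
    counts = torch.arange(1, n + 1, device=patches.device).view(1, n, 1) * p
    csum = patches.sum(-1, keepdim=True).cumsum(dim=1)
    csum_sq = (patches ** 2).sum(-1, keepdim=True).cumsum(dim=1)
    mean = csum / counts
    var = csum_sq / counts - mean ** 2
    return (patches - mean) / torch.sqrt(var + eps), mean, var

patches = patchify(torch.randn(1, 1024))   # (1, 32, 32): 32 patches of length 32
normed, mean, var = causal_revin(patches)  # same shape, causal statistics

Denormalization applies the inverse transform, using the statistics of the last context patch to map predictions back to the original scale.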

Problem Formulation

Given context patches $x_{p_1}, \ldots, x_{p_n}$, predict the next patch $x_{p_{i+1}}$ at each position $i$ using only past patches (causality). The model outputs quantiles $\{\hat{x}_{p_{i+1}}^{(q)} : q \in \mathcal{Q}\}$, with the median ($q = 0.5$) as the point forecast.

Loss: Multi-Quantile (Pinball)

For the residual $u = x - \hat{x}^{(q)}$:

$$\rho_q(u) = \begin{cases} q\,u, & u \ge 0, \\ (q-1)\,u, & u < 0. \end{cases}$$

The loss is aggregated over positions, patch elements, and quantiles $\mathcal{Q}$.
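
This translates directly into PyTorch; below is a sketch (the tensor layout and function name are assumptions, not the package's training code):

import torch

def pinball_loss(target, pred, quantiles):
    # target: (batch, positions, patch_len)
    # pred:   (batch, positions, patch_len, num_quantiles)
    q = torch.tensor(quantiles, device=pred.device)  # (num_quantiles,)
    u = target.unsqueeze(-1) - pred                  # residual u = x - x_hat
    # rho_q(u) = q*u for u >= 0 and (q-1)*u for u < 0; the elementwise
    # maximum selects the correct branch in both cases.
    return torch.maximum(q * u, (q - 1) * u).mean()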

Architecture

  • Input MLP: $\mathbb{R}^{P_{len}} \to \mathbb{R}^{dim}$ residual 2-layer MLP (ReLU)
  • Multi-Head Attention: causal mask, RoPE; queries/keys/values per head
  • FFN: SwiGLU (SiLU-gated), pre-norm + residual (sketched after this list)
  • Output heads: $|\mathcal{Q}|$ linear maps $\mathbb{R}^{dim} \to \mathbb{R}^{P_{len}}$ (one per quantile)
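
A minimal sketch of the SwiGLU feed-forward block referenced in the list above (the hidden width, bias settings, and class name are assumptions):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFFN(nn.Module):
    def __init__(self, dim, hidden):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)  # gating branch
        self.up = nn.Linear(dim, hidden, bias=False)    # value branch
        self.down = nn.Linear(hidden, dim, bias=False)  # back to model dim

    def forward(self, x):
        # SwiGLU: SiLU(x W_gate) elementwise-gates x W_up before projecting down.
        return self.down(F.silu(self.gate(x)) * self.up(x))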

Model Details

  • Patch size: 32
  • Max context: 32 patches (1024 steps)
  • Forecast horizon: 32 steps per forward pass
  • Quantiles $\mathcal{Q}$: {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}
  • Layers: 6
  • Attention heads: 64 (head dim 32)
  • Model dim: 2048
  • Parameters: ~300M

Inference

  • Single step: predict next patch ($P_{len}$ values)
  • Long-horizon: append the prediction to the context and repeat (optionally dropping the oldest patch to keep the window fixed)
  • KV caching: reuse cached keys/values for past patches; compute new Q/K/V only for the appended patch (see the rollout sketch below)
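
The Quick Start call already performs this rollout internally (its forecast_horizon=64 spans two 32-step patches). The loop below is only an illustrative sketch of the procedure, assuming the 1024-step maximum context from the model details:

import torch
from patchfm import PatchFMConfig, Forecaster

model = Forecaster(PatchFMConfig())
context = torch.randn(1, 1024)  # (batch, time), max context = 1024 steps
horizon, step = 256, 32         # the model emits one 32-step patch per pass

chunks = []
for _ in range(horizon // step):
    median, _ = model(context, forecast_horizon=step, quantiles=[0.5])
    chunks.append(median)
    # Append the predicted patch and drop the oldest one to keep the window
    # fixed; with KV caching, only the appended patch needs fresh K/V.
    context = torch.cat([context, median], dim=-1)[:, -1024:]

forecast = torch.cat(chunks, dim=-1)  # (batch, horizon)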

Acknowledgements

We thank the authors of several open-source repositories for inspiration and code snippets.

Citation

If you use this work, please cite the paper ...
