Kernel for probabilistic forecasting with attentive sequence models.

BaseAttentive


A modular encoder-decoder architecture for sequence-to-sequence time series forecasting with layered attention mechanisms.

Overview

BaseAttentive is a modular encoder-decoder architecture designed to process three distinct types of inputs:

  • Static features — constant across time (e.g., geographical coordinates, site properties)
  • Dynamic past features — historical time series (e.g., sensor readings, observations)
  • Known future features — forecast-period exogenous variables (e.g., weather forecasts)

It combines these inputs using a configurable attention stack and can serve as a building block for models such as HALNet and PIHALNet.

Key Features

Architecture options

  • Hybrid mode: Multi-scale LSTM + Attention (objective="hybrid")
  • Transformer mode: Pure self-attention (objective="transformer")
  • Operational shortcuts: TFT-like (mode="tft"), PIHALNet-like (mode="pihal")
  • Declarative attention stack via attention_levels

Core components

  • Variable Selection Networks (VSN) for learnable feature weighting
  • Multi-scale LSTM for hierarchical temporal patterns (scales, multi_scale_agg)
  • Cross, hierarchical, and memory-augmented attention
  • Transformer encoder/decoder blocks
  • Quantile and probabilistic forecast heads
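
Quantile heads of the kind listed above are conventionally trained against a pinball (quantile) loss. The package's actual loss implementation is not shown here; the following is a minimal NumPy sketch of the standard formulation, using the same output layout as the quick-start example below:

```python
import numpy as np

def pinball_loss(y_true, y_pred, quantiles):
    """Mean pinball (quantile) loss.

    y_true: (batch, horizon, outputs)
    y_pred: (batch, horizon, n_quantiles, outputs)
    """
    q = np.asarray(quantiles).reshape(1, 1, -1, 1)
    err = y_true[:, :, None, :] - y_pred  # broadcast targets over the quantile axis
    # Under-prediction is penalized by q, over-prediction by (1 - q)
    return np.mean(np.maximum(q * err, (q - 1.0) * err))

y_true = np.zeros((2, 4, 1))
y_pred = np.zeros((2, 4, 3, 1))
print(pinball_loss(y_true, y_pred, [0.1, 0.5, 0.9]))  # 0.0 for a perfect forecast
```

At q=0.5 this reduces to half the mean absolute error, which is why the 0.5 quantile doubles as a point forecast.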

V2 system

  • BaseAttentiveSpec / BaseAttentiveComponentSpec for backend-neutral config
  • ComponentRegistry and ModelRegistry for pluggable components
  • BaseAttentiveV2Assembly resolver/assembler pattern
  • Multi-backend: TensorFlow (stable); JAX and PyTorch (experimental)

Runtime support

  • Keras 3 multi-backend implementation
  • make_fast_predict_fn for traced TF inference
  • Input validation utilities

Installation

pip install base-attentive

Backend extras

pip install "base-attentive[tensorflow]"   # TensorFlow backend (stable)
pip install "base-attentive[jax]"          # JAX backend (experimental)
pip install "base-attentive[torch]"        # PyTorch backend (experimental)
pip install "base-attentive[all-backends]" # All backends

From source

git clone https://github.com/earthai-tech/base-attentive.git
cd base-attentive
pip install -e ".[dev,tensorflow]"

Development Setup

If you use make (Linux, macOS, WSL, or Git Bash on Windows), the repository includes a Makefile with common development commands:

make install-tensorflow   # editable install with dev + TensorFlow extras
make test-fast            # quick local pytest pass
make lint                 # Ruff lint + format check
make format               # apply Ruff fixes and formatting
make build                # build wheel and sdist

Run make help to see the full command list.

Quick Start

import numpy as np
from base_attentive import BaseAttentive

# Create a model
model = BaseAttentive(
    static_input_dim=4,           # 4 static features
    dynamic_input_dim=8,          # 8 dynamic features in history
    future_input_dim=6,           # 6 known future features
    output_dim=2,                 # 2 target variables
    forecast_horizon=24,          # 24-step ahead forecast
    quantiles=[0.1, 0.5, 0.9],   # Uncertainty quantiles
    embed_dim=32,
    num_heads=8,
    dropout_rate=0.15,
)

# Prepare inputs
BATCH_SIZE = 32
x_static  = np.random.randn(BATCH_SIZE, 4).astype("float32")
x_dynamic = np.random.randn(BATCH_SIZE, 100, 8).astype("float32")  # 100 history steps
x_future  = np.random.randn(BATCH_SIZE, 24, 6).astype("float32")   # 24 forecast steps

# Make predictions
predictions = model([x_static, x_dynamic, x_future])
print(predictions.shape)  # (32, 24, 3, 2) — [batch, horizon, quantiles, outputs]
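
Given the (batch, horizon, quantiles, outputs) layout above, the point forecast and a prediction interval fall out of simple slices along the quantile axis. A sketch on a dummy array of the same shape (plain NumPy, not library code):

```python
import numpy as np

# Dummy predictions shaped like the model output above:
# (batch=32, horizon=24, quantiles=[0.1, 0.5, 0.9], outputs=2)
preds = np.random.randn(32, 24, 3, 2).astype("float32")

median   = preds[:, :, 1, :]  # 0.5 quantile -> point forecast
lower_80 = preds[:, :, 0, :]  # 0.1 quantile -> lower bound of 80% interval
upper_80 = preds[:, :, 2, :]  # 0.9 quantile -> upper bound of 80% interval

print(median.shape)  # (32, 24, 2)
```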

Architecture Configuration

Override defaults via architecture_config:

from base_attentive import BaseAttentive

model = BaseAttentive(
    static_input_dim=4,
    dynamic_input_dim=8,
    future_input_dim=6,
    output_dim=2,
    forecast_horizon=24,
    mode="tft",                          # TFT-like shortcut
    attention_levels=["cross", "hierarchical"],
    scales=[1, 2, 4],                    # Multi-scale LSTM strides
    multi_scale_agg="average",
    architecture_config={
        "encoder_type": "transformer",   # Pure attention encoder
        "feature_processing": "vsn",     # Variable selection networks
    },
)

Available architecture_config keys:

  • encoder_type: 'hybrid' (LSTM+Attention) or 'transformer' (pure attention)
  • feature_processing: 'vsn' (learnable selection) or 'dense' (standard layers)
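
Combining the two keys above, for example, a pure-attention encoder with standard dense feature processing would be requested with a config fragment like:

```python
architecture_config = {
    "encoder_type": "transformer",  # pure attention instead of LSTM+Attention
    "feature_processing": "dense",  # standard layers instead of VSN
}
```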

Documentation

Full documentation: https://base-attentive.readthedocs.io

License

This project is licensed under the Apache License 2.0 — see LICENSE for details.

Citation

@software{baseattentive2026,
  author  = {Kouadio, L.},
  title   = {BaseAttentive: Modular Multi-Backend Encoder-Decoder Architecture for Probabilistic Time Series Forecasting},
  year    = {2026},
  version = {2.0.0rc1},
  url     = {https://github.com/earthai-tech/base-attentive}
}

Contributing

Contributions are welcome! Please open an issue or submit a pull request. See the Contributing Guide for details.

Acknowledgments

  • Built on Keras 3 with TensorFlow-first support and experimental JAX/PyTorch paths
  • Inspired by recent time series forecasting research (TFT, PIHALNet, HALNet)
