BaseAttentive

Kernel for probabilistic forecasting with attentive sequence models.
A modular encoder-decoder architecture for sequence-to-sequence time series forecasting with layered attention mechanisms.

Overview

BaseAttentive is a modular encoder-decoder architecture designed to process three distinct types of inputs:

  • Static features — constant across time (e.g., geographical coordinates, site properties)
  • Dynamic past features — historical time series (e.g., sensor readings, observations)
  • Known future features — forecast-period exogenous variables (e.g., weather forecasts)

It combines these inputs using a configurable attention stack and can serve as a building block for models such as HALNet and PIHALNet.

Key Features

Architecture options

  • Hybrid mode: Multi-scale LSTM + Attention (objective="hybrid")
  • Transformer mode: Pure self-attention (objective="transformer")
  • Operational shortcuts: TFT-like (mode="tft"), PIHALNet-like (mode="pihal")
  • Declarative attention stack via attention_levels

Core components

  • Variable Selection Networks (VSN) for learnable feature weighting
  • Multi-scale LSTM for hierarchical temporal patterns (scales, multi_scale_agg)
  • Cross, hierarchical, and memory-augmented attention
  • Transformer encoder/decoder blocks
  • Quantile and probabilistic forecast heads
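Quantile heads like the one above are conventionally trained with a pinball (quantile) loss. A minimal NumPy sketch of that objective, written independently of the library (the function name and shapes here are illustrative, not part of the base-attentive API):

```python
import numpy as np

def pinball_loss(y_true, y_pred, quantiles):
    """Average pinball loss over a set of quantiles.

    y_true: (batch, horizon, outputs)
    y_pred: (batch, horizon, n_quantiles, outputs)
    """
    q = np.asarray(quantiles).reshape(1, 1, -1, 1)
    # err > 0 means the model under-predicted; the loss weights
    # under- and over-prediction asymmetrically per quantile.
    err = y_true[:, :, None, :] - y_pred
    return float(np.mean(np.maximum(q * err, (q - 1.0) * err)))

y_true = np.ones((2, 4, 1))
y_pred = np.zeros((2, 4, 3, 1))
loss = pinball_loss(y_true, y_pred, [0.1, 0.5, 0.9])
```

At q = 0.5 this reduces to half the mean absolute error, which is why the 0.5-quantile output doubles as a point forecast.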

V2 system

  • BaseAttentiveSpec / BaseAttentiveComponentSpec for backend-neutral config
  • ComponentRegistry and ModelRegistry for pluggable components
  • BaseAttentiveV2Assembly resolver/assembler pattern
  • Multi-backend: TensorFlow (stable), JAX, PyTorch (experimental)
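The registry pattern behind ComponentRegistry and ModelRegistry can be sketched in a few lines of plain Python. The class and names below are illustrative stand-ins, not the actual base-attentive API:

```python
# Minimal registry pattern: components register under a string key and
# are resolved by name at assembly time.
class Registry:
    def __init__(self):
        self._items = {}

    def register(self, name):
        def decorator(cls):
            self._items[name] = cls
            return cls
        return decorator

    def resolve(self, name):
        try:
            return self._items[name]
        except KeyError:
            raise KeyError(f"unknown component: {name!r}") from None

components = Registry()

@components.register("vsn")
class VariableSelection:  # stand-in for a real component class
    pass

cls = components.resolve("vsn")
```

Keeping the config (spec) separate from the registered implementations is what makes the spec backend-neutral: the same spec can resolve to TensorFlow, JAX, or PyTorch components.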

Runtime support

  • Keras 3 multi-backend implementation
  • make_fast_predict_fn for traced TF inference
  • Input validation utilities

Installation

pip install base-attentive

Backend extras

pip install "base-attentive[tensorflow]"   # TensorFlow backend (stable)
pip install "base-attentive[jax]"          # JAX backend (experimental)
pip install "base-attentive[torch]"        # PyTorch backend (experimental)
pip install "base-attentive[all-backends]" # All backends

From source

git clone https://github.com/earthai-tech/base-attentive.git
cd base-attentive
pip install -e ".[dev,tensorflow]"

Development Setup

If you use make (Linux, macOS, WSL, or Git Bash on Windows), the repository includes a Makefile with common development commands:

make install-tensorflow   # editable install with dev + TensorFlow extras
make test-fast            # quick local pytest pass
make lint                 # Ruff lint + format check
make format               # apply Ruff fixes and formatting
make build                # build wheel and sdist

Run make help to see the full command list.

Quick Start

import numpy as np
from base_attentive import BaseAttentive

# Create a model
model = BaseAttentive(
    static_input_dim=4,           # 4 static features
    dynamic_input_dim=8,          # 8 dynamic features in history
    future_input_dim=6,           # 6 known future features
    output_dim=2,                 # 2 target variables
    forecast_horizon=24,          # 24-step ahead forecast
    quantiles=[0.1, 0.5, 0.9],   # Uncertainty quantiles
    embed_dim=32,
    num_heads=8,
    dropout_rate=0.15,
)

# Prepare inputs
BATCH_SIZE = 32
x_static  = np.random.randn(BATCH_SIZE, 4).astype("float32")
x_dynamic = np.random.randn(BATCH_SIZE, 100, 8).astype("float32")  # 100 history steps
x_future  = np.random.randn(BATCH_SIZE, 24, 6).astype("float32")   # 24 forecast steps

# Make predictions
predictions = model([x_static, x_dynamic, x_future])
print(predictions.shape)  # (32, 24, 3, 2) — [batch, horizon, quantiles, outputs]
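Given that output layout, the point forecast and a prediction interval can be pulled out by indexing the quantile axis. The sketch below uses a simulated array with the documented shape, and assumes axis 2 follows the quantile order passed at construction ([0.1, 0.5, 0.9]):

```python
import numpy as np

# Simulated model output with the documented shape
# (batch, horizon, quantiles, outputs).
predictions = np.random.randn(32, 24, 3, 2).astype("float32")

median = predictions[:, :, 1, :]  # 0.5 quantile -> point forecast
lower = predictions[:, :, 0, :]   # 0.1 quantile
upper = predictions[:, :, 2, :]   # 0.9 quantile -> together an 80% interval

print(median.shape)  # (32, 24, 2)
```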

Architecture Configuration

Override defaults via architecture_config:

from base_attentive import BaseAttentive

model = BaseAttentive(
    static_input_dim=4,
    dynamic_input_dim=8,
    future_input_dim=6,
    output_dim=2,
    forecast_horizon=24,
    mode="tft",                          # TFT-like shortcut
    attention_levels=["cross", "hierarchical"],
    scales=[1, 2, 4],                    # Multi-scale LSTM strides
    multi_scale_agg="average",
    architecture_config={
        "encoder_type": "transformer",   # Pure attention encoder
        "feature_processing": "vsn",     # Variable selection networks
    },
)

Available architecture_config keys:

  • encoder_type: 'hybrid' (LSTM+Attention) or 'transformer' (pure attention)
  • feature_processing: 'vsn' (learnable selection) or 'dense' (standard layers)
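One way to see how these two keys constrain the config is a small validation helper. This function is not part of base-attentive; it just encodes the documented options:

```python
# Allowed values for the two documented architecture_config keys.
_ALLOWED = {
    "encoder_type": {"hybrid", "transformer"},
    "feature_processing": {"vsn", "dense"},
}

def validate_architecture_config(config):
    """Reject unknown keys or values outside the documented options."""
    for key, value in config.items():
        if key not in _ALLOWED:
            raise ValueError(f"unknown architecture_config key: {key!r}")
        if value not in _ALLOWED[key]:
            raise ValueError(f"invalid value {value!r} for {key!r}")
    return config

ok = validate_architecture_config(
    {"encoder_type": "transformer", "feature_processing": "vsn"}
)
```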

Documentation

Full documentation: https://base-attentive.readthedocs.io

License

This project is licensed under the Apache License 2.0 — see LICENSE for details.

Citation

@software{baseattentive2026,
  author  = {Kouadio, L.},
  title   = {BaseAttentive: Modular Multi-Backend Encoder-Decoder Architecture for Probabilistic Time Series Forecasting},
  year    = {2026},
  version = {2.0.1},
  url     = {https://github.com/earthai-tech/base-attentive}
}

Contributing

Contributions are welcome! Please open an issue or submit a pull request. See the Contributing Guide for details.

Acknowledgments

  • Built on Keras 3 with TensorFlow-first support and experimental JAX/PyTorch paths
  • Inspired by recent time series forecasting research (TFT, PIHALNet, HALNet)
