BaseAttentive

Kernel for probabilistic forecasting with attentive sequence models.

A modular encoder-decoder architecture for sequence-to-sequence time series forecasting with layered attention mechanisms.

Overview

BaseAttentive is a modular encoder-decoder architecture designed to process three distinct types of inputs:

  • Static features — constant across time (e.g., geographical coordinates, site properties)
  • Dynamic past features — historical time series (e.g., sensor readings, observations)
  • Known future features — forecast-period exogenous variables (e.g., weather forecasts)

It combines these inputs using a configurable attention stack and can serve as a building block for models such as HALNet and PIHALNet.

Key Features

Architecture options

  • Hybrid mode: Multi-scale LSTM + Attention (objective="hybrid")
  • Transformer mode: Pure self-attention (objective="transformer")
  • Operational shortcuts: TFT-like (mode="tft"), PIHALNet-like (mode="pihal")
  • Declarative attention stack via attention_levels

Core components

  • Variable Selection Networks (VSN) for learnable feature weighting
  • Multi-scale LSTM for hierarchical temporal patterns (scales, multi_scale_agg)
  • Cross, hierarchical, and memory-augmented attention
  • Transformer encoder/decoder blocks
  • Quantile and probabilistic forecast heads
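Quantile heads are typically trained with a pinball (quantile) loss; BaseAttentive's exact loss implementation is not shown here, but a minimal NumPy sketch of the standard formulation looks like this:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Standard pinball loss for a single quantile q.

    Under-prediction is penalized by q and over-prediction by (1 - q),
    so minimizing it recovers the q-th conditional quantile.
    """
    err = y_true - y_pred
    return np.mean(np.maximum(q * err, (q - 1) * err))

# Sanity check: for q = 0.5 the pinball loss is half the MAE.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 3.5])
assert np.isclose(pinball_loss(y_true, y_pred, 0.5),
                  0.5 * np.mean(np.abs(y_true - y_pred)))
```

Averaging this loss over the configured quantiles (e.g., 0.1, 0.5, 0.9) gives the usual multi-quantile training objective.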

V2 system

  • BaseAttentiveSpec / BaseAttentiveComponentSpec for backend-neutral config
  • ComponentRegistry and ModelRegistry for pluggable components
  • BaseAttentiveV2Assembly resolver/assembler pattern
  • Multi-backend: TensorFlow (stable), JAX, PyTorch (experimental)
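The README does not show the registry API, but a ComponentRegistry in this style is usually a name-to-class mapping with a decorator for registration. A hypothetical, backend-neutral sketch (the names below are illustrative, not the actual base_attentive API):

```python
class ComponentRegistry:
    """Minimal name -> component-class registry (illustrative only)."""

    def __init__(self):
        self._components = {}

    def register(self, name):
        # Decorator that records a class under the given name.
        def decorator(cls):
            self._components[name] = cls
            return cls
        return decorator

    def resolve(self, name, **kwargs):
        # Look up a registered class and instantiate it.
        try:
            cls = self._components[name]
        except KeyError:
            raise KeyError(f"Unknown component: {name!r}") from None
        return cls(**kwargs)

registry = ComponentRegistry()

@registry.register("dense_head")
class DenseHead:
    def __init__(self, units):
        self.units = units

head = registry.resolve("dense_head", units=32)
```

A resolver/assembler such as BaseAttentiveV2Assembly would then walk a spec and call `resolve` for each named component, keeping the spec itself backend-neutral.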

Runtime support

  • Keras 3 multi-backend implementation
  • make_fast_predict_fn for traced TF inference
  • Input validation utilities

Installation

pip install base-attentive

Backend extras

pip install "base-attentive[tensorflow]"   # TensorFlow backend (stable)
pip install "base-attentive[jax]"          # JAX backend (experimental)
pip install "base-attentive[torch]"        # PyTorch backend (experimental)
pip install "base-attentive[all-backends]" # All backends
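With Keras 3, the active backend is selected via the KERAS_BACKEND environment variable, which must be set before keras (and therefore base_attentive) is first imported:

```python
import os

# Choose the backend before importing keras / base_attentive.
# Valid values in Keras 3: "tensorflow", "jax", "torch".
os.environ["KERAS_BACKEND"] = "tensorflow"

# import keras           # picks up the backend set above
# import base_attentive
```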

From source

git clone https://github.com/earthai-tech/base-attentive.git
cd base-attentive
pip install -e ".[dev,tensorflow]"

Development Setup

If you use make (Linux, macOS, WSL, or Git Bash on Windows), the repository includes a Makefile with common development commands:

make install-tensorflow   # editable install with dev + TensorFlow extras
make test-fast            # quick local pytest pass
make lint                 # Ruff lint + format check
make format               # apply Ruff fixes and formatting
make build                # build wheel and sdist

Run make help to see the full command list.

Quick Start

import numpy as np
from base_attentive import BaseAttentive

# Create a model
model = BaseAttentive(
    static_input_dim=4,           # 4 static features
    dynamic_input_dim=8,          # 8 dynamic features in history
    future_input_dim=6,           # 6 known future features
    output_dim=2,                 # 2 target variables
    forecast_horizon=24,          # 24-step ahead forecast
    quantiles=[0.1, 0.5, 0.9],    # Uncertainty quantiles
    embed_dim=32,
    num_heads=8,
    dropout_rate=0.15,
)

# Prepare inputs
BATCH_SIZE = 32
x_static  = np.random.randn(BATCH_SIZE, 4).astype("float32")
x_dynamic = np.random.randn(BATCH_SIZE, 100, 8).astype("float32")  # 100 history steps
x_future  = np.random.randn(BATCH_SIZE, 24, 6).astype("float32")   # 24 forecast steps

# Make predictions
predictions = model([x_static, x_dynamic, x_future])
print(predictions.shape)  # (32, 24, 3, 2) — [batch, horizon, quantiles, outputs]
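Given the [batch, horizon, quantiles, outputs] layout above, individual quantile tracks can be sliced out along axis 2. A sketch using a random stand-in array of the same shape (the real model output would drop in directly):

```python
import numpy as np

# Stand-in for model output: (batch, horizon, quantiles, outputs)
predictions = np.random.randn(32, 24, 3, 2).astype("float32")

lower  = predictions[:, :, 0, :]  # 0.1 quantile
median = predictions[:, :, 1, :]  # 0.5 quantile (point forecast)
upper  = predictions[:, :, 2, :]  # 0.9 quantile

print(median.shape)  # (32, 24, 2)
```

The lower/upper pair gives a per-step 80% prediction interval for each of the two targets.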

Architecture Configuration

Override defaults via architecture_config:

from base_attentive import BaseAttentive

model = BaseAttentive(
    static_input_dim=4,
    dynamic_input_dim=8,
    future_input_dim=6,
    output_dim=2,
    forecast_horizon=24,
    mode="tft",                          # TFT-like shortcut
    attention_levels=["cross", "hierarchical"],
    scales=[1, 2, 4],                    # Multi-scale LSTM strides
    multi_scale_agg="average",
    architecture_config={
        "encoder_type": "transformer",   # Pure attention encoder
        "feature_processing": "vsn",     # Variable selection networks
    },
)

Available architecture_config keys:

  • encoder_type: 'hybrid' (LSTM+Attention) or 'transformer' (pure attention)
  • feature_processing: 'vsn' (learnable selection) or 'dense' (standard layers)
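The scales/multi_scale_agg pair controls how outputs from LSTMs run at different temporal strides are combined. base_attentive's internals are not shown here, but the averaging idea can be sketched in plain NumPy, with strided subsampling standing in for the per-scale LSTMs:

```python
import numpy as np

def multi_scale_average(x, scales):
    """Subsample a (time, features) series at each stride and average
    a per-scale summary (illustrative stand-in for per-scale LSTM states)."""
    summaries = []
    for s in scales:
        subsampled = x[::s]            # stride-s view of the series
        summaries.append(subsampled.mean(axis=0))
    return np.mean(summaries, axis=0)  # multi_scale_agg="average"

x = np.arange(12, dtype="float32").reshape(12, 1)
agg = multi_scale_average(x, scales=[1, 2, 4])
print(agg.shape)  # (1,)
```

Coarser strides summarize slower dynamics, and the aggregation collapses them into one representation per feature.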

Documentation

Full documentation: https://base-attentive.readthedocs.io

License

This project is licensed under the Apache License 2.0 — see LICENSE for details.

Citation

@software{baseattentive2026,
  author  = {Kouadio, L.},
  title   = {BaseAttentive: Modular Multi-Backend Encoder-Decoder Architecture for Probabilistic Time Series Forecasting},
  year    = {2026},
  version = {2.0.0},
  url     = {https://github.com/earthai-tech/base-attentive}
}

Contributing

Contributions are welcome! Please open an issue or submit a pull request. See the Contributing Guide for details.

Acknowledgments

  • Built on Keras 3 with TensorFlow-first support and experimental JAX/PyTorch paths
  • Inspired by recent time series forecasting research (TFT, PIHALNet, HALNet)
