
Automated time series forecasting with the Chronos-2 foundation model (zero-shot, no training required), per-series model selection, and interpretability.


AutoTSForecast

Automated Time Series Forecasting with Per-Series Model Selection


AutoTSForecast automatically finds the best forecasting model for each of your time series. No more guessing whether Prophet, ARIMA, XGBoost, or the Chronos-2 foundation model works best: the algorithm decides for you. New: zero-shot forecasting with Chronos-2 requires no training at all; just pass your data and get state-of-the-art predictions.

Key Features

| Feature | Description | Benefit |
|---|---|---|
| Chronos-2 Foundation Model (new) | Zero-shot forecasting with pre-trained models (9M to 710M params) | No training needed: just pass your data |
| Per-Series Model Selection | Automatically pick the best model for each series | Different series, different patterns → optimal accuracy |
| Per-Series Covariates (new) | Pass different features to different series | Products driven by different factors get custom features |
| Prediction Intervals (new) | Conformal prediction with coverage guarantees | Quantify uncertainty without distributional assumptions |
| Calendar Features (new) | Auto-extract day-of-week, month, holidays | Handle seasonality automatically |
| Hierarchical Reconciliation | Ensure forecasts add up (total = sum of parts) | Coherent forecasts across organizational levels |
| Parallel Processing (new) | Fit many series simultaneously | Scale to thousands of series |
| Interpretability | Sensitivity analysis & SHAP | Understand what drives your forecasts |

What's New in v0.4.0

  • Rewritten tutorial: examples/autotsforecast_tutorial.ipynb redesigned with a data-generating process (DGP) that guarantees measurable improvements for per-series covariates and hierarchical reconciliation
  • Portable notebook: added a pip install autotsforecast[ml] installation cell so the notebook runs anywhere without this repo
  • Docs overhaul: all documentation files updated with corrected model tables, covariate support flags, and Chronos-2 details
  • Bug fix: get_summary() / print_summary() now work correctly in per-series mode
  • Bug fix: BacktestValidator now clones the model per fold (no shared-state mutation)
  • Bug fix: VARForecaster raises a clear error when fewer than 2 series are provided
  • Internals: version sourced from package metadata (single source of truth)
  • CI/CD: GitHub Actions workflow runs the full test suite on every push/PR

What's New in v0.3.8+

  • Chronos-2 Foundation Model: zero-shot forecasting with state-of-the-art pre-trained models (no training needed)
  • Per-Series Covariates: pass different features to different series via X={series: df}
  • Prediction Intervals: conformal prediction for uncertainty quantification
  • Calendar Features: automatic time-based feature extraction with cyclical encoding
  • Better Visualization: static (matplotlib) and interactive (Plotly) forecast plots
  • Parallel Processing: speed up multi-series forecasting with joblib
  • Progress Tracking: rich progress bars for long-running operations

Installation

Recommended: Install Everything

pip install "autotsforecast[all]"

This installs all 10 models plus visualization, interpretability, and all optional extras.

Basic Install (Core Models Only)

pip install autotsforecast

This gives you 6 models out of the box:

| Model | Description |
|---|---|
| ARIMAForecaster | Classical ARIMA |
| ETSForecaster | Exponential smoothing |
| LinearForecaster | Linear regression (requires covariates X) |
| MovingAverageForecaster | Simple baseline |
| RandomForestForecaster | ML model with covariate support |
| VARForecaster | Vector autoregression (requires at least 2 series) |

Install Specific Optional Models

Some models require additional dependencies:

# Add XGBoost (gradient boosting with covariates)
pip install "autotsforecast[ml]"

# Add Prophet (Facebook's forecasting library)
pip install "autotsforecast[prophet]"

# Add LSTM (deep learning)
pip install "autotsforecast[neural]"

# Add Chronos-2 (foundation model - state-of-the-art zero-shot forecasting)
pip install "autotsforecast[chronos]"

# Add SHAP (interpretability)
pip install "autotsforecast[interpret]"

# Add visualization tools (Plotly, progress bars)
pip install "autotsforecast[viz]"

Model Availability Summary

| Model | Basic Install | Extra Required |
|---|---|---|
| ARIMA, ETS, Linear*, MovingAverage, RandomForest, VAR | Yes | (none) |
| XGBoostForecaster | No | pip install "autotsforecast[ml]" |
| ProphetForecaster | No | pip install "autotsforecast[prophet]" |
| LSTMForecaster | No | pip install "autotsforecast[neural]" |
| Chronos2Forecaster | No | pip install "autotsforecast[chronos]" |
| SHAP Analysis | No | pip install "autotsforecast[interpret]" |
| Interactive Plots | No | pip install "autotsforecast[viz]" |

* LinearForecaster requires covariates X to be passed (it is not included in get_default_candidate_models()).

Quick Start

1. AutoForecaster โ€” Let the Algorithm Choose

from autotsforecast import AutoForecaster
from autotsforecast.models.base import MovingAverageForecaster
from autotsforecast.models.external import ARIMAForecaster, ProphetForecaster, RandomForestForecaster, Chronos2Forecaster

# Your time series data (pandas DataFrame)
# y = pd.DataFrame({'series_a': [...], 'series_b': [...]})

# Define candidate models (including Chronos-2 foundation model)
candidates = [
    ARIMAForecaster(horizon=14),
    ProphetForecaster(horizon=14),
    RandomForestForecaster(horizon=14, n_lags=7),
    MovingAverageForecaster(horizon=14, window=7),
    Chronos2Forecaster(horizon=14, model_name='autogluon/chronos-2-small'),  # Zero-shot foundation model
]

# AutoForecaster picks the best model across all series (default)
auto = AutoForecaster(candidate_models=candidates, metric='rmse')
auto.fit(y_train)
forecasts = auto.forecast()

# See which model was selected
print(auto.best_model_name_)  # e.g., 'Chronos2Forecaster'

# OR: Pick the best model for EACH series separately
auto = AutoForecaster(candidate_models=candidates, metric='rmse', per_series_models=True)
auto.fit(y_train)
forecasts = auto.forecast()

# See which models were selected per series
print(auto.best_model_names_)  # e.g., {'series_a': 'Chronos2Forecaster', 'series_b': 'ARIMAForecaster'}
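The snippets above assume a y_train DataFrame is already in scope. A minimal way to build one for experimentation (plain pandas and numpy; the column names and patterns here are made up for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
dates = pd.date_range("2024-01-01", periods=120, freq="D")

# Two series with deliberately different structure:
# series_a has weekly seasonality, series_b has a linear trend
y_train = pd.DataFrame({
    "series_a": 100 + 10 * np.sin(2 * np.pi * np.arange(120) / 7) + rng.normal(0, 2, 120),
    "series_b": 50 + 0.5 * np.arange(120) + rng.normal(0, 5, 120),
}, index=dates)
```

Giving the candidate models series with different patterns is exactly the situation per-series selection is designed for.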

2. Using Covariates (External Features)

from autotsforecast.models.external import XGBoostForecaster

# X contains external features (temperature, promotions, etc.)
model = XGBoostForecaster(horizon=14, n_lags=7)
model.fit(y_train, X=X_train)
forecasts = model.predict(X=X_test)

Models supporting covariates: Prophet, XGBoost, RandomForest, Linear

2.1 Calendar Features

Automatic time-based feature extraction:

from autotsforecast.features.calendar import CalendarFeatures

# Auto-detect features with cyclical encoding
cal = CalendarFeatures(cyclical_encoding=True)
features = cal.fit_transform(y_train)

# Generate future features for forecasting
future_features = cal.transform_future(horizon=30)
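Cyclical encoding maps a periodic field such as day-of-week onto a sine/cosine pair so the week wraps smoothly (Sunday sits next to Monday in feature space). A hand-rolled sketch of the idea in plain pandas, not the library's implementation:

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2024-01-01", periods=14, freq="D")
dow = idx.dayofweek  # 0 = Monday ... 6 = Sunday

# Encode day-of-week as a point on the unit circle
features = pd.DataFrame({
    "dow_sin": np.sin(2 * np.pi * dow / 7),
    "dow_cos": np.cos(2 * np.pi * dow / 7),
}, index=idx)
```

Unlike a raw 0-6 integer, this representation has no artificial jump between the end of one week and the start of the next.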

2.2 Per-Series Covariates โ€” Different Features for Each Series

Use Case: When different time series are driven by different external factors.

from autotsforecast import AutoForecaster
from autotsforecast.models.base import MovingAverageForecaster
from autotsforecast.models.external import RandomForestForecaster, XGBoostForecaster

# Example: Forecasting sales for different products
# Product A: Summer product (driven by weather and advertising)
X_product_a = pd.DataFrame({
    'temperature': [...],      # Weather matters for Product A
    'advertising_spend': [...] # Marketing campaigns
}, index=dates)

# Product B: Everyday product (driven by pricing and promotions)
X_product_b = pd.DataFrame({
    'competitor_price': [...],  # Price competition matters for Product B
    'promotion_active': [...]   # Promotional events
}, index=dates)

# Create dictionary mapping each series to its covariates
X_train_dict = {
    'product_a_sales': X_product_a_train,
    'product_b_sales': X_product_b_train
}

X_test_dict = {
    'product_a_sales': X_product_a_test,
    'product_b_sales': X_product_b_test
}

# Define candidate models (all support covariates X)
candidates = [
    RandomForestForecaster(horizon=14, n_lags=7),
    XGBoostForecaster(horizon=14, n_lags=7),
    MovingAverageForecaster(horizon=14, window=7),  # covariate-free baseline
]

# AutoForecaster with per-series model selection
auto = AutoForecaster(
    candidate_models=candidates,
    per_series_models=True,  # Select best model for each series
    metric='rmse'
)

# Fit: Each series uses its own covariates
auto.fit(y_train, X=X_train_dict)

# Forecast: Provide future covariates for each series
forecasts = auto.forecast(X=X_test_dict)

# See which model was selected for each series
print(auto.best_model_names_)
# Output: {'product_a_sales': 'RandomForestForecaster', 
#          'product_b_sales': 'XGBoostForecaster'}

Key Benefits:

  • Each series uses only relevant features (reduces noise)
  • Better accuracy through targeted feature engineering
  • Handles heterogeneous products with different drivers
  • Scales to large portfolios with diverse characteristics
  • Backward compatible: a single DataFrame shared by all series still works

3. Hierarchical Reconciliation

Ensure forecasts add up correctly (e.g., total = region_a + region_b):

from autotsforecast.hierarchical.reconciliation import HierarchicalReconciler

hierarchy = {'total': ['region_a', 'region_b']}
reconciler = HierarchicalReconciler(forecasts=base_forecasts, hierarchy=hierarchy)
reconciler.reconcile(method='ols')
coherent_forecasts = reconciler.reconciled_forecasts
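For intuition, OLS reconciliation projects the stacked base forecasts onto the subspace of coherent forecasts using the summing matrix S, i.e. ỹ = S(SᵀS)⁻¹Sᵀŷ. A self-contained numpy sketch for the two-region hierarchy above (an illustration of the math, not the library's internals):

```python
import numpy as np

# Summing matrix for hierarchy {'total': ['region_a', 'region_b']}
# Rows: total, region_a, region_b; columns: bottom-level series
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Incoherent base forecasts: total (105) != region_a + region_b (110)
y_hat = np.array([105.0, 60.0, 50.0])

# OLS reconciliation: orthogonal projection onto the coherent subspace
P = S @ np.linalg.inv(S.T @ S) @ S.T
y_tilde = P @ y_hat

# Reconciled total now equals the sum of its parts
assert np.isclose(y_tilde[0], y_tilde[1] + y_tilde[2])
```

The projection spreads the incoherence across all levels rather than forcing one level to absorb it.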

4. Backtesting (Cross-Validation)

from autotsforecast.backtesting.validator import BacktestValidator

validator = BacktestValidator(model=my_model, n_splits=5, test_size=14)
validator.run(y_train, X=X_train)

# Get results
results = validator.get_fold_results()  # RMSE per fold
print(f"Average RMSE: {results['rmse'].mean():.2f}")
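Conceptually, the validator evaluates on rolling-origin splits: the training window grows, and each fold tests the next test_size steps. A generic sketch of such splits in pure numpy (BacktestValidator's exact split logic may differ):

```python
import numpy as np

def expanding_window_splits(n_obs, n_splits, test_size):
    """Yield (train_idx, test_idx) pairs with an expanding training
    window, mirroring rolling-origin evaluation."""
    for k in range(n_splits):
        test_end = n_obs - (n_splits - 1 - k) * test_size
        test_start = test_end - test_size
        yield np.arange(test_start), np.arange(test_start, test_end)

# 100 observations, 5 folds of 14-step test windows
splits = list(expanding_window_splits(100, 5, 14))
# Last fold trains on the first 86 points and tests on the final 14
```

Expanding windows never leak future observations into training, which is why ordinary k-fold cross-validation is not used for time series.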

5. Interpretability (Feature Importance)

from autotsforecast.interpretability.drivers import DriverAnalyzer

analyzer = DriverAnalyzer(model=fitted_model, feature_names=['temperature', 'promotion'])
importance = analyzer.calculate_feature_importance(X_test, y_test, method='sensitivity')
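The sensitivity idea: perturb one feature at a time and measure how much the predictions move. A self-contained sketch of that idea with a plain function standing in for a fitted model (not DriverAnalyzer's actual code):

```python
import numpy as np

def sensitivity_importance(predict, X, eps=0.1):
    """Perturb each column by eps * its std and record the mean
    absolute change in predictions."""
    base = predict(X)
    scores = {}
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] += eps * X[:, j].std()
        scores[j] = np.abs(predict(X_pert) - base).mean()
    return scores

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
predict = lambda X: 3.0 * X[:, 0] + 0.1 * X[:, 1]  # stand-in model

scores = sensitivity_importance(predict, X)
# Feature 0 dominates, as its coefficient is 30x larger
assert scores[0] > scores[1]
```

Unlike SHAP, sensitivity analysis needs nothing but the ability to call the model, at the cost of ignoring feature interactions.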

6. Prediction Intervals

Generate prediction intervals with conformal prediction:

from autotsforecast.uncertainty.intervals import PredictionIntervals

# After fitting a model
pi = PredictionIntervals(method='conformal', coverage=[0.80, 0.95])
pi.fit(model, y_train)
intervals = pi.predict(forecasts)

# Access intervals
print(intervals['lower_95'], intervals['upper_95'])
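The idea behind split conformal prediction: collect absolute residuals on held-out calibration data, take a finite-sample-corrected quantile, and widen every point forecast symmetrically by that amount. A minimal numpy sketch of the method (PredictionIntervals' internals and API may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration residuals: |actual - forecast| on held-out data
residuals = np.abs(rng.normal(0, 2.0, size=500))

# Conformal quantile for 95% coverage, with the standard
# finite-sample correction ceil(alpha * (n + 1)) / n
n = len(residuals)
q = np.quantile(residuals, np.ceil(0.95 * (n + 1)) / n)

point_forecast = np.array([100.0, 102.0, 98.0])
lower_95 = point_forecast - q
upper_95 = point_forecast + q
```

The coverage guarantee only needs the calibration residuals to be exchangeable with future ones; no distributional assumption is made.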

7. Chronos-2 Foundation Model (Zero-Shot Forecasting)

A state-of-the-art pre-trained model: no training needed.

from autotsforecast.models.external import Chronos2Forecaster

# Initialize with default model (120M params, best accuracy)
model = Chronos2Forecaster(
    horizon=30,
    model_name="amazon/chronos-2"  # or "autogluon/chronos-2-small" for faster inference
)

# Fit (just stores context, no training!)
model.fit(y_train)

# Generate point forecasts (median)
forecasts = model.predict()

# Generate probabilistic forecasts with uncertainty quantification
quantile_forecasts = model.predict_quantiles(quantile_levels=[0.1, 0.5, 0.9])
# Returns: value_q10, value_q50, value_q90 columns

Available Model Sizes:

  • amazon/chronos-2 - 120M params (best accuracy)
  • autogluon/chronos-2-small - 28M params (balanced, tested: 0.63% MAPE)
  • amazon/chronos-bolt-tiny - 9M params (ultra fast)
  • amazon/chronos-bolt-small - 48M params (balanced speed/accuracy)
  • amazon/chronos-bolt-base - 205M params (high accuracy + fast)

Why Chronos-2?

  • Zero-shot: no training required
  • State-of-the-art accuracy on multiple benchmarks
  • Built-in uncertainty quantification
  • Multiple model sizes for different use cases

8. Visualization

Create publication-ready plots:

from autotsforecast.visualization.plots import plot_forecast, plot_forecast_interactive

# Static matplotlib plot
fig = plot_forecast(y_train, y_test, forecast, lower=lower_95, upper=upper_95)

# Interactive Plotly plot
fig = plot_forecast_interactive(y_train, y_test, forecast)
fig.show()

9. Parallel Processing

Speed up multi-series forecasting:

from autotsforecast.utils.parallel import ParallelForecaster, parallel_map

# Create parallel forecaster
pf = ParallelForecaster(n_jobs=4)

# Fit each series in parallel
fitted_models = pf.parallel_series_fit(
    model_factory=lambda: RandomForestForecaster(horizon=14),
    y=y_train,
    X=X_train
)
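ParallelForecaster delegates the fan-out to joblib; the pattern itself is simple. A stand-in sketch using the standard library's thread pool, with a trailing-mean placeholder where the real model fit would go (illustrative only; the package uses joblib):

```python
import numpy as np
import pandas as pd
from concurrent.futures import ThreadPoolExecutor

dates = pd.date_range("2024-01-01", periods=60, freq="D")
y = pd.DataFrame({f"s{i}": np.arange(60.0) + i for i in range(4)}, index=dates)

def fit_one(name):
    # Stand-in for model_factory().fit(y[name]): a trailing-mean "model"
    return name, y[name].tail(7).mean()

# Fit every series concurrently, one task per column
with ThreadPoolExecutor(max_workers=4) as pool:
    fitted = dict(pool.map(fit_one, y.columns))
```

Because each series is fitted independently, the work is embarrassingly parallel and scales with the number of workers.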

Requirements

  • Python โ‰ฅ 3.8
  • Core: numpy, pandas, scikit-learn, statsmodels, scipy, joblib

License

MIT License

Contributing

Contributions welcome! Visit the GitHub repository to get started.

Citation

@software{autotsforecast2026,
  title={AutoTSForecast: Automated Time Series Forecasting},
  author={Weibin Xu},
  year={2026},
  url={https://github.com/weibinxu86/autotsforecast}
}



Download files

Download the file for your platform.

Source Distribution

autotsforecast-0.4.0.tar.gz (89.8 kB)

Uploaded Source

Built Distribution


autotsforecast-0.4.0-py3-none-any.whl (77.6 kB)

Uploaded Python 3

File details

Details for the file autotsforecast-0.4.0.tar.gz.

File metadata

  • Download URL: autotsforecast-0.4.0.tar.gz
  • Upload date:
  • Size: 89.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for autotsforecast-0.4.0.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | 4c41310bbb8e179948ed0ba25052ac92167bc09193569fc8450168da17180cd1 |
| MD5 | bba418877e5ebce6feab283df9654d51 |
| BLAKE2b-256 | ab992d44e44720eaede999705643ba756604fecdb47bdfe4d62ba279f990077c |


File details

Details for the file autotsforecast-0.4.0-py3-none-any.whl.

File metadata

  • Download URL: autotsforecast-0.4.0-py3-none-any.whl
  • Upload date:
  • Size: 77.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for autotsforecast-0.4.0-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | d96229b51274e9b8b9c591ec42da67c1a89c379866df1b19fc5b499df4750209 |
| MD5 | 7a4a82f946a1cff99267e098207db158 |
| BLAKE2b-256 | 7719394198c052f8487375039497c0c8b0ab2ff9820741eec4e333ef71bed678 |

