Conditional Feature Importance for Multi-Time Series Forecasting

xeries

Time Series eXplainability (XAI) for Forecasting

A comprehensive Python library for explainability and interpretability in multi-time series forecasting. xeries provides multiple explanation methods—including conditional permutation importance, SHAP, feature dropping, and causal analysis—with a unified API and interactive dashboard for understanding forecast decisions.

Why xeries?

Explaining and interpreting multi-time series forecasts is challenging:

  1. Standard methods fail on series-dependent features: Permutation importance and SHAP can create invalid data by pairing a lag from one series with a target from another.

  2. No unified explainability framework: Most libraries offer a single method (e.g., SHAP only) with limited support for time-series-specific analysis.

  3. Lack of temporal and comparative insights: Understanding performance across time windows, detecting significance, and comparing methods are difficult without custom tools.

xeries addresses these gaps with:

  • Conditional explanations that respect data structure
  • Multiple methods so you can triangulate insights
  • Time-aware analytics for temporal patterns
  • Unified API for seamless method switching
  • Interactive dashboard for exploratory analysis
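The first pitfall above can be seen with plain pandas: naive permutation shuffles a lag column across the whole dataset, pairing one series' rows with another series' values, while conditional permutation shuffles only within each series. A toy sketch of the idea (synthetic data, no xeries API involved):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Toy panel: two series whose lag values live on very different scales.
df = pd.DataFrame({
    "series": ["a"] * 5 + ["b"] * 5,
    "lag_1": np.concatenate([rng.normal(10, 1, 5), rng.normal(1000, 10, 5)]),
})

# Naive permutation shuffles across the whole column, so a series-"a" row
# can receive a lag value near 1000 -- a combination absent from real data.
naive = df["lag_1"].sample(frac=1.0, random_state=0).to_numpy()

# Conditional permutation shuffles lag_1 only within each series, so every
# permuted row keeps a value drawn from its own series' distribution.
conditional = df.groupby("series")["lag_1"].transform(
    lambda s: s.sample(frac=1.0, random_state=0).to_numpy()
)
```

Under the conditional scheme, each series retains exactly its own set of lag values; only their order changes.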

Features

📊 Explainability Methods

  • Conditional Permutation Importance (cs-PFI): Permute features only within meaningful subgroups
    • Auto-discover groups using decision trees
    • Define custom groups based on domain knowledge
  • Conditional SHAP: Compute SHAP values with series-specific background data
  • SHAP-IQ: Analyze feature interactions and higher-order effects
  • Feature Dropping: Measure importance by removing features
  • Causal Feature Importance: Causal inference for treatment effects (DoWhy + EconML integration)

🛠️ Advanced Analytics

  • Temporal Windowed Analysis: Importance decomposed across time windows
  • Statistical Significance Testing: Bootstrap confidence intervals and hypothesis tests
  • Method Comparison: Side-by-side results from multiple explanation methods
  • Error Analysis: Per-series and per-window error metrics and attribution
  • Feature Interaction Analysis: Understand how features work together
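To give a sense of what bootstrap significance testing does, here is a generic percentile-bootstrap sketch over hypothetical per-repeat importance scores (plain NumPy; this is an illustration of the statistic, not xeries' actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical importance scores for one feature across n_repeats permutations.
scores = np.array([0.42, 0.38, 0.45, 0.40, 0.44])

# Percentile bootstrap: resample the repeats with replacement and record
# the mean importance of each resample.
boot_means = np.array([
    rng.choice(scores, size=scores.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])

# If the 95% interval excludes zero, the importance is unlikely to be noise.
print(f"mean={scores.mean():.3f}, 95% CI=[{lo:.3f}, {hi:.3f}]")
```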

📡 Framework Adapters

  • scikit-learn: Direct support for sklearn estimators
  • skforecast: Seamless integration with multi-series forecasters (0.21+)
  • Darts (PyTorch): Support for Darts neural network forecasters
  • Custom Models: Wrap any forecaster with the BaseAdapter
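The general shape of a custom adapter is a thin wrapper that exposes predictions and training matrices to the explainers. The class below is only an illustration of that pattern under assumed names; the real `BaseAdapter` contract is defined in the xeries documentation:

```python
import numpy as np

class CustomForecasterAdapter:
    """Illustrative wrapper only -- not xeries' actual BaseAdapter contract.

    It mirrors the two capabilities the Quick Start examples rely on:
    delegating predictions to the wrapped model and handing explainers
    the training data.
    """

    def __init__(self, model, X_train, y_train):
        self.model = model
        self._X_train = X_train
        self._y_train = y_train

    def predict(self, X):
        # Delegate to whatever forecaster is wrapped.
        return np.asarray(self.model.predict(X))

    def get_training_data(self):
        # Same output shape as the skforecast adapter in Quick Start.
        return self._X_train, self._y_train
```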

📈 Visualization & Reporting

  • Interactive Dashboard: Unified interface for all explainability components
  • Publication-Ready Plots:
    • Feature importance bar charts
    • Temporal heatmaps
    • Method comparison visualizations
    • Interaction plots
  • HTML Report Generation: Auto-generate dashboards with Jinja2 templates
  • Jupyter Integration: Works seamlessly in notebooks

Installation

pip install xeries

With uv:

uv add xeries

For skforecast integration:

pip install xeries[skforecast]

Supported Explanation Methods

| Method | Type | Use Case | Features |
| --- | --- | --- | --- |
| Conditional Permutation Importance | Model-agnostic | Default choice; fast & interpretable | Auto/manual grouping, windowed analysis |
| Conditional SHAP | Additive | Local & global explanations | Series-aware backgrounds, force plots |
| SHAP-IQ | Interaction | Feature interactions | Shapley interaction values, comparative |
| Feature Dropping | Model-agnostic | Complementary to importance | Dependency analysis, isolation effects |
| Causal Feature Importance | Causal | Treatment effects | DoWhy pipelines, EconML estimators |
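The "Feature Dropping" row can be made concrete with a generic drop-column sketch (scikit-learn on synthetic data): retrain without a feature and measure how much held-out error increases. xeries' `ConditionalDropImportance` presumably layers series-aware handling on top of this core idea:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def drop_column_importance(col: int) -> float:
    """Importance = test-MSE increase after retraining without `col`."""
    full = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, y_tr)
    base = mean_squared_error(y_te, full.predict(X_te))
    reduced = RandomForestRegressor(n_estimators=50, random_state=0).fit(
        np.delete(X_tr, col, axis=1), y_tr
    )
    dropped = mean_squared_error(
        y_te, reduced.predict(np.delete(X_te, col, axis=1))
    )
    return dropped - base

# Dropping the dominant driver (column 0) hurts far more than a noise column.
print(drop_column_importance(0), drop_column_importance(2))
```

Note that drop-column importance requires a full retrain per feature, which is why permutation-based methods are usually the faster default.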

Quick Start

Conditional Permutation Importance (Default)

from sklearn.ensemble import RandomForestRegressor
from skforecast.recursive import ForecasterRecursiveMultiSeries

from xeries import ConditionalPermutationImportance
from xeries.adapters.skforecast import from_skforecast

# Train your multi-series forecaster (skforecast 0.21+)
forecaster = ForecasterRecursiveMultiSeries(
    estimator=RandomForestRegressor(n_estimators=100, random_state=42),
    lags=24,
)
forecaster.fit(series=your_data)

# The adapter needs the same `series` passed to fit() in order to rebuild
# the training matrices (pass it here or later to get_training_data)
adapter = from_skforecast(forecaster, series=your_data)
X, y = adapter.get_training_data()

# Compute conditional importance (automatic tree-based cs-PFI)
explainer = ConditionalPermutationImportance(
    model=adapter,
    metric='mse',
    strategy='auto',
    n_repeats=5,
    random_state=42,
)

result = explainer.explain(X, y, features=['lag_1', 'lag_2', 'lag_3'])
print(result.to_dataframe())

Using Manual Groups

from xeries import ManualPartitioner, ConditionalPermutationImportance

# Domain groups: with skforecast 0.21+ wide data, series are ordinal-encoded in X.
# Map integers 0,1,... in the same order as forecaster.series_names_in_ (see adapter.forecaster).
mapping = {
    0: 'urban',
    1: 'suburban',
    2: 'urban',
}

partitioner = ManualPartitioner(mapping, series_col=adapter.get_series_column())

explainer = ConditionalPermutationImportance(
    model=adapter,
    metric='mse',
    strategy='manual',
    partitioner=partitioner,
)

result = explainer.explain(X, y)

Conditional SHAP

from xeries import ConditionalSHAP

explainer = ConditionalSHAP(
    model=adapter,
    background_data=X,
    # skforecast 0.21+: use adapter.get_series_column() (often "_level_skforecast")
    series_col=adapter.get_series_column(),
    n_background_samples=100,
)

# Explain instances with series-specific backgrounds
shap_result = explainer.explain(X.iloc[:10])

# Global importance
global_importance = explainer.global_importance(X, n_samples=100)

Visualization

from xeries.visualization import plot_importance_bar, plot_importance_heatmap

# Bar chart
fig, ax = plot_importance_bar(result, max_features=10)

# Heatmap comparing multiple results
results = {'Auto': result_auto, 'Manual': result_manual}
fig, ax = plot_importance_heatmap(results)

Causal Feature Importance

from xeries import CausalFeatureImportance

# Analyze causal treatment effects with DoWhy backend
explainer = CausalFeatureImportance(
    model=adapter,
    treatment_features=['lag_1', 'lag_24'],  # Which features to treat
)

result = explainer.explain(X, y)
print(result.estimates)  # Causal effect estimates
print(result.refutations)  # Robustness checks

Compare Multiple Methods

from xeries import (
    ConditionalPermutationImportance,
    ConditionalSHAP,
    ConditionalDropImportance,
)
from xeries.analysis import compare_rankings

# Compute explanations with different methods
pfi_result = ConditionalPermutationImportance(...).explain(X, y)
shap_result = ConditionalSHAP(...).explain(X)
drop_result = ConditionalDropImportance(...).explain(X, y)

# Compare results
comparison = compare_rankings(
    {'PFI': pfi_result, 'SHAP': shap_result, 'Dropping': drop_result}
)

Unified Dashboard

from xeries import Dashboard

# Combine all explainability results into one interactive dashboard
dashboard = Dashboard(forecaster=adapter)
dashboard.add_permutation_importance(pfi_result)
dashboard.add_causal_importance(causal_result)
dashboard.add_error_analysis(X, y, predictions)

# Generate HTML report
dashboard.generate_report('forecast_analysis.html')

# Or display in Jupyter
dashboard.show()

Documentation

Full documentation is available at https://thec0dewriter.github.io/xeries

Development

Clone the repository:

git clone https://github.com/thec0dewriter/xeries.git
cd xeries

Install with development dependencies:

uv sync --dev

Run tests:

uv run pytest

Run linting:

uv run ruff check src tests
uv run ruff format src tests

Type checking:

uv run ty check src

Build documentation:

uv sync --group docs
uv run mkdocs serve

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Citation

If you use this library in your research, please cite:

@software{xeries,
  title = {xeries: Time Series eXplainability for Forecasting},
  author = {thec0dewriter},
  year = {2026},
  url = {https://github.com/thec0dewriter/xeries},
}

