
Electric Barometer · Evaluation (eb-evaluation)


Evaluation and model selection utilities for applying Electric Barometer metrics (CWSL and related metrics) across entities, groups, and operational contexts.


Overview

eb-evaluation provides the evaluation and model selection layer of the Electric Barometer ecosystem. It applies metric primitives to forecasts and observations across entities, groups, and hierarchical structures, enabling consistent assessment of forecasting performance in operational settings.

The package focuses on DataFrame-first evaluation workflows, including tolerance-based scoring, cost-sensitive comparison, and readiness-oriented adjustment logic. It does not define feature construction or model interfaces; instead, it consumes standardized inputs from upstream layers and produces evaluation outputs that can be used for model selection, reporting, and decision support.
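
To make tolerance-based, cost-sensitive scoring concrete, the sketch below implements a generic asymmetric loss with a tolerance band in plain NumPy. It is a hypothetical stand-in for this family of metrics, not the package's CWSL implementation; the function name and cost parameters are illustrative assumptions.

import numpy as np

def tolerance_weighted_loss(actual, forecast, tol=1.0,
                            under_cost=2.0, over_cost=1.0):
    """Hypothetical cost- and tolerance-aware loss (not eb-evaluation's CWSL).

    Deviations within +/- tol are treated as operationally acceptable and
    contribute nothing; beyond the band, under-forecasts are penalized more
    heavily than over-forecasts via asymmetric unit costs.
    """
    error = np.asarray(forecast, dtype=float) - np.asarray(actual, dtype=float)
    excess = np.maximum(np.abs(error) - tol, 0.0)       # distance outside the band
    unit_cost = np.where(error < 0, under_cost, over_cost)
    return float(np.mean(unit_cost * excess))

# Under-forecasting by 2 costs twice as much as over-forecasting by 2:
print(tolerance_weighted_loss([10, 10], [12, 12]))  # 1.0
print(tolerance_weighted_loss([10, 10], [8, 8]))    # 2.0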


Role in the Electric Barometer Ecosystem

Within the ecosystem, eb-evaluation is responsible exclusively for evaluation logic, aggregation semantics, and selection workflows. It does not perform feature construction, model training, or metric definition; those responsibilities are handled by adjacent layers that generate inputs, adapt model interfaces, or define metric behavior.

By separating evaluation orchestration from metric semantics and model implementation details, eb-evaluation provides a stable, DataFrame-first foundation for decision-aligned model comparison and readiness assessment across heterogeneous forecasting pipelines.


Installation

eb-evaluation is distributed as a standard Python package.

pip install eb-evaluation

The package supports Python 3.10 and later.


Core Concepts

  • DataFrame-first evaluation — Evaluation logic operates directly on tabular forecast and observation data, enabling transparent aggregation, grouping, and comparison across entities and hierarchies.
  • Cost- and tolerance-aware scoring — Forecast performance is assessed using metrics that reflect asymmetric cost, acceptable deviation thresholds, and operational risk rather than purely symmetric statistical error.
  • Hierarchical and panel semantics — Evaluation respects entity boundaries, grouping structure, and temporal alignment, ensuring correctness in multi-level forecasting environments (a short sketch after this list makes this concrete).
  • Model comparability — Forecasts produced by heterogeneous models can be evaluated and compared using a consistent set of metrics and aggregation rules.
  • Readiness-oriented selection — Model selection emphasizes execution feasibility and operational adequacy, not just aggregate accuracy, supporting decision-aligned forecasting workflows.
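
The grouped-evaluation semantics can be sketched in plain pandas. The example below scores each model within each entity on a toy panel; MAE stands in for an Electric Barometer metric purely for illustration, and nothing here reflects eb-evaluation's internal API.

import pandas as pd

# Toy panel: two entities, two observations each, two candidate models.
df = pd.DataFrame({
    "entity_id": ["A", "A", "B", "B"],
    "actual":    [10, 12, 7, 9],
    "model_a":   [9, 11, 8, 10],
    "model_b":   [13, 13, 6, 8],
})

# Reshape to long form, then aggregate within entity boundaries so that
# no score ever mixes observations from different entities.
per_entity = (
    df.melt(id_vars=["entity_id", "actual"], var_name="model", value_name="forecast")
      .assign(abs_err=lambda d: (d["forecast"] - d["actual"]).abs())
      .groupby(["entity_id", "model"])["abs_err"].mean()
      .unstack("model")
)
print(per_entity)
# model      model_a  model_b
# entity_id
# A              1.0      2.0
# B              1.0      1.0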

Minimal Example

The example below shows how forecasts and observations can be evaluated and compared across entities using Electric Barometer metrics in a DataFrame-first workflow.

import pandas as pd
from eb_evaluation.dataframe.compare import compare_models

# Example evaluation data: two entities ("A", "B") observed on the same two days
df = pd.DataFrame({
    "entity_id": ["A", "A", "B", "B"],
    "date": pd.to_datetime(["2024-01-01", "2024-01-02"] * 2),
    "actual": [10, 12, 7, 9],
    "model_a": [9, 11, 8, 10],
    "model_b": [11, 13, 6, 8],
})

# Compare models using a common evaluation contract
results = compare_models(
    df,
    actual_col="actual",
    prediction_cols=["model_a", "model_b"],
    entity_col="entity_id",
    time_col="date",
)

print(results)
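
The exact shape of results depends on the installed version, so treat the following as an assumption rather than a documented contract: if results is (or can be coerced into) a tidy frame with one row per (entity, model) pair and a lower-is-better score column, per-entity selection stays in plain pandas.

# Hypothetical downstream step: pick the best model per entity.
# Assumes columns ["entity_id", "model", "score"] with lower scores better;
# the real schema of `results` may differ.
best = results.loc[results.groupby("entity_id")["score"].idxmin()]
print(best)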

License

BSD 3-Clause License.
© 2025 Kyle Corrie.
