
Electric Barometer · Evaluation (eb-evaluation)


Evaluation and model selection utilities for applying Electric Barometer metrics across entities, groups, and operational contexts.


Overview

eb-evaluation provides the evaluation and model selection layer of the Electric Barometer ecosystem. It applies metric primitives to forecasts and observations across entities, groups, and hierarchical structures, enabling consistent assessment of forecasting performance in operational settings.

The package focuses on DataFrame-first evaluation workflows, including cost-sensitive comparison, tolerance-aware scoring given explicit thresholds, and readiness-oriented adjustment logic. It does not define feature construction or model interfaces; instead, it consumes standardized inputs from upstream layers and produces evaluation outputs that can be used for model selection, reporting, and decision support.
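
As a rough illustration of what cost- and tolerance-aware scoring means in practice, the sketch below implements a simplified loss in plain NumPy. The tolerance tau and the asymmetric cost weights c_under and c_over are hypothetical parameters chosen for exposition; this is not the package's actual CWSL definition.

import numpy as np

# Illustrative only: a simplified cost-weighted, tolerance-aware loss.
# `tau`, `c_under`, and `c_over` are hypothetical parameters, not the
# package's actual CWSL definition.
def toy_cost_weighted_loss(actual, prediction, tau=0.5, c_under=2.0, c_over=1.0):
    error = np.asarray(actual) - np.asarray(prediction)  # positive => under-forecast
    excess = np.maximum(np.abs(error) - tau, 0.0)        # deviation beyond tolerance
    weight = np.where(error > 0, c_under, c_over)        # asymmetric cost weights
    return float(np.mean(weight * excess))

Deviations within the tolerance incur no cost, and under-forecasts are penalized more heavily than over-forecasts, which captures the general idea of decision-aligned scoring described above.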


Role in the Electric Barometer Ecosystem

Within the broader ecosystem, eb-evaluation is the layer responsible for evaluation and model selection, ensuring that forecasting performance is compared consistently across entities, groups, and hierarchies in operational contexts.

This package focuses exclusively on evaluation logic, aggregation semantics, and selection workflows. It does not perform feature construction, model training, or metric definition. Those responsibilities are handled by adjacent layers that generate inputs, adapt model interfaces, or define metric behavior.

By separating evaluation orchestration from metric semantics and model implementation details, eb-evaluation provides a stable, DataFrame-first foundation for decision-aligned model comparison and readiness assessment across heterogeneous forecasting pipelines.


Installation

eb-evaluation is distributed as a standard Python package.

pip install eb-evaluation

The package supports Python 3.10 and later.


Core Concepts

  • DataFrame-first evaluation — Evaluation logic operates directly on tabular forecast and observation data, enabling transparent aggregation, grouping, and comparison across entities and hierarchies.
  • Cost- and tolerance-aware scoring — Forecast performance is assessed using metrics that reflect asymmetric cost and explicitly supplied deviation thresholds, rather than purely symmetric statistical error.
  • Hierarchical and panel semantics — Evaluation respects entity boundaries, grouping structure, and temporal alignment, ensuring correctness in multi-level forecasting environments (see the sketch after this list).
  • Model comparability — Forecasts produced by heterogeneous models can be evaluated and compared using a consistent set of metrics and aggregation rules.
  • Readiness-oriented selection — Model selection emphasizes execution feasibility and operational adequacy as reflected in evaluation metrics, not just aggregate accuracy, supporting decision-aligned forecasting workflows.
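
To make the hierarchical and panel semantics concrete, the sketch below shows one common aggregation pattern in plain pandas: scores are averaged within each entity first, then across entities, so a single large entity cannot dominate the panel-level summary. The column names and values are hypothetical, and the package's own aggregation rules may differ.

import pandas as pd

# Hypothetical per-row scores; `score` stands in for any per-row loss value.
df = pd.DataFrame({
    "entity_id": ["A", "A", "B", "B"],
    "score": [0.4, 0.6, 1.2, 0.8],
})

# Per-entity means first, then an unweighted mean across entities.
per_entity = df.groupby("entity_id")["score"].mean()
panel_level = per_entity.mean()

print(per_entity.to_dict())  # {'A': 0.5, 'B': 1.0}
print(panel_level)           # 0.75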

Minimal Example

The example below shows how forecast accuracy can be evaluated across entities using Electric Barometer metrics in a DataFrame-first workflow.

import pandas as pd
from eb_evaluation.dataframe import compute_cwsl_df

# Example evaluation data
df = pd.DataFrame({
    "entity_id": ["A", "A", "B", "B"],
    "date": pd.to_datetime(["2024-01-01", "2024-01-02"] * 2),
    "actual": [10, 12, 7, 9],
    "prediction": [9, 11, 8, 10],
})

# Compute Cost-Weighted Service Loss (CWSL)
results = compute_cwsl_df(
    df,
    actual_col="actual",
    prediction_col="prediction",
    entity_col="entity_id",
    time_col="date",
)

print(results)
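
Model comparability follows the same pattern: score a second forecast column with identical settings and compare the summaries. The prediction_b column and the values below are hypothetical additions for illustration, not part of eb-evaluation's documented API.

# Illustrative comparison of a second model's forecasts using the same
# evaluation settings. `prediction_b` is a hypothetical column.
df["prediction_b"] = [11, 12, 6, 9]

results_b = compute_cwsl_df(
    df,
    actual_col="actual",
    prediction_col="prediction_b",
    entity_col="entity_id",
    time_col="date",
)

# Assuming the outputs expose a comparable aggregate loss, the model with
# the lower score would be preferred for selection.
print(results_b)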

License

BSD 3-Clause License. © 2025 Kyle Corrie.
