Electric Barometer: DataFrame-based evaluation utilities for CWSL and related metrics.
Electric Barometer · Evaluation (eb-evaluation)
Evaluation and model selection utilities for applying Electric Barometer metrics across entities, groups, and operational contexts.
Overview
eb-evaluation provides the evaluation and model selection layer of the Electric Barometer ecosystem. It applies metric primitives to forecasts and observations across entities, groups, and hierarchical structures, enabling consistent assessment of forecasting performance in operational settings.
The package focuses on DataFrame-first evaluation workflows, including tolerance-based scoring, cost-sensitive comparison, and readiness-oriented adjustment logic. It does not define feature construction or model interfaces; instead, it consumes standardized inputs from upstream layers and produces evaluation outputs that can be used for model selection, reporting, and decision support.
Role in the Electric Barometer Ecosystem
Within the ecosystem, this package focuses exclusively on evaluation logic, aggregation semantics, and selection workflows. It does not perform feature construction, model training, or metric definition; those responsibilities are handled by adjacent layers that generate inputs, adapt model interfaces, or define metric behavior.
By separating evaluation orchestration from metric semantics and model implementation details, eb-evaluation provides a stable, DataFrame-first foundation for decision-aligned model comparison and readiness assessment across heterogeneous forecasting pipelines.
Installation
eb-evaluation is distributed as a standard Python package.
pip install eb-evaluation
The package supports Python 3.10 and later.
Core Concepts
- DataFrame-first evaluation — Evaluation logic operates directly on tabular forecast and observation data, enabling transparent aggregation, grouping, and comparison across entities and hierarchies.
- Cost- and tolerance-aware scoring — Forecast performance is assessed using metrics that reflect asymmetric cost, acceptable deviation thresholds, and operational risk rather than purely symmetric statistical error; a minimal sketch follows this list.
- Hierarchical and panel semantics — Evaluation respects entity boundaries, grouping structure, and temporal alignment, ensuring correctness in multi-level forecasting environments.
- Model comparability — Forecasts produced by heterogeneous models can be evaluated and compared using a consistent set of metrics and aggregation rules.
- Readiness-oriented selection — Model selection emphasizes execution feasibility and operational adequacy, not just aggregate accuracy, supporting decision-aligned forecasting workflows.
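To make the cost- and tolerance-aware bullet concrete, here is a minimal sketch of this style of scoring in plain NumPy. The function name, tolerance band, and cost weights are illustrative assumptions for this README, not the package's actual CWSL implementation.

import numpy as np

def tolerance_aware_cost(actual, predicted, tol=1.0, under_cost=2.0, over_cost=1.0):
    """Illustrative asymmetric, tolerance-banded loss (not the CWSL definition).

    Deviations within +/- tol are treated as operationally acceptable and cost
    nothing; under-forecasts beyond the band are penalized more heavily than
    over-forecasts (under_cost > over_cost).
    """
    error = np.asarray(predicted, dtype=float) - np.asarray(actual, dtype=float)
    excess = np.maximum(np.abs(error) - tol, 0.0)           # deviation beyond the tolerance band
    weights = np.where(error < 0.0, under_cost, over_cost)  # asymmetric penalty by error sign
    return float(np.mean(weights * excess))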
Minimal Example
The example below shows how forecasts and observations can be evaluated and compared across entities using Electric Barometer metrics in a DataFrame-first workflow.
import pandas as pd
from eb_evaluation.dataframe.compare import compare_models

# Example evaluation data
df = pd.DataFrame({
    "entity_id": ["A", "A", "B", "B"],
    "date": pd.to_datetime(["2024-01-01", "2024-01-02"] * 2),
    "actual": [10, 12, 7, 9],
    "model_a": [9, 11, 8, 10],
    "model_b": [11, 13, 6, 8],
})

# Compare models using a common evaluation contract
results = compare_models(
    df,
    actual_col="actual",
    prediction_cols=["model_a", "model_b"],
    entity_col="entity_id",
    time_col="date",
)
print(results)
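Assuming compare_models follows the DataFrame-first contract described above, results should be a tidy table of per-model (and, depending on options, per-entity) scores; the exact columns depend on the installed version. For transparency, the same comparison can be hand-rolled in pandas using the illustrative tolerance_aware_cost loss sketched under Core Concepts:

# Hand-rolled equivalent: score each model per entity, then aggregate.
cols = ["actual", "model_a", "model_b"]
scores = df.groupby("entity_id")[cols].apply(
    lambda g: pd.Series({
        m: tolerance_aware_cost(g["actual"], g[m])
        for m in ["model_a", "model_b"]
    })
)
print(scores)         # one row per entity, one column per model
print(scores.mean())  # aggregate score per model across entities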
License
BSD 3-Clause License.
© 2025 Kyle Corrie.