Electric Barometer: DataFrame-based evaluation utilities for CWSL and related metrics.
Electric Barometer Evaluation (eb-evaluation)
This repository contains the evaluation and orchestration layer of the Electric Barometer ecosystem.
eb-evaluation sits above core metric implementations (eb-metrics) and
provides structured tools for applying Electric Barometer concepts to
real-world forecasting workflows, including readiness adjustment, model
comparison, sensitivity analysis, and DataFrame-based evaluation.
Conceptual definitions and theoretical framing for the evaluation logic are
maintained in the companion research repository:
eb-papers.
Naming convention
Electric Barometer packages follow standard Python packaging conventions:
- Distribution names (used with `pip install`) use hyphens, e.g. `pip install eb-evaluation`
- Python import paths use underscores, e.g. `import eb_evaluation`
This distinction is intentional and consistent across the Electric Barometer ecosystem.
Role Within Electric Barometer
Within the Electric Barometer ecosystem:
- eb-papers defines concepts, frameworks, and meaning
- eb-metrics implements individual metrics
- eb-evaluation orchestrates how metrics are applied, combined, and interpreted
This repository focuses on evaluation logic, not raw metric computation.
What This Library Provides
- Readiness adjustment logic for modifying evaluation outputs based on operational readiness signals
- Model selection and comparison utilities grounded in asymmetric loss and readiness-aware metrics
- Sensitivity and tolerance analysis for cost ratios and service thresholds
- DataFrame-oriented evaluation tools for entity-level and time-based analysis
- Feature engineering utilities to support evaluation pipelines
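As a minimal, hypothetical sketch of what entity-level, DataFrame-oriented evaluation under asymmetric loss involves (plain pandas only; the function and column names here are illustrative assumptions, not the eb-evaluation API):

```python
import pandas as pd

def asymmetric_loss(actual, forecast, under_cost=3.0, over_cost=1.0):
    """Penalize under-forecasting more heavily than over-forecasting.

    Each unit of shortfall costs `under_cost`; each unit of surplus
    costs `over_cost`. The 3:1 default ratio is purely illustrative.
    """
    error = actual - forecast
    under = error.clip(lower=0)      # units under-forecast (shortfall)
    over = (-error).clip(lower=0)    # units over-forecast (surplus)
    return under * under_cost + over * over_cost

# Toy forecast-vs-actual data for two entities.
df = pd.DataFrame({
    "entity": ["A", "A", "B", "B"],
    "actual": [10.0, 12.0, 5.0, 7.0],
    "forecast": [8.0, 13.0, 6.0, 6.0],
})

df["loss"] = asymmetric_loss(df["actual"], df["forecast"])
per_entity = df.groupby("entity")["loss"].mean()  # mean loss per entity
print(per_entity)  # A: 3.5, B: 2.0 under the 3:1 cost ratio
```

The asymmetry means entity A, which under-forecasts more, scores worse than entity B even though both miss by similar absolute amounts.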
Scope
This repository focuses on evaluation workflows and orchestration, not low-level metric definitions.
In scope:
- Applying EB metrics to datasets and model outputs
- Combining metrics into readiness-aware evaluation artifacts
- Model comparison and selection logic
- Sensitivity analysis and tolerance handling
Out of scope:
- Metric definitions and loss formulations (see eb-metrics)
- Conceptual frameworks and theory (see eb-papers)
- Model training or forecasting algorithms
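The in-scope sensitivity analysis can be illustrated with a hedged sketch (plain NumPy; names and the specific sweep are illustrative assumptions, not the eb-evaluation API): sweep the under-forecast cost ratio and see which of two candidate forecasts wins at each ratio.

```python
import numpy as np

actual = np.array([10.0, 12.0, 5.0, 7.0])
forecast_a = np.array([9.0, 11.0, 5.0, 6.0])   # tends to under-forecast
forecast_b = np.array([11.0, 13.0, 6.0, 8.0])  # tends to over-forecast

def total_loss(actual, forecast, ratio):
    """Total asymmetric loss: shortfall units cost `ratio`, surplus units cost 1."""
    error = actual - forecast
    under = np.clip(error, 0, None).sum()   # total shortfall units
    over = np.clip(-error, 0, None).sum()   # total surplus units
    return ratio * under + over

# Sweep the cost ratio: the preferred model can flip as
# under-forecasting becomes more expensive.
for ratio in [1.0, 2.0, 5.0]:
    loss_a = total_loss(actual, forecast_a, ratio)
    loss_b = total_loss(actual, forecast_b, ratio)
    winner = "A" if loss_a < loss_b else "B"
    print(f"ratio={ratio}: loss_A={loss_a}, loss_B={loss_b} -> prefer {winner}")
```

Here model A wins at a 1:1 ratio but loses once under-forecasting costs twice as much, which is exactly the kind of crossover a cost-ratio sensitivity analysis is meant to surface.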
Installation
Once published, the package will be installable via PyPI:
```
pip install eb-evaluation
```
For development or local use:
```
pip install -e .
```
Package Structure
The repository follows a modern Python package layout:
```
eb-evaluation/
├── src/eb_evaluation/
│   ├── adjustment/       # Readiness and evaluation adjustments
│   ├── dataframe/        # DataFrame-based evaluation utilities
│   ├── features/         # Feature engineering helpers
│   ├── model_selection/  # Model comparison and selection logic
│   └── utils/            # Shared validation and helpers
│
├── tests/                # Unit tests mirroring package structure
├── pyproject.toml        # Build and dependency configuration
├── README.md             # Project documentation
└── LICENSE               # BSD-3-Clause license
```
Relationship to Other EB Repositories
- eb-papers: source of truth for conceptual definitions and evaluation philosophy.
- eb-metrics: provides the metric implementations used during evaluation.
- eb-evaluation: orchestrates evaluation workflows using adapted models.
- eb-adapters: ensures heterogeneous models can be evaluated consistently.
When discrepancies arise, conceptual intent in eb-papers should be treated as authoritative.
Development and Testing
Tests are located under the tests/ directory and mirror the package structure.
To run the test suite:
```
pytest
```
Status
This package is under active development. Public APIs may evolve prior to the first stable release.
File details
Details for the file eb_evaluation-0.1.1.tar.gz.
File metadata
- Download URL: eb_evaluation-0.1.1.tar.gz
- Upload date:
- Size: 47.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `b92680d20737e7a2f33cb9b0da24119692a84470ebb87f2ca6ef15cb2fddff66` |
| MD5 | `822497e373a0a58ce8ea40a67936cd74` |
| BLAKE2b-256 | `ce1afb910a36e1bd61d4c2266dbd01382da360d72524e0c2268e149d9fced041` |
File details
Details for the file eb_evaluation-0.1.1-py3-none-any.whl.
File metadata
- Download URL: eb_evaluation-0.1.1-py3-none-any.whl
- Upload date:
- Size: 62.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `92984c5d8080a3d5b5ad8c8e1242abf6b851b8f0a836a04a1e0cc957ec57ada9` |
| MD5 | `a13cff4ca4c87ce7d83ec37844bb9062` |
| BLAKE2b-256 | `2aa4343b42939ed5954ed13d1504318d5eadc38a2914aea8ca8864bfd9e2151d` |