
Electric Barometer: DataFrame-based evaluation utilities for CWSL and related metrics.


Electric Barometer Evaluation (eb-evaluation)


This repository contains the evaluation and orchestration layer of the Electric Barometer ecosystem.

eb-evaluation sits above the core metric implementations in eb-metrics and provides structured tools for applying Electric Barometer concepts to real-world forecasting workflows, including readiness adjustment, model comparison, sensitivity analysis, and DataFrame-based evaluation.

Conceptual definitions and theoretical framing for the evaluation logic are maintained in the companion research repository: eb-papers.


Naming convention

Electric Barometer packages follow standard Python packaging conventions:

  • Distribution names (used with pip install) use hyphens
    e.g. pip install eb-evaluation
  • Python import paths use underscores
    e.g. import eb_evaluation

This distinction is intentional and consistent across the Electric Barometer ecosystem.


Role Within Electric Barometer

Within the Electric Barometer ecosystem:

  • eb-papers defines concepts, frameworks, and meaning
  • eb-metrics implements individual metrics
  • eb-evaluation orchestrates how metrics are applied, combined, and interpreted

This repository focuses on evaluation logic, not raw metric computation.


What This Library Provides

  • Readiness adjustment logic for modifying evaluation outputs based on operational readiness signals
  • Model selection and comparison utilities grounded in asymmetric loss and readiness-aware metrics
  • Sensitivity and tolerance analysis for cost ratios and service thresholds
  • DataFrame-oriented evaluation tools for entity-level and time-based analysis
  • Feature engineering utilities to support evaluation pipelines
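As a rough illustration of the DataFrame-oriented, asymmetric-loss style of evaluation described above, here is a minimal pandas sketch. The function, column names, and cost weights are hypothetical and are not the eb-evaluation API:

```python
import pandas as pd

def asymmetric_loss(actual, forecast, under_cost=2.0, over_cost=1.0):
    """Penalise under-forecasting more heavily than over-forecasting.

    under_cost / over_cost plays the role of the cost ratio discussed
    in the EB framing; the 2:1 default here is an arbitrary example.
    """
    error = actual - forecast
    return under_cost * max(error, 0) + over_cost * max(-error, 0)

# Toy entity-level data; real inputs would come from a forecasting pipeline.
df = pd.DataFrame({
    "entity":   ["A", "A", "B", "B"],
    "actual":   [100, 120, 80, 90],
    "forecast": [ 90, 125, 85, 70],
})

# Per-row loss, then a per-entity summary of the kind an evaluation
# artifact might aggregate further.
df["loss"] = [
    asymmetric_loss(a, f) for a, f in zip(df["actual"], df["forecast"])
]
per_entity = df.groupby("entity")["loss"].mean()
print(per_entity)
```

The same per-entity table could then feed readiness adjustment or model comparison, which is the orchestration role this package occupies.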

Scope

This repository focuses on evaluation workflows and orchestration, not low-level metric definitions.

In scope:

  • Applying EB metrics to datasets and model outputs
  • Combining metrics into readiness-aware evaluation artifacts
  • Model comparison and selection logic
  • Sensitivity analysis and tolerance handling

Out of scope:

  • Metric definitions and loss formulations (see eb-metrics)
  • Conceptual frameworks and theory (see eb-papers)
  • Model training or forecasting algorithms
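The cost-ratio sensitivity analysis listed in scope can be sketched in plain Python. The names, data, and 2:1/5:1 ratios below are invented for illustration and do not reflect the eb-evaluation API; the point is only that the preferred model can flip as the under-forecast cost ratio changes:

```python
def total_loss(actuals, forecasts, ratio):
    """Total asymmetric loss where under-forecasting costs `ratio` x more."""
    loss = 0.0
    for a, f in zip(actuals, forecasts):
        err = a - f
        loss += ratio * err if err > 0 else -err
    return loss

actuals  = [100, 120, 80]
model_lo = [ 95, 115, 78]   # consistently under-forecasts a little
model_hi = [110, 130, 88]   # consistently over-forecasts

# Sweep the cost ratio and record which candidate wins at each setting.
winners = {}
for ratio in (1.0, 2.0, 5.0):
    lo = total_loss(actuals, model_lo, ratio)
    hi = total_loss(actuals, model_hi, ratio)
    winners[ratio] = "model_lo" if lo <= hi else "model_hi"
print(winners)
```

At low ratios the slightly-under-forecasting model wins; once under-forecasting becomes expensive enough, the over-forecasting model is preferred, which is exactly the kind of crossover a tolerance analysis is meant to surface.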

Installation

The package is installable from PyPI:

pip install eb-evaluation

For development or local use:

pip install -e .

Package Structure

The repository follows a modern Python package layout:

eb-evaluation/
├── src/eb_evaluation/
│   ├── adjustment/        # Readiness and evaluation adjustments
│   ├── dataframe/         # DataFrame-based evaluation utilities
│   ├── features/          # Feature engineering helpers
│   ├── model_selection/   # Model comparison and selection logic
│   └── utils/             # Shared validation and helpers
│
├── tests/                  # Unit tests mirroring package structure
├── pyproject.toml          # Build and dependency configuration
├── README.md               # Project documentation
└── LICENSE                 # BSD-3-Clause license

Relationship to Other EB Repositories

  • eb-papers
    Source of truth for conceptual definitions and evaluation philosophy.

  • eb-metrics
    Provides the metric implementations used during evaluation.

  • eb-evaluation
    Orchestrates evaluation workflows using adapted models.

  • eb-adapters
    Ensures heterogeneous models can be evaluated consistently.

When discrepancies arise, conceptual intent in eb-papers should be treated as authoritative.


Development and Testing

Tests are located under the tests/ directory and mirror the package structure.

To run the test suite:

pytest

Status

This package is under active development. Public APIs may evolve prior to the first stable release.



Download files

Download the file for your platform.

Source Distribution

eb_evaluation-0.2.0.tar.gz (43.9 kB)


Built Distribution


eb_evaluation-0.2.0-py3-none-any.whl (57.3 kB)


File details

Details for the file eb_evaluation-0.2.0.tar.gz.

File metadata

  • Download URL: eb_evaluation-0.2.0.tar.gz
  • Upload date:
  • Size: 43.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.3

File hashes

Hashes for eb_evaluation-0.2.0.tar.gz
  • SHA256: fe9b0b223967db40f1038ab6350ec964eaeb457803e429e16869ef2de9ddc29d
  • MD5: c8cdb41c4cbd6cdefca7354ae046ef47
  • BLAKE2b-256: aa79a1dcad720a234fcf65f337592ee755da777c4875c50df017557e273c26ab


File details

Details for the file eb_evaluation-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: eb_evaluation-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 57.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.3

File hashes

Hashes for eb_evaluation-0.2.0-py3-none-any.whl
  • SHA256: 33cc8fc1cd2c3f1055bf311698967e12a6e91fdac9e3650688caecdfdeb3918c
  • MD5: c17eb21b845b62c0079de417fa9c5c22
  • BLAKE2b-256: 4f6a42f35168e2fa6ff82eeecf800f76b9b7040ebdfc1ae722818e2d38328326

