
Energy forecast benchmarking toolkit.

Project description

Energy Forecast Benchmark Toolkit


Energy Forecast Benchmark Toolkit is a Python project that aims to provide common tools to benchmark forecast models.

Table of Contents

  • Installation
  • Usage
  • Benchmarking
  • Contributing
  • License

Installation

Use the package manager pip to install enfobench.

pip install enfobench

Usage

Load your own data and create a dataset.

import pandas as pd

from enfobench.dataset import Dataset

# Load your datasets
data = pd.read_csv("../path/to/your/data.csv", parse_dates=['timestamp'], index_col='timestamp')

# Create a target DataFrame that has a pd.DatetimeIndex and a column named 'y'
target = data.loc[:, ['target_column']].rename(columns={'target_column': 'y'})

# Add covariates that can be used as past covariates. This also has to have a pd.DatetimeIndex
past_covariates = data.loc[:, ['covariate_1', 'covariate_2']]

# As sometimes it can be challenging to access historical forecasts to use future covariates, 
# the package also has a helper function to create perfect historical forecasts from the past covariates.
from enfobench.dataset.utils import create_perfect_forecasts_from_covariates

# The example below creates simulated perfect historical forecasts with a horizon of 24 hours and a step of 1 day.
future_covariates = create_perfect_forecasts_from_covariates(
    past_covariates,
    horizon=pd.Timedelta("24 hours"),
    step=pd.Timedelta("1 day"),
)

dataset = Dataset(
    target=target,
    past_covariates=past_covariates,
    future_covariates=future_covariates,
)

The package integrates with the HuggingFace dataset 'attila-balint-kul/electricity-demand'. To use it, download all the files from the dataset's data folder to your computer.

from enfobench.dataset import Dataset, DemandDataset

# Load the dataset from the folder that you downloaded the files to.
ds = DemandDataset("/path/to/the/dataset/folder/that/contains/all/subsets")

# List all meter ids
ds.metadata_subset.list_unique_ids()

# Get dataset for a specific meter id
target, past_covariates, metadata = ds.get_data_by_unique_id("unique_id_of_the_meter")

# Create a dataset
dataset = Dataset(
    target=target,
    past_covariates=past_covariates,
    future_covariates=None,
    metadata=metadata
)

You can perform cross-validation locally on any model that adheres to the enfobench.Model protocol.

import pandas as pd

from enfobench.evaluation import cross_validate
from my_models import MyModel  # placeholder: your own implementation of the enfobench.Model protocol

# Instantiate your model
model = MyModel()

# Run cross validation on your model
cv_results = cross_validate(
    model,
    dataset,
    start_date=pd.Timestamp("2018-01-01"),
    end_date=pd.Timestamp("2018-01-31"),
    horizon=pd.Timedelta("24 hours"),
    step=pd.Timedelta("1 day"),
)
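
For context, a model that satisfies the enfobench.Model protocol is a plain Python object exposing a forecasting method that cross_validate can call. The sketch below is only an illustration under assumptions: the class name, the forecast signature, and the returned column name ('yhat') are assumptions rather than taken from the package's documentation, so check the enfobench API reference for the exact protocol.

import pandas as pd


class MyNaiveModel:
    """Illustrative naive (last-value) forecaster with an assumed protocol shape.

    The argument names and return format below are assumptions; consult the
    enfobench documentation for the exact Model protocol.
    """

    def forecast(
        self,
        horizon: int,
        history: pd.DataFrame,
        past_covariates: pd.DataFrame | None = None,
        future_covariates: pd.DataFrame | None = None,
        **kwargs,
    ) -> pd.DataFrame:
        # Repeat the last observed value of the target over the horizon.
        last_value = history["y"].iloc[-1]
        freq = pd.infer_freq(history.index) or "1h"  # fall back to hourly if the frequency cannot be inferred
        index = pd.date_range(
            start=history.index[-1],
            periods=horizon + 1,
            freq=freq,
        )[1:]  # skip the last historical timestamp itself
        return pd.DataFrame({"yhat": last_value}, index=index)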

You can use the same cross-validation interface with your model served behind an API. To make this simple, both a client and a server are provided.

import pandas as pd
from enfobench.evaluation import cross_validate, ForecastClient

# Create a client that connects to your model served behind an API
client = ForecastClient(host='localhost', port=3000)

# Run cross validation on your model
cv_results = cross_validate(
    client,
    dataset,
    start_date=pd.Timestamp("2018-01-01"),
    end_date=pd.Timestamp("2018-01-31"),
    horizon=pd.Timedelta("24 hours"),
    step=pd.Timedelta("1 day"),
)

The package also collects common metrics used in forecasting.

from enfobench.evaluation import evaluate_metrics

from enfobench.evaluation.metrics import (
    mean_bias_error,
    mean_absolute_error,
    mean_squared_error,
    root_mean_squared_error,
)

# Simply pass in the cross validation results and the metrics you want to evaluate.
metrics = evaluate_metrics(
    cv_results,
    metrics={
        "mean_bias_error": mean_bias_error,
        "mean_absolute_error": mean_absolute_error,
        "mean_squared_error": mean_squared_error,
        "root_mean_squared_error": root_mean_squared_error,
    },
)
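
For reference, these metrics follow their standard textbook definitions. The standalone NumPy sketch below is only illustrative (it is not the package's implementation, and the sign convention used for the bias error is an assumption):

import numpy as np


def mean_bias_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Positive values indicate over-forecasting under this (assumed) sign convention.
    return float(np.mean(y_pred - y_true))


def mean_absolute_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean(np.abs(y_pred - y_true)))


def mean_squared_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean((y_pred - y_true) ** 2))


def root_mean_squared_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))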

To serve your model behind an API, you can use the built-in server factory.

import uvicorn

from enfobench.evaluation.server import server_factory
from my_models import MyModel  # placeholder: your own model implementation

model = MyModel()

# Create a server that serves your model
server = server_factory(model)
uvicorn.run(server, port=3000)
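
Once the server is running, the ForecastClient from the earlier example (host='localhost', port=3000) can be pointed at it to run the same cross-validation against the served model.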

Benchmarking

The package also provides a benchmarking framework that can be used to benchmark your model against other models. There are some example models in this repository.

The results of the benchmarking are openly accessible here.

Contributing

Contributions and feedback are welcome! For major changes, please open an issue first to discuss what you would like to change.

If you'd like to contribute to the project, please follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes and commit them.
  4. Push your changes to your forked repository.
  5. Submit a pull request describing your changes.

License

BSD 2-Clause License

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

enfobench-0.3.3.tar.gz (22.9 kB)

Uploaded Source

Built Distribution

enfobench-0.3.3-py3-none-any.whl (16.8 kB)

Uploaded Python 3

File details

Details for the file enfobench-0.3.3.tar.gz.

File metadata

  • Download URL: enfobench-0.3.3.tar.gz
  • Upload date:
  • Size: 22.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for enfobench-0.3.3.tar.gz
Algorithm Hash digest
SHA256 ea09f530b504467053b134b4fda0c2e8ebcde6dc44abd82c3d7de92a4789c681
MD5 f8e47de0e386eed7c1494218afec0ce0
BLAKE2b-256 59433f42b71b23229fce7c726cd9e93243ef840be06fbe7b49652a915eaff8fa

See more details on using hashes here.

File details

Details for the file enfobench-0.3.3-py3-none-any.whl.

File metadata

  • Download URL: enfobench-0.3.3-py3-none-any.whl
  • Upload date:
  • Size: 16.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for enfobench-0.3.3-py3-none-any.whl
Algorithm Hash digest
SHA256 417740ec192fe1f0bec813472652006c72d0e83453b525bd26118aaebf27b341
MD5 caf45c75e3bfd5de62725f83b5c9a23c
BLAKE2b-256 20c6a2559a2d634e08fdd23a605ded4923d408910a57a082acf50bebd541b971

See more details on using hashes here.
