
A package for benchmarking the performance of arbitrary functions


Bencher


Install

pip install holobench

Intro

Bencher is a tool that makes it easy to benchmark the interaction between your algorithm's input parameters and its performance on a set of metrics. It samples the cartesian product of a set of input variables.

Parameters for bencher are defined with the param library as a config class, with extra metadata that describes the bounds of the search space you want to measure. You must define a benchmarking function that accepts an instance of the config class and returns a dictionary mapping string metric names to float values.
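As a rough sketch of that contract (illustrative only: the class, field, and function names here are hypothetical, and a plain dataclass stands in for a real param-based config):

```python
import random
import time
from dataclasses import dataclass, field


# Hypothetical config class: each field carries the bounds/options of the
# search space as metadata, mirroring what a param-based config describes.
@dataclass
class SortConfig:
    n_items: int = field(default=100, metadata={"bounds": (10, 10_000)})
    algorithm: str = field(default="merge", metadata={"options": ["merge", "quick"]})


def benchmark(cfg: SortConfig) -> dict[str, float]:
    """Benchmark function: accepts a config instance, returns str -> float metrics."""
    data = [random.random() for _ in range(cfg.n_items)]
    start = time.perf_counter()
    sorted(data)  # stand-in for the algorithm under test
    return {"runtime_s": time.perf_counter() - start, "n_items": float(cfg.n_items)}
```

Bencher would call a function like `benchmark` once per point in the search space defined by the field metadata.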

Parameters are benchmarked by passing in a list of N parameters, and an N-dimensional tensor is returned. You can optionally sample each point multiple times to get back a distribution, and also track its value over time. By default the data is plotted automatically based on the types of parameters you are sampling (e.g. continuous, discrete), but you can also pass in a callback to customize plotting.
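The shape of that result can be sketched in plain Python (hypothetical names; a nested list stands in for the real tensor): two input parameters with 3 and 2 sample points produce a 3x2 result, one axis per parameter.

```python
# Hypothetical sweep: one continuous and one discrete input parameter.
x_values = [0.0, 0.5, 1.0]
mode_values = ["fast", "accurate"]


def objective(x: float, mode: str) -> float:
    # Stand-in metric for the function under test.
    return x * (2.0 if mode == "accurate" else 1.0)


# Enumerate the cartesian product and fill a 3x2 nested-list "tensor".
results = [[objective(x, m) for m in mode_values] for x in x_values]
# results[i][j] holds the metric for (x_values[i], mode_values[j])
```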

The data is stored in a persistent database so that past performance is tracked.

Assumptions

The input types should be one of the basic datatypes (bool, int, float, str, enum, datetime) so that the data can be easily hashed, cached, stored in the database, and processed with seaborn and xarray plotting functions. You can use class inheritance to define hierarchical parameter configuration classes that can be reused in bigger configuration classes.

Bencher is designed to work with pure functions that are stochastic but have no side effects. It assumes that when the objective function is given the same inputs, it returns the same output plus or minus random noise. The function must be called multiple times to get a good statistical distribution of its output, so each call must not be influenced by previous calls or the results will be corrupted.
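A minimal sketch of why repeats matter for such a function (hypothetical objective; the seeded generator is only there to make the sketch reproducible):

```python
import random
import statistics


def noisy_objective(x: float, rng: random.Random) -> float:
    # Pure up to noise: the same x always gives the same mean,
    # plus zero-mean random jitter.
    return x ** 2 + rng.gauss(0.0, 0.01)


rng = random.Random(42)
samples = [noisy_objective(2.0, rng) for _ in range(50)]
mean = statistics.mean(samples)       # converges on the true value, 4.0
spread = statistics.stdev(samples)    # reflects the injected noise
```

If the function had side effects (e.g. it mutated shared state between calls), the samples would no longer be independent draws and `mean` and `spread` would be meaningless.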

Pseudocode of bencher

enumerate a list of all input parameter combinations
for each set of input parameters:
    pass the inputs to the objective function and store the result in the N-D array

    get a unique hash for the set of input parameters
    look up previous results for that hash
    if they exist:
        load the historical data
        combine the latest data with the historical data

    store the results using the input hash as the key
deduce the type of plot from the input and output types
return the data and plot
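The caching part of the loop above can be sketched as follows (illustrative only: a plain dict stands in for bencher's persistent database, and the hashing scheme is a hypothetical one built on the basic datatypes mentioned earlier):

```python
import hashlib
import json
from itertools import product

db: dict[str, list[float]] = {}  # stand-in for the persistent database


def input_hash(params: dict) -> str:
    """Stable, unique hash for a set of basic-datatype inputs."""
    return hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()


def objective(params: dict) -> float:
    # Stand-in objective function.
    return params["x"] * params["k"]


def run_sweep(x_values, k_values):
    results = {}
    for x, k in product(x_values, k_values):  # all input combinations
        params = {"x": x, "k": k}
        latest = objective(params)            # call the objective function
        key = input_hash(params)
        history = db.get(key, [])             # load historical data, if any
        history.append(latest)                # combine latest with historical
        db[key] = history                     # store under the input hash
        results[(x, k)] = latest
    return results
```

Running the same sweep again grows each history list, which is how past performance can be tracked over time.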

Resource Management with sampling_context

If your benchmark holds external resources (DB pools, GPU handles, simulators) you may want to release them before the interactive result viewer starts. Wrapping the entire bn.run() call in a with block won't work — the context stays open while the Panel/Bokeh server blocks:

# Anti-pattern: resources held during the entire viewing session
with gpu_context():
    bn.run(my_bench, show=True)

Instead, pass the context manager as sampling_context. It wraps only the sampling phase; its __exit__ runs before the server starts:

bn.run(my_bench, show=True, sampling_context=gpu_context())

save and publish still execute inside the context (during sampling), so results are persisted before the resource is released.
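The mechanics of this pattern can be sketched in isolation (hypothetical names throughout; this is not bencher's internal implementation, just the general shape): the runner enters the context for the sampling phase only, exits it, and only then blocks on the viewer.

```python
from contextlib import contextmanager

events = []


@contextmanager
def gpu_context():
    # Stand-in for a resource-holding context manager (DB pool, GPU handle, ...).
    events.append("acquire")
    try:
        yield
    finally:
        events.append("release")


def run(bench_fn, show=False, sampling_context=None):
    # Sketch of the pattern: the context wraps only the sampling phase.
    if sampling_context is not None:
        with sampling_context:
            bench_fn()              # sampling (and save/publish) happen here
    else:
        bench_fn()
    if show:
        events.append("serve")      # stand-in for the blocking Panel/Bokeh server


run(lambda: events.append("sample"), show=True, sampling_context=gpu_context())
# order of events: acquire -> sample -> release -> serve
```

The key point is that `__exit__` (appending "release") runs before the serve step, so the resource is gone by the time the viewer blocks.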

Demo

If you have pixi installed, you can run a demo example with:

pixi run demo

An example of the type of output bencher produces can be seen here:

https://blooop.github.io/bencher/

Examples

Most features are demonstrated in the auto-generated examples under bencher/example/generated/.

Run pixi run generate-docs to regenerate the full example gallery. Key sections include:

  • generated/N_float/ — Parameter sweeps with 0–3 float inputs, with/without repeats and over-time tracking
  • generated/plot_types/ — All supported plot types (scatter, line, heatmap, surface, etc.)
  • generated/result_types/ — Result types: images, videos, strings, booleans, paths, datasets
  • generated/composable_containers/ — Combining results with different composition strategies
  • generated/sampling/ — Custom values, levels, uniform, int vs float
  • generated/optimization/ — Single and multi-objective optimization with Optuna
  • generated/advanced/ — Time events, caching, aggregation over time
  • generated/regression/ — Performance regression detection
  • generated/statistics/ — Error bands, distributions, repeats comparison

A few hand-written examples remain for unique functionality:

  • example_simple_float.py — Minimal getting-started example
  • example_image.py / example_video.py — Image and video result types
  • example_self_benchmark.py — Bencher self-introspection
  • example_workflow.py — Multi-stage optimization workflow

Documentation

More documentation is needed for the examples and general workflow.
