
zerobench

Zero-overhead Python Benchmarking


zerobench is a Python benchmarking library with zero overhead, designed for multidimensional performance analysis.

Features

  • Context manager API: Benchmark any code block with with bench(...): ...
  • Multidimensional: Tag benchmarks with arbitrary keyword arguments
  • Zero overhead: The benchmarked code is passed directly to timeit.Timer, so no wrapper function adds per-call cost
  • Auto-scaling: Automatically determines the number of iterations for reliable measurements
  • Multiple exports: CSV, Parquet, Markdown
  • Plotting: Built-in visualization with matplotlib
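
The zero-overhead point refers to how timeit compiles a source string straight into its timing loop. A stdlib-only illustration (plain timeit, not zerobench code) of the per-iteration cost that a wrapper function would otherwise add:

```python
import timeit

data = list(range(100))

# Statement compiled directly into the timing loop (what timeit.Timer
# does with source strings): no extra call per iteration.
direct = timeit.Timer('len(data)', globals={'data': data})

# The same work behind a wrapper function: every iteration now pays
# an additional Python function call.
def wrapped():
    len(data)

indirect = timeit.Timer(wrapped)

n = 200_000
t_direct = min(direct.repeat(repeat=3, number=n))
t_indirect = min(indirect.repeat(repeat=3, number=n))
print(f'direct: {t_direct / n * 1e9:.1f} ns/loop, '
      f'wrapped: {t_indirect / n * 1e9:.1f} ns/loop')
```

For a statement as cheap as len(data), the wrapper's call overhead dominates the measurement, which is why avoiding it matters.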

Quick Example

from zerobench import Benchmark

bench = Benchmark()

for n in [100, 1000, 10000]:
    data = list(range(n))
    with bench(method='sum', n=n):
        sum(data)
    with bench(method='len', n=n):
        len(data)

Output:

method=sum, n=100: 0.579 us ± 2.38 ns (median ± std. dev. of 7 runs, 500000 loops each)
method=len, n=100: 0.020 us ± 0.45 ns (median ± std. dev. of 7 runs, 20000000 loops each)
method=sum, n=1000: 5.369 us ± 44.70 ns (median ± std. dev. of 7 runs, 50000 loops each)
method=len, n=1000: 0.029 us ± 0.09 ns (median ± std. dev. of 7 runs, 10000000 loops each)
method=sum, n=10000: 53.728 us ± 69.86 ns (median ± std. dev. of 7 runs, 5000 loops each)
method=len, n=10000: 0.029 us ± 0.25 ns (median ± std. dev. of 7 runs, 10000000 loops each)

Printing the benchmark object renders the collected results as a table:

print(bench)
┌────────┬────────┬─────────────────────────────────┐
│ method ┆ n      ┆ execution_times                 │
╞════════╪════════╪═════════════════════════════════╡
│ sum    ┆ 100    ┆ [0.577805, 0.57815, … 0.581231… │
│ len    ┆ 100    ┆ [0.019207, 0.019278, … 0.01958… │
│ sum    ┆ 1_000  ┆ [5.417795, 5.33863, … 5.35146]  │
│ len    ┆ 1_000  ┆ [0.028898, 0.030144, … 0.03007… │
│ sum    ┆ 10_000 ┆ [53.743199, 53.664567, … 53.72… │
│ len    ┆ 10_000 ┆ [0.028857, 0.028911, … 0.02942… │
└────────┴────────┴─────────────────────────────────┘
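
Because each row keeps the raw execution_times samples, derived statistics are easy to compute. A sketch using synthetic records shaped like the table above (field names follow the table; the values are illustrative, not real measurements):

```python
import statistics

# Synthetic records shaped like the results table above
# (method, n, execution_times in microseconds); values are illustrative.
records = [
    {'method': 'sum', 'n': 100, 'execution_times': [0.578, 0.578, 0.581]},
    {'method': 'len', 'n': 100, 'execution_times': [0.019, 0.019, 0.020]},
]

# Median execution time per (method, n) combination.
medians = {
    (r['method'], r['n']): statistics.median(r['execution_times'])
    for r in records
}

# How much faster is len than sum at n=100?
speedup = medians[('sum', 100)] / medians[('len', 100)]
```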

JAX Support

zerobench automatically detects JAX arrays and optimizes benchmarking accordingly:

import jax.numpy as jnp
from zerobench import Benchmark

bench = Benchmark()
x = jnp.ones(1000)
y = jnp.ones(1000)

with bench(method='add'):
    x + y

When JAX code is detected, zerobench:

  1. Wraps the code in a JIT-compiled function to measure optimized execution
  2. Separates compilation from execution by reporting compilation_time separately
  3. Captures the StableHLO representation of the compiled function in the hlo field
  4. Uses jax.block_until_ready to ensure accurate timing of asynchronous operations
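
The compile/execute separation described above can be reproduced with JAX's ahead-of-time API. A minimal sketch using plain jax.jit (not zerobench internals):

```python
import time
import jax
import jax.numpy as jnp

def add(x, y):
    return x + y

x = jnp.ones(1000)
y = jnp.ones(1000)

lowered = jax.jit(add).lower(x, y)  # trace and lower to StableHLO
hlo_text = lowered.as_text()        # textual representation of the module

t0 = time.perf_counter()
compiled = lowered.compile()        # compilation, timed on its own
compilation_time = time.perf_counter() - t0

t0 = time.perf_counter()
out = compiled(x, y)
out.block_until_ready()             # wait for the async dispatch to finish
execution_time = time.perf_counter() - t0
```

Without block_until_ready, the timer would only measure the asynchronous dispatch, not the actual device execution.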

The benchmark report includes additional fields for JAX:

  • first_execution_time: Time of the initial (possibly uncompiled) execution
  • compilation_time: Time to lower and compile the function
  • hlo: The StableHLO text representation of the compiled computation

report = bench.to_dicts()[0]
print(report['compilation_time'])  # e.g., 12345.67 ns
print(report['hlo'][:100])         # HLO module "jit___bench_func" ...

Installation

pip install zerobench

Export and Visualization

# Export results
bench.write_csv('results.csv')
bench.write_parquet('results.parquet')
bench.write_markdown('results.md')

# Plot results
bench.plot()
bench.write_plot('results.pdf')
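
The exported files are plain tabular data, so they can be post-processed with standard tools. A stdlib sketch of reading such a CSV back (the file contents and the median_us column are synthesized here for illustration; the actual export schema follows the results table):

```python
import csv
import io

# Stand-in for the contents of a results.csv export (illustrative
# values; the median_us column name is hypothetical).
csv_text = """method,n,median_us
sum,100,0.579
len,100,0.020
sum,1000,5.369
len,1000,0.029
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
fastest = min(rows, key=lambda r: float(r['median_us']))
```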

Configuration

Benchmark(
    repeat=7,                    # Number of measurement repetitions
    min_duration_of_repeat=0.2,  # Minimum duration per repeat (seconds)
    time_units='ns',             # Time units: 'ns', 'us', 'ms', 's'
)
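
These parameters interact with auto-scaling: the loop count is grown until one repeat runs for at least min_duration_of_repeat seconds. An illustrative stdlib sketch of such a scheme, similar to timeit.Timer.autorange (not necessarily zerobench's exact algorithm):

```python
import timeit

timer = timeit.Timer('sum(range(100))')
min_duration_of_repeat = 0.2  # seconds, matching the default above

# Double the loop count until a single repeat is long enough to
# measure reliably; the timer's resolution then becomes negligible.
number = 1
while timer.timeit(number) < min_duration_of_repeat:
    number *= 2

per_loop_ns = timer.timeit(number) / number * 1e9
```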

License

MIT
