A multidimensional benchmarking library with minimal overhead

zerobench

Zero-overhead Python Benchmarking

zerobench is a Python benchmarking library with zero overhead, designed for multidimensional performance analysis.

Features

  • Context manager API: Benchmark any code block with with bench(...): ...
  • Multidimensional: Tag benchmarks with arbitrary keyword arguments
  • Zero overhead: Code is passed directly to timeit.Timer, no wrapper function
  • Auto-scaling: Automatically determines the number of iterations for reliable measurements
  • Multiple exports: CSV, Parquet, Markdown
  • Plotting: Built-in visualization with matplotlib

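The zero-overhead and auto-scaling behavior builds on the standard library's timeit machinery. A minimal sketch of the same idea using timeit directly (an illustration of the mechanism, not zerobench's actual internals):

```python
import timeit

# The statement is compiled into a generated loop by timeit.Timer,
# so no per-call wrapper function is included in the measurement.
timer = timeit.Timer('sum(data)', setup='data = list(range(100))')

# autorange() grows the loop count (1, 2, 5, 10, ...) until one
# repeat takes at least 0.2 seconds, giving a stable measurement.
loops, total = timer.autorange()
per_call_us = total / loops * 1e6
print(f'{loops} loops, {per_call_us:.3f} us per call')
```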
Quick Example

from zerobench import Benchmark

bench = Benchmark()

for n in [100, 1000, 10000]:
    data = list(range(n))
    with bench(method='sum', n=n):
        sum(data)
    with bench(method='len', n=n):
        len(data)

Output:

method=sum, n=100: 0.579 us ± 2.38 ns (median ± std. dev. of 7 runs, 500000 loops each)
method=len, n=100: 0.020 us ± 0.45 ns (median ± std. dev. of 7 runs, 20000000 loops each)
method=sum, n=1000: 5.369 us ± 44.70 ns (median ± std. dev. of 7 runs, 50000 loops each)
method=len, n=1000: 0.029 us ± 0.09 ns (median ± std. dev. of 7 runs, 10000000 loops each)
method=sum, n=10000: 53.728 us ± 69.86 ns (median ± std. dev. of 7 runs, 5000 loops each)
method=len, n=10000: 0.029 us ± 0.25 ns (median ± std. dev. of 7 runs, 10000000 loops each)

Printing the benchmark displays the collected measurements as a table:

print(bench)

┌────────┬────────┬─────────────────────────────────┐
│ method ┆ n      ┆ execution_times                 │
╞════════╪════════╪═════════════════════════════════╡
│ sum    ┆ 100    ┆ [0.577805, 0.57815, … 0.581231… │
│ len    ┆ 100    ┆ [0.019207, 0.019278, … 0.01958… │
│ sum    ┆ 1_000  ┆ [5.417795, 5.33863, … 5.35146]  │
│ len    ┆ 1_000  ┆ [0.028898, 0.030144, … 0.03007… │
│ sum    ┆ 10_000 ┆ [53.743199, 53.664567, … 53.72… │
│ len    ┆ 10_000 ┆ [0.028857, 0.028911, … 0.02942… │
└────────┴────────┴─────────────────────────────────┘
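The summary line (median ± std. dev.) can be reproduced from the raw execution_times column with the standard library. A sketch using hypothetical sample timings in microseconds:

```python
import statistics

# Hypothetical per-repeat timings (us), standing in for one row of
# the execution_times column above
execution_times = [0.577805, 0.578150, 0.578441, 0.579002,
                   0.579377, 0.580116, 0.581231]

median = statistics.median(execution_times)         # central value (us)
std_ns = statistics.stdev(execution_times) * 1000   # spread, us -> ns
print(f'{median:.3f} us ± {std_ns:.2f} ns '
      f'(median ± std. dev. of {len(execution_times)} runs)')
```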

JAX Support

zerobench automatically detects JAX arrays in the benchmarked block and adapts its measurement strategy accordingly:

import jax.numpy as jnp
from zerobench import Benchmark

bench = Benchmark()
x = jnp.ones(1000)
y = jnp.ones(1000)

with bench(method='add'):
    x + y

When JAX code is detected, zerobench:

  1. Wraps the code in a JIT-compiled function to measure optimized execution
  2. Separates compilation from execution by reporting compilation_time separately
  3. Captures the StableHLO representation of the compiled function in the hlo field
  4. Uses jax.block_until_ready to ensure accurate timing of asynchronous operations

The benchmark report includes additional fields for JAX:

  • first_execution_time: Time of the initial (possibly uncompiled) execution
  • compilation_time: Time to lower and compile the function
  • hlo: The StableHLO text representation of the compiled computation
These fields are available on the exported records:

report = bench.to_dicts()[0]
print(report['compilation_time'])  # e.g., 12345.67 ns
print(report['hlo'][:100])         # HLO module "jit___bench_func" ...
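The split between first_execution_time and steady-state timing can be illustrated without JAX. A generic sketch of the measurement pattern using plain time.perf_counter (not zerobench's implementation), where a one-time setup cost stands in for JIT compilation:

```python
import time

def measure(fn):
    """Time the first call separately from subsequent warm calls."""
    t0 = time.perf_counter()
    fn()
    first_execution_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(100):
        fn()
    steady_per_call = (time.perf_counter() - t0) / 100
    return first_execution_time, steady_per_call

# A function with one-time setup cost, standing in for JIT compilation
_cache = {}
def work():
    if 'table' not in _cache:                # only the first call pays this
        _cache['table'] = [i * i for i in range(100_000)]
    return sum(_cache['table'][:10])

first, steady = measure(work)
print(first > steady)  # the first call includes the setup cost
```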

Installation

pip install zerobench

Export and Visualization

# Export results
bench.write_csv('results.csv')
bench.write_parquet('results.parquet')
bench.write_markdown('results.md')

# Plot results
bench.plot()
bench.write_plot('results.pdf')

Configuration

Benchmark(
    repeat=7,                    # Number of measurement repetitions
    min_duration_of_repeat=0.2,  # Minimum duration per repeat (seconds)
    time_units='ns',             # Time units: 'ns', 'us', 'ms', 's'
)
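min_duration_of_repeat drives the auto-scaled loop count: the count grows through a 1-2-5 sequence until one repeat lasts at least that long (the policy of timeit's autorange; assuming zerobench follows it, which the "loops each" figures above are consistent with). For the sum at n=100:

```python
per_call = 0.579e-6   # per-call time from the example output (seconds)
min_duration = 0.2    # default min_duration_of_repeat (seconds)

# Grow the loop count through 1, 2, 5, 10, 20, 50, ... until one
# repeat would take at least min_duration
loops = None
magnitude = 1
while loops is None:
    for factor in (1, 2, 5):
        candidate = factor * magnitude
        if candidate * per_call >= min_duration:
            loops = candidate
            break
    magnitude *= 10

print(loops)  # 500000, matching '500000 loops each' in the output
```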

License

MIT

Download files

Download the file for your platform.

Source Distribution

zerobench-0.5.tar.gz (217.3 kB)

Uploaded Source

Built Distribution


zerobench-0.5-py3-none-any.whl (13.1 kB)

Uploaded Python 3

File details

Details for the file zerobench-0.5.tar.gz.

File metadata

  • Download URL: zerobench-0.5.tar.gz
  • Size: 217.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for zerobench-0.5.tar.gz
Algorithm Hash digest
SHA256 68990f51ad7269237145a23f145221bda9e4985ab8d900209c8798f5b65493bc
MD5 fd19906ee888c1fb8bc612597db53468
BLAKE2b-256 62ed708d02d5b7fbad4e7d67f4a7b242e557c8372e2bbc68630955c2b104d383


Provenance

The following attestation bundles were made for zerobench-0.5.tar.gz:

Publisher: release.yml on pchanial/zerobench

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file zerobench-0.5-py3-none-any.whl.

File metadata

  • Download URL: zerobench-0.5-py3-none-any.whl
  • Size: 13.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for zerobench-0.5-py3-none-any.whl
Algorithm Hash digest
SHA256 d65fd802927c1ebd63fd2861499077cef0a8136c0df6d111eb5bab30927e9d81
MD5 1f7d0bd183fcef699baa73c432d09655
BLAKE2b-256 1f33f11d6f86b977ed7238978477df43a030977583125e65b2ab362271794b7c


Provenance

The following attestation bundles were made for zerobench-0.5-py3-none-any.whl:

Publisher: release.yml on pchanial/zerobench

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
