
zerobench

Zero-overhead Python Benchmarking


zerobench is a Python benchmarking library with zero overhead, designed for multidimensional performance analysis.

Features

  • Context manager API: Benchmark any code block with with bench(...): ...
  • Multidimensional: Tag benchmarks with arbitrary keyword arguments
  • Zero overhead: Code is passed directly to timeit.Timer, no wrapper function
  • Auto-scaling: Automatically determines the number of iterations for reliable measurements
  • Multiple exports: CSV, Parquet, Markdown
  • Plotting: Built-in visualization with matplotlib
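The auto-scaling behaviour described above can be illustrated with the standard library's `timeit.Timer.autorange`, which implements the same technique (whether zerobench calls `autorange` itself is an assumption; this sketch only shows the mechanism):

```python
import timeit

# autorange() tries loop counts from the sequence 1, 2, 5, 10, 20, 50, ...
# until one run of the statement takes at least 0.2 seconds in total.
timer = timeit.Timer('sum(data)', setup='data = list(range(100))')
loops, total = timer.autorange()
print(loops, total / loops)  # chosen loop count and per-iteration time (s)
```

Because the statement is handed straight to `timeit.Timer` as a string, no Python-level wrapper function is called on each iteration, which is what "zero overhead" refers to.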

Quick Example

from zerobench import Benchmark

bench = Benchmark()

for n in [100, 1000, 10000]:
    data = list(range(n))
    with bench(method='sum', n=n):
        sum(data)
    with bench(method='len', n=n):
        len(data)

Output:

method=sum, n=100: 0.579 us ± 2.38 ns (median ± std. dev. of 7 runs, 500000 loops each)
method=len, n=100: 0.020 us ± 0.45 ns (median ± std. dev. of 7 runs, 20000000 loops each)
method=sum, n=1000: 5.369 us ± 44.70 ns (median ± std. dev. of 7 runs, 50000 loops each)
method=len, n=1000: 0.029 us ± 0.09 ns (median ± std. dev. of 7 runs, 10000000 loops each)
method=sum, n=10000: 53.728 us ± 69.86 ns (median ± std. dev. of 7 runs, 5000 loops each)
method=len, n=10000: 0.029 us ± 0.25 ns (median ± std. dev. of 7 runs, 10000000 loops each)
Printing the benchmark object shows the collected timings as a table:

print(bench)
┌────────┬────────┬─────────────────────────────────┐
│ method ┆ n      ┆ execution_times                 │
╞════════╪════════╪═════════════════════════════════╡
│ sum    ┆ 100    ┆ [0.577805, 0.57815, … 0.581231… │
│ len    ┆ 100    ┆ [0.019207, 0.019278, … 0.01958… │
│ sum    ┆ 1_000  ┆ [5.417795, 5.33863, … 5.35146]  │
│ len    ┆ 1_000  ┆ [0.028898, 0.030144, … 0.03007… │
│ sum    ┆ 10_000 ┆ [53.743199, 53.664567, … 53.72… │
│ len    ┆ 10_000 ┆ [0.028857, 0.028911, … 0.02942… │
└────────┴────────┴─────────────────────────────────┘

JAX Support

zerobench automatically detects JAX arrays and optimizes benchmarking accordingly:

import jax.numpy as jnp
from zerobench import Benchmark

bench = Benchmark()
x = jnp.ones(1000)
y = jnp.ones(1000)

with bench(method='add'):
    x + y

When JAX code is detected, zerobench:

  1. Wraps the code in a JIT-compiled function to measure optimized execution
  2. Separates compilation from execution by reporting compilation_time separately
  3. Captures the StableHLO representation of the compiled function in the hlo field
  4. Uses jax.block_until_ready to ensure accurate timing of asynchronous operations

The benchmark report includes additional fields for JAX:

  • first_execution_time: Time of the initial (possibly uncompiled) execution
  • compilation_time: Time to lower and compile the function
  • hlo: The StableHLO text representation of the compiled computation

report = bench.to_dicts()[0]
print(report['compilation_time'])  # e.g., 12345.67 ns
print(report['hlo'][:100])         # HLO module "jit___bench_func" ...
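The split between first_execution_time and steady-state timings can be illustrated in plain Python (a stdlib analogy, not zerobench's implementation): a function that pays a one-time setup cost on its first call behaves like a JIT-compiled function, and comparing the first call with later calls isolates that cost.

```python
import time

# Stdlib analogy (not zerobench's implementation): a function whose first
# call pays a one-time setup cost, similar to JIT compilation in JAX.
_cache = {}

def warm_sum(n):
    if n not in _cache:          # one-time "compilation" step
        _cache[n] = list(range(n))
    return sum(_cache[n])        # steady-state work

t0 = time.perf_counter()
warm_sum(100_000)                # first execution: setup + run
first_execution_time = time.perf_counter() - t0

t0 = time.perf_counter()
warm_sum(100_000)                # later executions: run only
execution_time = time.perf_counter() - t0

# The difference approximates the one-time setup cost, analogous to
# reporting compilation_time separately from execution_times.
setup_cost = first_execution_time - execution_time
print(first_execution_time, execution_time)
```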

Installation

pip install zerobench

Export and Visualization

# Export results
bench.write_csv('results.csv')
bench.write_parquet('results.parquet')
bench.write_markdown('results.md')

# Plot results
bench.plot()
bench.write_plot('results.pdf')

Configuration

Benchmark(
    repeat=7,                    # Number of measurement repetitions
    min_duration_of_repeat=0.2,  # Minimum duration per repeat (seconds)
    time_units='ns',             # Time units: 'ns', 'us', 'ms', 's'
)
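As a rough sanity check on how min_duration_of_repeat drives the loop count (assuming each repeat is scaled to last at least that long, which the Quick Example output suggests):

```python
# Back-of-the-envelope check (assumption: the loop count is chosen so that
# one repeat lasts at least min_duration_of_repeat seconds).
min_duration_of_repeat = 0.2   # seconds, the default shown above
per_iteration = 0.579e-6       # sum(data) at n=100, from the Quick Example
min_loops = min_duration_of_repeat / per_iteration
print(round(min_loops))        # roughly 345,000 iterations needed
```

The 500000 loops reported for that case would be the next step up on the 1-2-5 progression that timeit.autorange uses, if zerobench scales loops the same way (an assumption; the exact strategy is not documented here).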

License

MIT

