A multidimensional benchmarking library with minimal overhead

zerobench

Zero-overhead Python Benchmarking

zerobench is a Python benchmarking library with zero overhead, designed for multidimensional performance analysis.

Features

  • Context manager API: Benchmark any code block with with bench(...): ...
  • Multidimensional: Tag benchmarks with arbitrary keyword arguments
  • Zero overhead: code is passed directly to timeit.Timer, so no wrapper function adds call overhead
  • Auto-scaling: Automatically determines the number of iterations for reliable measurements
  • Multiple exports: CSV, Parquet, Markdown
  • Plotting: Built-in visualization with matplotlib
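
The auto-scaling behavior resembles timeit.Timer.autorange from the standard library, which steps through a 1-2-5 sequence of loop counts until one full run takes at least 0.2 seconds (an illustrative comparison; zerobench's exact strategy may differ):

```python
import timeit

# autorange() tries loop counts 1, 2, 5, 10, 20, 50, ... until the
# total time for one run reaches at least 0.2 seconds.
timer = timeit.Timer('sum(data)', setup='data = list(range(100))')
number, elapsed = timer.autorange()

print(f'{number} loops took {elapsed:.3f} s')
print(f'per-call time: {elapsed / number * 1e6:.3f} us')
```

Scaling the loop count this way keeps the total measurement long enough for the timer's resolution to be negligible, even for sub-microsecond operations.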

Quick Example

from zerobench import Benchmark

bench = Benchmark()

for n in [100, 1000, 10000]:
    data = list(range(n))
    with bench(method='sum', n=n):
        sum(data)
    with bench(method='len', n=n):
        len(data)

Output:

method=sum, n=100: 0.579 us ± 2.38 ns (median ± std. dev. of 7 runs, 500000 loops each)
method=len, n=100: 0.020 us ± 0.45 ns (median ± std. dev. of 7 runs, 20000000 loops each)
method=sum, n=1000: 5.369 us ± 44.70 ns (median ± std. dev. of 7 runs, 50000 loops each)
method=len, n=1000: 0.029 us ± 0.09 ns (median ± std. dev. of 7 runs, 10000000 loops each)
method=sum, n=10000: 53.728 us ± 69.86 ns (median ± std. dev. of 7 runs, 5000 loops each)
method=len, n=10000: 0.029 us ± 0.25 ns (median ± std. dev. of 7 runs, 10000000 loops each)

Printing the benchmark object renders the collected measurements as a table:

print(bench)

┌────────┬────────┬─────────────────────────────────┐
│ method ┆ n      ┆ execution_times                 │
╞════════╪════════╪═════════════════════════════════╡
│ sum    ┆ 100    ┆ [0.577805, 0.57815, … 0.581231… │
│ len    ┆ 100    ┆ [0.019207, 0.019278, … 0.01958… │
│ sum    ┆ 1_000  ┆ [5.417795, 5.33863, … 5.35146]  │
│ len    ┆ 1_000  ┆ [0.028898, 0.030144, … 0.03007… │
│ sum    ┆ 10_000 ┆ [53.743199, 53.664567, … 53.72… │
│ len    ┆ 10_000 ┆ [0.028857, 0.028911, … 0.02942… │
└────────┴────────┴─────────────────────────────────┘
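
Because each row pairs its tags with the raw execution times, results can also be post-processed in plain Python. A minimal sketch, using hypothetical records shaped like the bench.to_dicts() output implied by the table above (the field names are taken from that table):

```python
from statistics import median

# Hypothetical records mimicking bench.to_dicts(): one dict per
# (method, n) combination, with raw per-loop times in microseconds.
records = [
    {'method': 'sum', 'n': 100, 'execution_times': [0.577805, 0.57815, 0.581231]},
    {'method': 'len', 'n': 100, 'execution_times': [0.019207, 0.019278, 0.01958]},
]

# Reduce each record to its median execution time, keyed by its tags.
medians = {(r['method'], r['n']): median(r['execution_times']) for r in records}
print(medians[('sum', 100)])  # middle value of the three sampled times
```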

JAX Support

zerobench automatically detects JAX arrays and adapts its benchmarking strategy accordingly:

import jax.numpy as jnp
from zerobench import Benchmark

bench = Benchmark()
x = jnp.ones(1000)
y = jnp.ones(1000)

with bench(method='add'):
    x + y

When JAX code is detected, zerobench:

  1. Wraps the code in a JIT-compiled function to measure optimized execution
  2. Separates compilation from execution by reporting compilation_time separately
  3. Captures the StableHLO representation of the compiled function in the hlo field
  4. Uses jax.block_until_ready to ensure accurate timing of asynchronous operations
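
The steps above can be reproduced by hand with JAX's ahead-of-time compilation API (a minimal sketch of what the library automates, assuming jax is installed; the lambda stands in for the benchmarked block):

```python
import time

import jax
import jax.numpy as jnp

x = jnp.ones(1000)
y = jnp.ones(1000)

# Steps 1-2: lower and compile explicitly, timing compilation on its own.
jitted = jax.jit(lambda a, b: a + b)
lowered = jitted.lower(x, y)

t0 = time.perf_counter()
compiled = lowered.compile()
compilation_time = time.perf_counter() - t0

# Step 3: the StableHLO text of the lowered computation.
hlo = lowered.as_text()

# Step 4: block_until_ready forces JAX's asynchronous dispatch to finish
# before the clock is read, so the measurement covers real execution.
t0 = time.perf_counter()
jax.block_until_ready(compiled(x, y))
execution_time = time.perf_counter() - t0

print(f'compilation: {compilation_time:.6f} s, execution: {execution_time:.6f} s')
```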

The benchmark report includes additional fields for JAX:

  • first_execution_time: Time of the initial (possibly uncompiled) execution
  • compilation_time: Time to lower and compile the function
  • hlo: The StableHLO text representation of the compiled computation

report = bench.to_dicts()[0]
print(report['compilation_time'])  # e.g., 12345.67 ns
print(report['hlo'][:100])         # HLO module "jit___bench_func" ...

Installation

pip install zerobench

Export and Visualization

# Export results
bench.write_csv('results.csv')
bench.write_parquet('results.parquet')
bench.write_markdown('results.md')

# Plot results
bench.plot()
bench.write_plot('results.pdf')

Configuration

Benchmark(
    repeat=7,                    # Number of measurement repetitions
    min_duration_of_repeat=0.2,  # Minimum duration per repeat (seconds)
    time_units='ns',             # Time units: 'ns', 'us', 'ms', 's'
)
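
min_duration_of_repeat drives the auto-scaled loop counts seen in the quick example's output: the loop count is, in effect, the smallest number from a 1-2-5 scaling sequence whose total runtime reaches the minimum duration. A back-of-the-envelope check against the sample timings (the 1-2-5 sequence is an assumption, modeled on timeit.Timer.autorange):

```python
def loops_for(per_call_s, min_duration=0.2):
    """Smallest count from the 1, 2, 5, 10, 20, 50, ... sequence whose
    total runtime (count * per_call_s) reaches min_duration seconds."""
    scale = 1
    while True:
        for factor in (1, 2, 5):
            count = scale * factor
            if count * per_call_s >= min_duration:
                return count
        scale *= 10

# Median per-call times from the quick example, in seconds.
print(loops_for(0.579e-6))  # sum, n=100  -> 500000 loops (matches the output)
print(loops_for(5.369e-6))  # sum, n=1000 -> 50000 loops (matches the output)
```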

License

MIT

Download files

Download the file for your platform.

Source Distribution

zerobench-0.4.tar.gz (213.5 kB)


Built Distribution

zerobench-0.4-py3-none-any.whl (11.9 kB)


File details

Details for the file zerobench-0.4.tar.gz.

File metadata

  • Download URL: zerobench-0.4.tar.gz
  • Upload date:
  • Size: 213.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for zerobench-0.4.tar.gz:

  • SHA256: c4660b554f1e238ba6b539ce46c8a4c9dfe2ef61bf40855cc820bf5bfe8b7f21
  • MD5: dad2c375ee00400ef50a85474179fcd4
  • BLAKE2b-256: 8a1f671a60884386e9571ca9e51e5f25a49b0f1c54e56af5bb48f99d8095af23


Provenance

The following attestation bundles were made for zerobench-0.4.tar.gz:

Publisher: release.yml on pchanial/zerobench

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file zerobench-0.4-py3-none-any.whl.

File metadata

  • Download URL: zerobench-0.4-py3-none-any.whl
  • Upload date:
  • Size: 11.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for zerobench-0.4-py3-none-any.whl:

  • SHA256: 167a1b98078e77bcbf3bf0f6329226753a3e10af879d9b27bbb6bed088092196
  • MD5: 2d0bff2dbab1d5ee078c6f1274c59b46
  • BLAKE2b-256: f1794e0d7b0e3c7e1c412f81b9f549b7a83c1984204327f2ed99705abb952cd7


Provenance

The following attestation bundles were made for zerobench-0.4-py3-none-any.whl:

Publisher: release.yml on pchanial/zerobench

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
