
A multidimensional benchmarking library with minimal overhead


zerobench

Zero-overhead Python Benchmarking


zerobench is a zero-overhead Python benchmarking library designed for multidimensional performance analysis.

Features

  • Context manager API: Benchmark any code block with with bench(...): ...
  • Multidimensional: Tag benchmarks with arbitrary keyword arguments
  • Zero overhead: Code is passed directly to timeit.Timer, no wrapper function
  • Auto-scaling: Automatically determines the number of iterations for reliable measurements
  • Multiple exports: CSV, Parquet, Markdown
  • Plotting: Built-in visualization with matplotlib
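The zero-overhead and auto-scaling points map naturally onto the standard library: timeit.Timer compiles the statement into a tight loop with no per-call wrapper, and Timer.autorange grows the loop count until a run lasts long enough to measure reliably. A minimal stdlib sketch of that idea (not zerobench's actual internals):

```python
import timeit

# The statement is passed as source, so the timed loop contains no
# wrapper-function call overhead -- only the statement itself.
timer = timeit.Timer('sum(data)', setup='data = list(range(1000))')

# autorange scales the loop count upward until one run takes >= 0.2 s,
# which is what makes short statements measurable at all.
loops, total = timer.autorange()
per_call_ns = total / loops * 1e9
print(f'{loops} loops, {per_call_ns:.1f} ns per call')
```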

Quick Example

from zerobench import Benchmark

bench = Benchmark()

for n in [100, 1000, 10000]:
    data = list(range(n))
    with bench(method='sum', n=n):
        sum(data)
    with bench(method='len', n=n):
        len(data)

Output:

method=sum, n=100: 575.124 ns ± 3.35% (median of 7 runs, 500000 loops each)
method=len, n=100: 19.037 ns ± 0.85% (median of 7 runs, 20000000 loops each)
method=sum, n=1000: 2.961 µs ± 36.70% (median of 7 runs, 50000 loops each)
method=len, n=1000: 19.844 ns ± 38.63% (median of 7 runs, 10000000 loops each)
method=sum, n=10000: 50.208 µs ± 9.89% (median of 7 runs, 5000 loops each)
method=len, n=10000: 28.686 ns ± 1.22% (median of 7 runs, 20000000 loops each)
Printing the Benchmark instance renders a summary table:

print(bench)

┌────────┬────────┬────────────────────────────┬───────────┐
│ method ┆ n      ┆ median_execution_time (ns) ┆ ± (%)     │
╞════════╪════════╪════════════════════════════╪═══════════╡
│ sum    ┆ 100    ┆ 575.124442                 ┆ 3.353129  │
│ len    ┆ 100    ┆ 19.036998                  ┆ 0.854601  │
│ sum    ┆ 1_000  ┆ 2_961.25732                ┆ 36.698258 │
│ len    ┆ 1_000  ┆ 19.844193                  ┆ 38.63371  │
│ sum    ┆ 10_000 ┆ 50_207.584997              ┆ 9.894165  │
│ len    ┆ 10_000 ┆ 28.686439                  ┆ 1.22376   │
└────────┴────────┴────────────────────────────┴───────────┘

JAX Support

zerobench automatically detects JAX arrays and adapts its benchmarking accordingly:

import jax.numpy as jnp
from zerobench import Benchmark

bench = Benchmark()
x = jnp.ones(1000)
y = jnp.ones(1000)

with bench(method='add'):
    x + y

When JAX code is detected, zerobench:

  1. Wraps the code in a JIT-compiled function to measure optimized execution
  2. Separates compilation from execution by reporting compilation_time separately
  3. Captures the StableHLO representation of the compiled function in the hlo field
  4. Uses block_until_ready to ensure accurate timing of asynchronous operations
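Step 4 matters because accelerator calls typically return before the computation has finished. The pitfall can be illustrated with the standard library alone, using a thread pool as a stand-in for an asynchronous runtime (a sketch, not zerobench or JAX code):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def work():
    time.sleep(0.05)  # stand-in for a computation running on a device

with ThreadPoolExecutor() as pool:
    t0 = time.perf_counter()
    future = pool.submit(work)          # dispatch only, returns immediately
    dispatch = time.perf_counter() - t0

    t0 = time.perf_counter()
    future.result()                     # like block_until_ready: wait for completion
    complete = time.perf_counter() - t0

print(f'dispatch only: {dispatch * 1e3:.2f} ms')  # tiny
print(f'until ready:   {complete * 1e3:.2f} ms')  # dominated by the real work
```

Timing only the dispatch would report a deceptively fast result; waiting for completion measures the actual computation.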

The benchmark report includes additional fields for JAX:

  • first_execution_time: Time of the initial (possibly uncompiled) execution
  • compilation_time: Time to lower and compile the function
  • hlo: The StableHLO text representation of the compiled computation

report = bench.to_dicts()[0]
print(report['compilation_time'])  # e.g., 12345.67 ns
print(report['hlo'][:100])         # HLO module "jit___bench_func" ...

Installation

pip install zerobench

Export and Visualization

# Export results
bench.write_csv('results.csv')
bench.write_parquet('results.parquet')
bench.write_markdown('results.md')

# Plot results
bench.plot()
bench.write_plot('results.pdf')

Configuration

Benchmark(
    repeat=7,                    # Number of measurement repetitions
    min_duration_of_repeat=0.2,  # Minimum duration per repeat (seconds)
)
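One plausible way these two knobs combine, sketched with timeit (an assumption about the mechanics, not zerobench's source; autorange's fixed 0.2 s threshold happens to coincide with the default min_duration_of_repeat):

```python
import statistics
import timeit

timer = timeit.Timer('len(data)', setup='data = list(range(100))')

# Find a loop count so each repeat meets the minimum duration,
# then take repeat=7 independent timings and report the median.
loops, _ = timer.autorange()
timings = timer.repeat(repeat=7, number=loops)
median_ns = statistics.median(t / loops for t in timings) * 1e9
print(f'median of 7 runs: {median_ns:.3f} ns ({loops} loops each)')
```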

License

MIT



Download files


Source Distribution

zerobench-1.0.0b1.tar.gz (220.9 kB view details)

Uploaded Source

Built Distribution


zerobench-1.0.0b1-py3-none-any.whl (15.8 kB view details)

Uploaded Python 3

File details

Details for the file zerobench-1.0.0b1.tar.gz.

File metadata

  • Download URL: zerobench-1.0.0b1.tar.gz
  • Upload date:
  • Size: 220.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for zerobench-1.0.0b1.tar.gz
  • SHA256: 9ca4ed6fea6e9e256948d27486aaabdfa6211e80494e4ebe72344a8f65f68235
  • MD5: 95bf008c20219a238ca35c3bb843b4db
  • BLAKE2b-256: a9e7ec61aae303bcdd9f80a73077ea9c8b62b38907e24b4f1e9c303d8b153d6c


Provenance

The following attestation bundles were made for zerobench-1.0.0b1.tar.gz:

Publisher: release.yml on pchanial/zerobench

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file zerobench-1.0.0b1-py3-none-any.whl.

File metadata

  • Download URL: zerobench-1.0.0b1-py3-none-any.whl
  • Upload date:
  • Size: 15.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for zerobench-1.0.0b1-py3-none-any.whl
  • SHA256: 91f3913848677b3a56457353f07cc82d9174ac23a9d06bf4788d70e96baa73b6
  • MD5: c12488f8ef3934d1f7528fbcf025d473
  • BLAKE2b-256: 2a19e0b78d90936408cb3dc5ea62ed73629b9667945f19cec7719a1519ad18ce


Provenance

The following attestation bundles were made for zerobench-1.0.0b1-py3-none-any.whl:

Publisher: release.yml on pchanial/zerobench

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
