PyBencher


PyBencher is a simple, decorator-based benchmarking suite for timing several Python functions at once. It provides detailed timing statistics (average, median, standard deviation), forwards positional and keyword arguments to the benchmarked functions, and supports per-test configuration overrides.

Installation

pip install pybencher

Basic Usage

from pybencher import Suite

suite = Suite()

# Quick registration
@suite.bench()
def my_function():
    return sum(range(10000))

# Custom configuration using 'bench_' prefix
@suite.bench(bench_name="Fast Math", bench_max_itr=5000)
def fast():
    return 1 + 1

# Positional and keyword arguments are passed directly
@suite.bench(10, 20, bench_name="Add")
def add(a, b):
    return a + b

# Manual registration (equivalent to @suite.bench)
def manual_func(n):
    return sum(range(n))

suite.add(manual_func, 1000, bench_name="Manual Register")

# Run and print results
results = suite.run()
results.print()

Configuration Overrides

Any setting in the Suite can be overridden for a specific benchmark by prefixing it with bench_. This ensures that benchmark configuration does not interfere with your function's own parameters (e.g., using timeout=0.5 as a function argument while setting bench_timeout=5.0 for the suite).

Override              Type      Description
bench_name            str       Display name in reports
bench_timeout         float     Per-test time limit in seconds
bench_max_itr         int       Maximum execution count
bench_min_itr         int       Minimum execution count
bench_cut             float     Fraction of outliers to trim (0.0 to 0.5)
bench_disable_stdout  bool      Mute print() output inside the target
bench_verbose         bool      Include extra stats in results.print()
bench_before          callable  Local setup hook
bench_after           callable  Local teardown hook
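To picture how the prefix keeps benchmark settings and function arguments apart, here is a minimal sketch of the splitting logic (a hypothetical helper for illustration, not PyBencher's actual internals):

```python
def split_kwargs(kwargs):
    """Split decorator kwargs into benchmark config and function kwargs.

    Keys starting with 'bench_' configure the benchmark itself; everything
    else is forwarded to the target function unchanged.
    """
    config = {}
    func_kwargs = {}
    for key, value in kwargs.items():
        if key.startswith("bench_"):
            config[key[len("bench_"):]] = value  # strip the prefix
        else:
            func_kwargs[key] = value
    return config, func_kwargs

config, func_kwargs = split_kwargs(
    {"timeout": 0.1, "bench_name": "Sleepy", "bench_verbose": True}
)
print(config)       # {'name': 'Sleepy', 'verbose': True}
print(func_kwargs)  # {'timeout': 0.1}
```

This is why a function parameter named timeout never collides with the suite's timeout setting: only the bench_-prefixed key is consumed as configuration.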

Reference

Suite

  • timeout (float): Default time limit (10s).
  • max_itr (int): Default max runs (1000).
  • min_itr (int): Default min runs (3).
  • cut (float): Default outlier threshold (0.05).
  • warmup_itr (int): Number of unmeasured runs to perform before benchmarking (0).
  • validate_responses (bool): Enable cross-test output consistency checks (False).
  • validate_limit (int): Max number of iterations to store for full sequence validation (10,000).
  • disable_stdout (bool): Global stdout suppressor.
  • verbose (bool): Global verbosity flag.
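What the cut setting does can be pictured with a small pure-Python sketch (an illustration of symmetric trimming, assuming cut drops that fraction of runs from each end of the sorted timings; the exact trimming rule is internal to PyBencher):

```python
import statistics

def trimmed_stats(timings, cut=0.05):
    """Drop the fastest and slowest `cut` fraction of runs, then average.

    Mirrors the idea behind Suite.cut: outliers (GC pauses, cache warmup)
    are excluded from the reported statistics.
    """
    n_trim = int(len(timings) * cut)
    kept = sorted(timings)[n_trim:len(timings) - n_trim]
    return statistics.mean(kept), len(kept)

timings = [0.0101, 0.0099, 0.0100, 0.0102, 0.0098, 0.0500]  # one slow outlier
avg, counted = trimmed_stats(timings, cut=0.2)  # trim 20% from each end
print(avg, counted)  # 0.01005 4
```

The count of surviving runs corresponds to the counted_iterations field described below.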

BenchmarkResults

  • print(verbose=None): Print results to console.
  • to_json(indent=4): Export results to JSON.
  • to_list(): Export results to list of dicts.

BenchmarkResult

Dataclass containing:

  • name: Target name or custom override.
  • avg, std, median, minimum, maximum: Timing stats in seconds.
  • itr_ps: Iterations per second.
  • iterations: Total runs.
  • counted_iterations: Runs used for stats after trimming outliers.
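How these fields relate can be reproduced with the standard statistics module (a sketch that assumes itr_ps is simply 1 / avg, which is consistent with the sample output further below):

```python
import statistics

timings = [0.099, 0.100, 0.101, 0.100, 0.100]  # seconds per counted run

avg = statistics.mean(timings)
median = statistics.median(timings)
std = statistics.stdev(timings)
minimum, maximum = min(timings), max(timings)
itr_ps = 1 / avg  # iterations per second, assuming itr_ps = 1 / avg

print(f"{avg * 1000:.0f}ms/itr | {itr_ps:.1f} itr/s")  # 100ms/itr | 10.0 itr/s
```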

Example

from pybencher import Suite

suite = Suite()

# 'timeout' here is a function param, not the benchmark limit
@suite.bench(timeout=0.1, bench_name="Sleepy", bench_verbose=True)
def test_args(timeout):
    import time
    time.sleep(timeout)

results = suite.run()
results.print()

Output:

Sleepy: 100ms/itr | 10.0 itr/s
  std:     120us
  median:  100ms
  min/max: 99ms / 101ms
  runs:    10 (10 counted)
  total:   1.01s

CI/CD Pipeline

PyBencher uses an automated GitHub Actions pipeline:

  • Testing: Every push to any branch triggers a full test suite across Linux, Windows, and macOS for Python 3.10–3.14.
  • Publishing: Pushes to main are published to PyPI automatically, but only if all tests pass.

