
PyBencher


PyBencher is a simple, decorator-based benchmarking suite for Python. It provides detailed timing statistics (average, median, standard deviation) and supports per-test configuration overrides.

Installation

pip install pybencher

Basic Usage

from pybencher import Suite

suite = Suite()

# Quick registration
@suite.bench()
def my_function():
    return sum(range(10000))

# Custom configuration using 'bench_' prefix
@suite.bench(bench_name="Fast Math", bench_max_itr=5000)
def fast():
    return 1 + 1

# Positional and keyword arguments are passed directly
@suite.bench(10, 20, bench_name="Add")
def add(a, b):
    return a + b

# Manual registration (equivalent to @suite.bench)
def manual_func(n):
    return sum(range(n))

suite.add(manual_func, 1000, bench_name="Manual Register")

# Run and print results
results = suite.run()
results.print()

Configuration Overrides

Any setting in the Suite can be overridden for a specific benchmark by prefixing it with bench_. This ensures that benchmark configuration does not interfere with your function's own parameters (e.g., using timeout=0.5 as a function argument while setting bench_timeout=5.0 for the suite).

Override                Type       Description
bench_name              str        Display name in reports
bench_timeout           float      Per-test time limit in seconds
bench_max_itr           int        Maximum execution count
bench_min_itr           int        Minimum execution count
bench_cut               float      Fraction of outliers to trim (0.0 to 0.5)
bench_disable_stdout    bool       Mute print() output inside the target
bench_verbose           bool       Include extra stats in results.print()
bench_before            callable   Local setup hook
bench_after             callable   Local teardown hook
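The prefix-splitting idea can be sketched in plain Python. This is an illustration of the mechanism the `bench_` convention implies, not PyBencher's actual implementation:

```python
def split_bench_kwargs(kwargs):
    """Separate 'bench_'-prefixed configuration from the target's own kwargs."""
    config, func_kwargs = {}, {}
    for key, value in kwargs.items():
        if key.startswith("bench_"):
            config[key[len("bench_"):]] = value  # strip the prefix
        else:
            func_kwargs[key] = value
    return config, func_kwargs

config, func_kwargs = split_bench_kwargs(
    {"timeout": 0.5, "bench_timeout": 5.0, "bench_name": "Sleepy"}
)
print(config)       # {'timeout': 5.0, 'name': 'Sleepy'}
print(func_kwargs)  # {'timeout': 0.5}
```

Because the two namespaces never collide, a function parameter named `timeout` and the benchmark's own `bench_timeout` can coexist in one decorator call.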

Reference

Suite

  • timeout (float): Default time limit (10s).
  • max_itr (int): Default max runs (1000).
  • min_itr (int): Default min runs (3).
  • cut (float): Default outlier threshold (0.05).
  • warmup_itr (int): Number of unmeasured runs to perform before benchmarking (0).
  • validate_responses (bool): Enable cross-test output consistency checks (False).
  • validate_limit (int): Max number of iterations to store for full sequence validation (10,000).
  • disable_stdout (bool): Global stdout suppressor.
  • verbose (bool): Global verbosity flag.
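The `cut` setting trims a fraction of outlier samples before statistics are computed. A minimal stdlib sketch, assuming symmetric trimming from both ends of the sorted timings (PyBencher's exact trimming strategy may differ):

```python
import statistics

def trimmed_stats(samples, cut=0.05):
    """Drop the fastest/slowest `cut` fraction from each end, then summarize.
    Symmetric trimming is an assumption for illustration."""
    ordered = sorted(samples)
    k = int(len(ordered) * cut)
    counted = ordered[k:len(ordered) - k] if k else ordered
    return {
        "avg": statistics.mean(counted),
        "median": statistics.median(counted),
        "std": statistics.stdev(counted) if len(counted) > 1 else 0.0,
        "counted_iterations": len(counted),
    }

stats = trimmed_stats([0.099, 0.100, 0.100, 0.101, 0.250], cut=0.2)
print(stats["counted_iterations"])  # 3 of the 5 runs survive the trim
```

This is why `iterations` and `counted_iterations` can differ in the results: the trimmed runs still executed but do not contribute to the reported statistics.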

BenchmarkResults

  • print(verbose=None): Print results to console.
  • to_json(indent=4): Export results to JSON.
  • to_list(): Export results to list of dicts.
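Export methods like these typically reduce to the standard dataclass/JSON pattern. A sketch of the behavior `to_list()` and `to_json(indent=4)` imply, using a hypothetical stand-in class rather than PyBencher's real `BenchmarkResult`:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Result:  # illustrative stand-in, not PyBencher's BenchmarkResult
    name: str
    avg: float
    iterations: int

results = [Result("Add", 1.2e-7, 1000)]
as_list = [asdict(r) for r in results]   # to_list() analogue: list of dicts
as_json = json.dumps(as_list, indent=4)  # to_json(indent=4) analogue
print(as_json)
```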

BenchmarkResult

Dataclass containing:

  • name: Target name or custom override.
  • avg, std, median, minimum, maximum: Timing stats in seconds.
  • itr_ps: Iterations per second.
  • iterations: Total runs.
  • counted_iterations: Runs used for stats after trimming outliers.
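The headline figures relate by simple arithmetic: iterations per second is the reciprocal of the average seconds per iteration. A quick illustration (not PyBencher's source):

```python
timings = [0.099, 0.100, 0.101]  # measured seconds per run
avg = sum(timings) / len(timings)
itr_ps = 1.0 / avg  # itr/s is the reciprocal of the average time per run
print(f"{avg * 1000:.0f}ms/itr | {itr_ps:.1f} itr/s")  # 100ms/itr | 10.0 itr/s
```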

Example

from pybencher import Suite

suite = Suite()

# 'timeout' here is a function param, not the benchmark limit
@suite.bench(timeout=0.1, bench_name="Sleepy", bench_verbose=True)
def test_args(timeout):
    import time
    time.sleep(timeout)

results = suite.run()
results.print()

Output:

Sleepy: 100ms/itr | 10.0 itr/s
  std:     120us
  median:  100ms
  min/max: 99ms / 101ms
  runs:    10 (10 counted)
  total:   1.01s

CI/CD Pipeline

PyBencher uses an automated GitHub Actions pipeline:

  • Testing: Every push to any branch triggers a full test suite across Linux, Windows, and macOS for Python 3.10–3.14.
  • Publishing: Pushes to main automatically publish to PyPI if and only if all tests pass.
