PyBencher is a Python module for benchmarking several functions at once.
# PyBencher
PyBencher is a simple, decorator-based benchmarking suite for Python. It provides detailed timing statistics (average, median, standard deviation) and supports per-test configuration overrides.
## Installation

```shell
pip install pybencher
```
## Basic Usage

```python
from pybencher import Suite

suite = Suite()

# Quick registration
@suite.bench()
def my_function():
    return sum(range(10000))

# Per-benchmark configuration
@suite.bench(name="Fast Math", max_itr=5000)
def fast():
    return 1 + 1

# Function arguments via args / kwargs
@suite.bench(args=(10, 20), name="Add")
def add(a, b):
    return a + b

# Manual registration (equivalent to @suite.bench)
def manual_func(n):
    return sum(range(n))

suite.add(manual_func, args=(1000,), name="Manual Register")

# Run and print results
results = suite.run()
results.print()
```
## Configuration Overrides
Any setting in the Suite can be overridden per-benchmark via keyword arguments to bench() or add(). Function inputs are separated cleanly into the args and kwargs parameters.
| Override | Type | Description |
|---|---|---|
| `name` | `str` | Display name in reports |
| `timeout` | `float` | Per-test time limit in seconds |
| `max_itr` | `int` | Maximum execution count |
| `min_itr` | `int` | Minimum execution count |
| `cut` | `float` | Fraction of outliers to trim (0.0 to 0.5) |
| `disable_stdout` | `bool` | Mute `print()` output inside the target |
| `verbose` | `bool` | Include extra stats in `results.print()` |
| `before` | `callable` | Local setup hook |
| `after` | `callable` | Local teardown hook |
| `args` | `tuple` | Positional arguments for the target function |
| `kwargs` | `dict` | Keyword arguments for the target function |
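To make the `cut` setting concrete, here is a sketch of trimmed-mean semantics in plain Python. This is an assumption about what outlier trimming means here, not PyBencher's actual implementation; the `trimmed_mean` helper is hypothetical.

```python
import statistics

def trimmed_mean(timings, cut=0.05):
    """Drop the fastest and slowest `cut` fraction of runs, then average.

    A sketch of the assumed semantics of PyBencher's `cut` setting,
    not its actual code.
    """
    ordered = sorted(timings)
    n_trim = int(len(ordered) * cut)
    counted = ordered[n_trim:len(ordered) - n_trim] if n_trim else ordered
    return statistics.mean(counted)

# 20 runs with one fast and one slow outlier; cut=0.1 drops the
# top and bottom two runs, leaving sixteen 10 ms timings
timings = [0.010] * 18 + [0.001, 0.500]
print(round(trimmed_mean(timings, cut=0.1), 3))  # 0.01
```

With `cut=0.0` the mean is dominated by the 500 ms outlier; trimming recovers the typical per-run cost, which is why a small default like 0.05 is a sensible choice for noisy machines.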
## Reference

### Suite

- `timeout` (`float`): Default time limit (10 s).
- `max_itr` (`int`): Default max runs (1000).
- `min_itr` (`int`): Default min runs (3).
- `cut` (`float`): Default outlier threshold (0.05).
- `warmup_itr` (`int`): Number of unmeasured runs performed before benchmarking (0).
- `validate_responses` (`bool`): Enable cross-test output consistency checks (False).
- `validate_limit` (`int`): Max number of iterations stored for full sequence validation (10,000).
- `disable_stdout` (`bool`): Global stdout suppressor.
- `verbose` (`bool`): Global verbosity flag.
### BenchmarkResults

- `print(verbose=None)`: Print results to the console.
- `to_json(indent=4)`: Export results as JSON.
- `to_list()`: Export results as a list of dicts.
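A plausible reading (assumed, not verified against the source) is that `to_json` serializes the same records `to_list` returns. With the standard library, that round trip looks like:

```python
import json

# A record shaped like the documented BenchmarkResult fields
# (the values are illustrative, not real measurements)
records = [{
    "name": "Add",
    "avg": 1.2e-7,
    "median": 1.1e-7,
    "itr_ps": 8.3e6,
    "iterations": 1000,
    "counted_iterations": 900,
}]

as_json = json.dumps(records, indent=4)  # analogous to to_json(indent=4)
round_trip = json.loads(as_json)
print(round_trip[0]["name"])  # Add
```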
### BenchmarkResult

Dataclass containing:

- `name`: Target name or custom override.
- `avg`, `std`, `median`, `minimum`, `maximum`: Timing stats in seconds.
- `itr_ps`: Iterations per second.
- `iterations`: Total runs.
- `counted_iterations`: Runs used for stats after trimming outliers.
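The timing fields map naturally onto the standard library's `statistics` functions. Here is a sketch (assumed field semantics, not PyBencher's internals) of how such a record could be derived from raw per-run timings:

```python
import statistics

timings = [0.010, 0.011, 0.009, 0.010, 0.012]  # seconds per run

result = {
    "avg": statistics.mean(timings),
    "std": statistics.stdev(timings),
    "median": statistics.median(timings),
    "minimum": min(timings),
    "maximum": max(timings),
    "itr_ps": 1 / statistics.mean(timings),  # iterations per second
    "iterations": len(timings),
}
print(f"{result['itr_ps']:.0f} itr/s")
```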
## Example

```python
from pybencher import Suite

suite = Suite()

# args supplies positional inputs to the target function
@suite.bench(args=(0.1,), name="Sleepy", verbose=True)
def test_sleep(duration):
    import time
    time.sleep(duration)

# Function kwargs stay separate from benchmark config:
# "timeout" here is a parameter of the target, not the suite's timeout setting
@suite.bench(kwargs={"timeout": 0.1}, name="Sleepy Alt", verbose=True)
def test_args(timeout):
    import time
    time.sleep(timeout)

results = suite.run()
results.print()
```
Output:

```
Sleepy: 100ms/itr | 10.0 itr/s
std: 120us
median: 100ms
min/max: 99ms / 101ms
runs: 10 (10 counted)
total: 1.01s
```
## CI/CD Pipeline

PyBencher uses an automated GitHub Actions pipeline:

- Testing: Every push to any branch triggers the full test suite across Linux, Windows, and macOS for Python 3.10–3.14.
- Publishing: Pushes to `main` automatically publish to PyPI if and only if all tests pass.