PyForge Benchmark
A lightweight, zero-dependency Python benchmarking framework for personal projects and learning. Zero configuration, decorator-based benchmarks, automatic performance comparison, and built-in complexity analysis.
Note: This is a personal/educational project. It is not intended to compete with established benchmarking tools such as pytest-benchmark, asv, or pyperf.
Features
- Decorator-based — add @benchmark or @complexity_analysis and you're done
- Subprocess isolation — each measurement runs in a clean, isolated process
- Automatic Big-O detection — determines the complexity class via log-log regression
- Zero dependencies — uses only the Python standard library
- Colored terminal output — aligned, grouped results with Unicode formatting
- Fully typed — PEP 561 compliant, with a py.typed marker
Installation
Available on PyPI: https://pypi.org/project/pyforge-benchmark/
pip install pyforge-benchmark
Or install from source:
git clone https://github.com/ertanturk/pyforge-benchmark.git
cd pyforge-benchmark
pip install -e .
Requires Python 3.12+
Quick Start
1. Create a benchmark file
mkdir benchmarks
Create benchmarks/my_benchmarks.py:
from pyforge_benchmark import benchmark, complexity_analysis

@benchmark
def my_function():
    return sum(range(1000))

def generate_data(n: int) -> list[int]:
    return list(range(n))

@complexity_analysis(generator=generate_data)
def linear_search(data: list[int]) -> None:
    for item in data:
        _ = item
2. Run
pyforge-benchmark run
3. See results
════════════════════════════════════════════════════════════════════════
BENCHMARK RESULTS
════════════════════════════════════════════════════════════════════════
● my_benchmarks.py
──────────────────────────────────────────────────────────────────────
12.34 μs my_function (Line 4)
iterations: 100
════════════════════════════════════════════════════════════════════════
COMPLEXITY ANALYSIS
════════════════════════════════════════════════════════════════════════
● my_benchmarks.py
──────────────────────────────────────────────────────────────────────
O(n) linear_search (Line 13)
R² = 0.984
Usage
Benchmark Decorator
# Bare decorator
@benchmark
def fast_function():
    return 1 + 2

# With arguments
@benchmark(args=(10000,))
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return b
Complexity Analysis Decorator
def make_data(n: int) -> list[int]:
    return list(range(n))

@complexity_analysis(generator=make_data)
def bubble_sort(data: list[int]) -> list[int]:
    arr = data.copy()
    for i in range(len(arr)):
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr
Generator requirements:
- Must accept exactly one argument (input size N)
- Must be a named function (not a lambda)
- Must be defined at module level
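Under these rules, a compliant generator looks like the following. The validation checks shown are illustrative sketches of the documented requirements, not the library's actual code:

```python
import inspect

# Module-level, named, single-argument: satisfies all three requirements.
def make_data(n: int) -> list[int]:
    return list(range(n))

# Illustrative checks mirroring the documented rules:
sig = inspect.signature(make_data)
assert len(sig.parameters) == 1          # exactly one argument (input size N)
assert make_data.__name__ != "<lambda>"  # a named function, not a lambda
```

A lambda such as `lambda n: list(range(n))` would not qualify: its `__name__` is `"<lambda>"` and it cannot be pickled for use in a subprocess.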
Detected Complexity Classes
| Class | Example |
|---|---|
| O(1) | Hash table lookup |
| O(log n) | Binary search |
| O(√n) | Trial division |
| O(n) | Linear scan |
| O(n log n) | Merge sort |
| O(n²) | Bubble sort |
| O(n³) | Matrix multiplication |
| O(2ⁿ) | Subset enumeration |
CLI
# Run all benchmarks and complexity analysis
pyforge-benchmark run
# Run only benchmarks
pyforge-benchmark run -b
# Run only complexity analysis
pyforge-benchmark run -c
# Custom benchmarks directory
pyforge-benchmark run -d ./perf_tests
# Verbose output
pyforge-benchmark run -v
# List registered functions
pyforge-benchmark list
pyforge-benchmark list -t complexity
# Show version and system info
pyforge-benchmark info --detailed
Programmatic API
from pyforge_benchmark import main, run_cycle, print_report

# Run and get raw results
results = main(show_results=False)

# Access data
for entry in results["benchmarks"]:
    print(f"{entry['key']}: {entry['avg_time']:.6f}s")

for entry in results["complexity"]:
    print(f"{entry['key']}: {entry['big_o']['complexity']}")
How It Works
Benchmarking
- Functions decorated with @benchmark are registered in a singleton registry
- Each function runs in an isolated multiprocessing.Process
- The iteration count adapts automatically (100 for fast functions, 5 for slow ones)
- GC is disabled during measurement for accuracy
- Results are communicated back via a multiprocessing.Queue
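The core measurement loop can be sketched roughly as follows. This is an illustrative approximation, not the library's actual code; in particular, the `measure` and `adaptive_iterations` names and the 10 ms "slow" cutoff are assumptions for the sketch (the subprocess isolation step is omitted here):

```python
import gc
import time

def measure(func, iterations: int) -> float:
    """Time `func` with GC disabled, returning average seconds per call."""
    gc.disable()
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            func()
        return (time.perf_counter() - start) / iterations
    finally:
        gc.enable()

def adaptive_iterations(func, slow_threshold: float = 0.01) -> int:
    """Pick 100 iterations for fast functions, 5 for slow ones."""
    single = measure(func, 1)  # one probe run to classify the function
    return 5 if single > slow_threshold else 100

target = lambda: sum(range(1000))
avg = measure(target, adaptive_iterations(target))
```

Disabling the garbage collector during the timed loop removes one common source of run-to-run jitter; running each target in its own process (as the library does) additionally shields measurements from state accumulated by earlier benchmarks.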
Complexity Analysis
- Functions decorated with @complexity_analysis are tested at multiple input sizes
- The generator creates test data for each value of N
- Execution time is measured at N = 1000, 2500, 5000, 10000, 25000, 50000
- Log-log regression (log(t) = k × log(n) + c) determines the growth exponent
- The exponent maps directly to a Big-O complexity class
- R² indicates model fit quality (1.0 = perfect)
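The log-log fit can be reproduced with a few lines of standard-library code. This is a sketch of the general technique (ordinary least squares on log-transformed data), not the library's actual implementation:

```python
import math

def fit_exponent(sizes: list[int], times: list[float]) -> tuple[float, float]:
    """Least-squares fit of log(t) = k*log(n) + c; returns (k, R^2)."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    c = my - k * mx
    ss_res = sum((y - (k * x + c)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return k, 1 - ss_res / ss_tot

# Perfectly quadratic timings should give k ≈ 2 and R² ≈ 1
sizes = [1000, 2500, 5000, 10000, 25000, 50000]
times = [1e-9 * n * n for n in sizes]
k, r2 = fit_exponent(sizes, times)
```

An exponent near 1 maps to O(n), near 2 to O(n²), and so on; how the library rounds borderline exponents (and how it handles logarithmic classes, which are not pure power laws) is not shown here.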
Project Structure
src/pyforge_benchmark/
├── __init__.py # Public API exports
├── __main__.py # python -m support
├── benchmark.py # @benchmark decorator
├── benchmark_runner.py # Subprocess benchmark execution
├── cli.py # Command-line interface
├── complexity.py # @complexity_analysis decorator
├── complexity_runner.py # Subprocess complexity measurement
├── main.py # Orchestration
├── py.typed # PEP 561 type stub marker
├── registry.py # Singleton function registry
└── reporter.py # Colored terminal output
Development
# Install in editable mode
pip install -e .
# Lint
ruff check src/
pylint src/pyforge_benchmark/
# Format
ruff format src/
Documentation
Full documentation is available in docs/Documentation.md.
Limitations
- Python 3.12+ only
- No async function support
- All targets and generators must be picklable
- Single-machine only (no distributed benchmarking)
- No statistical confidence intervals
- Designed for personal use, not production CI/CD
License
Author
Ertan Tunç Türk — ertantuncturk61@gmail.com
Download files
File details
Details for the file pyforge_benchmark-0.2.0.tar.gz.
File metadata
- Download URL: pyforge_benchmark-0.2.0.tar.gz
- Upload date:
- Size: 27.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 7f6be34daf5f472956b8bd0d985c41802c4bc444738546287cbd16fa5e5fb132 |
| MD5 | f58b9068a7def79ed0cddb03aaa57527 |
| BLAKE2b-256 | 363ff3cf223c8486a681866a122b701cefb1c99030eabae529970b8ba48f2f52 |
Provenance
The following attestation bundles were made for pyforge_benchmark-0.2.0.tar.gz:

Publisher: publish.yml on ertanturk/pyforge-benchmark

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: pyforge_benchmark-0.2.0.tar.gz
- Subject digest: 7f6be34daf5f472956b8bd0d985c41802c4bc444738546287cbd16fa5e5fb132
- Sigstore transparency entry: 1042420803
- Permalink: ertanturk/pyforge-benchmark@af3a186b9c7d3401f09143d6c2d874ac5f5c954b
- Branch / Tag: refs/heads/main
- Owner: https://github.com/ertanturk
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@af3a186b9c7d3401f09143d6c2d874ac5f5c954b
- Trigger Event: push
File details
Details for the file pyforge_benchmark-0.2.0-py3-none-any.whl.
File metadata
- Download URL: pyforge_benchmark-0.2.0-py3-none-any.whl
- Upload date:
- Size: 25.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a0864ec72f068f85eeb342288cbcf79d28616923aca6d8934f457c37fe2c16f7 |
| MD5 | a23c76aa08ba4b2444e24f28c13f91fb |
| BLAKE2b-256 | 1106689f65cd5b81bcbbfd3fcb9a28c1f1f62807bdafabedc13b79fb01480843 |
Provenance
The following attestation bundles were made for pyforge_benchmark-0.2.0-py3-none-any.whl:

Publisher: publish.yml on ertanturk/pyforge-benchmark

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: pyforge_benchmark-0.2.0-py3-none-any.whl
- Subject digest: a0864ec72f068f85eeb342288cbcf79d28616923aca6d8934f457c37fe2c16f7
- Sigstore transparency entry: 1042420882
- Permalink: ertanturk/pyforge-benchmark@af3a186b9c7d3401f09143d6c2d874ac5f5c954b
- Branch / Tag: refs/heads/main
- Owner: https://github.com/ertanturk
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@af3a186b9c7d3401f09143d6c2d874ac5f5c954b
- Trigger Event: push