
PyForge Benchmark

A lightweight, zero-dependency Python benchmarking framework for personal projects and learning.

Note: This is a personal/educational project. It is not intended to compete with established benchmarking tools like pytest-benchmark, asv, or pyperf.



Features

  • Decorator-based — add @benchmark or @complexity_analysis and you're done
  • Subprocess isolation — each measurement runs in a clean, isolated process
  • Automatic Big-O detection — determines complexity class via log-log regression
  • Zero dependencies — uses only the Python standard library
  • Colored terminal output — aligned, grouped results with Unicode formatting
  • Fully typed — PEP 561 compliant with py.typed marker

Installation

Available on PyPI: https://pypi.org/project/pyforge-benchmark/

pip install pyforge-benchmark

Or install from source:

git clone https://github.com/ertanturk/pyforge-benchmark.git
cd pyforge-benchmark
pip install -e .

Requires Python 3.12+

Quick Start

1. Create a benchmark file

mkdir benchmarks

Create benchmarks/my_benchmarks.py:

from pyforge_benchmark import benchmark, complexity_analysis


@benchmark
def my_function():
    return sum(range(1000))


def generate_data(n: int) -> list[int]:
    return list(range(n))


@complexity_analysis(generator=generate_data)
def linear_search(data: list[int]) -> None:
    for item in data:
        _ = item

2. Run

pyforge-benchmark run

3. See results

════════════════════════════════════════════════════════════════════════
  BENCHMARK RESULTS
════════════════════════════════════════════════════════════════════════

  ● my_benchmarks.py
  ──────────────────────────────────────────────────────────────────────
    12.34 μs           my_function (Line 4)
                       iterations: 100

════════════════════════════════════════════════════════════════════════
  COMPLEXITY ANALYSIS
════════════════════════════════════════════════════════════════════════

  ● my_benchmarks.py
  ──────────────────────────────────────────────────────────────────────
    O(n)               linear_search (Line 13)
                       R² = 0.984

Usage

Benchmark Decorator

# Bare decorator
@benchmark
def fast_function():
    return 1 + 2

# With arguments
@benchmark(args=(10000,))
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return b

Complexity Analysis Decorator

def make_data(n: int) -> list[int]:
    return list(range(n))

@complexity_analysis(generator=make_data)
def bubble_sort(data: list[int]) -> list[int]:
    arr = data.copy()
    for i in range(len(arr)):
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

Generator requirements:

  • Must accept exactly one argument (input size N)
  • Must be a named function (not a lambda)
  • Must be defined at module level
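These rules follow from the picklability constraint: the generator must cross a process boundary, so it has to survive a pickle round-trip. A module-level named function does; a lambda does not. A small standalone illustration (not part of the library):

```python
import pickle


def make_data(n: int) -> list[int]:  # module-level, named: picklable
    return list(range(n))


# A named module-level function round-trips through pickle, so it can be
# shipped to the measurement subprocess.
restored = pickle.loads(pickle.dumps(make_data))
print(restored(3))  # [0, 1, 2]

# A lambda cannot be pickled, so it can never cross the process boundary.
try:
    pickle.dumps(lambda n: list(range(n)))
    lambda_picklable = True
except (pickle.PicklingError, AttributeError):
    lambda_picklable = False
print(lambda_picklable)  # False
```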

Detected Complexity Classes

Class        Example
O(1)         Hash table lookup
O(log n)     Binary search
O(√n)        Trial division
O(n)         Linear scan
O(n log n)   Merge sort
O(n²)        Bubble sort
O(n³)        Matrix multiplication
O(2ⁿ)        Subset enumeration

CLI

# Run all benchmarks and complexity analysis
pyforge-benchmark run

# Run only benchmarks
pyforge-benchmark run -b

# Run only complexity analysis
pyforge-benchmark run -c

# Custom benchmarks directory
pyforge-benchmark run -d ./perf_tests

# Verbose output
pyforge-benchmark run -v

# List registered functions
pyforge-benchmark list
pyforge-benchmark list -t complexity

# Show version and system info
pyforge-benchmark info --detailed

Programmatic API

from pyforge_benchmark import main, run_cycle, print_report

# Run and get raw results
results = main(show_results=False)

# Access data
for entry in results["benchmarks"]:
    print(f"{entry['key']}: {entry['avg_time']:.6f}s")

for entry in results["complexity"]:
    print(f"{entry['key']}: {entry['big_o']['complexity']}")

How It Works

Benchmarking

  1. Functions decorated with @benchmark are registered in a singleton registry
  2. Each function runs in an isolated multiprocessing.Process
  3. Iteration count adapts automatically (100 for fast, 5 for slow functions)
  4. GC is disabled during measurement for accuracy
  5. Results are communicated back via multiprocessing.Queue
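The timed core of steps 3–4 can be sketched like this (a simplified, in-process illustration, not the library's actual internals; `measure` is a hypothetical name):

```python
import gc
import time


def measure(func, iterations: int) -> float:
    """Average wall-clock seconds per call, with GC paused during timing."""
    gc.disable()  # step 4: no GC pauses inside the timed region
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            func()
        elapsed = time.perf_counter() - start
    finally:
        gc.enable()  # always re-enable, even if func raises
    return elapsed / iterations


avg = measure(lambda: sum(range(1000)), 100)
```

Per steps 2 and 5, the real runner executes a loop like this inside a multiprocessing.Process and sends the result back to the parent over a multiprocessing.Queue.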

Complexity Analysis

  1. Functions decorated with @complexity_analysis are tested at multiple input sizes
  2. The generator creates test data for each N value
  3. Execution time is measured at N = 1000, 2500, 5000, 10000, 25000, 50000
  4. Log-log regression (log(t) = k × log(n) + c) determines the growth exponent
  5. The exponent maps directly to a Big-O complexity class
  6. R² indicates model fit quality (1.0 = perfect)
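Step 4 amounts to an ordinary least-squares fit on the log-transformed samples, reading the growth exponent off the slope. A sketch (`fit_exponent` is an illustrative helper, not the package's API):

```python
import math


def fit_exponent(sizes: list[int], times: list[float]) -> tuple[float, float]:
    """Fit log(t) = k*log(n) + c; return (growth exponent k, R² of the fit)."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    # Least-squares slope and intercept
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    c = my - k * mx
    # R²: fraction of variance explained by the linear fit
    ss_res = sum((y - (k * x + c)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return k, r2


# Synthetic quadratic timings (t = 2e-9 * n²) recover k ≈ 2, i.e. O(n²)
sizes = [1000, 2500, 5000, 10000, 25000, 50000]
times = [2e-9 * n * n for n in sizes]
k, r2 = fit_exponent(sizes, times)
```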

Project Structure

src/pyforge_benchmark/
├── __init__.py             # Public API exports
├── __main__.py             # python -m support
├── benchmark.py            # @benchmark decorator
├── benchmark_runner.py     # Subprocess benchmark execution
├── cli.py                  # Command-line interface
├── complexity.py           # @complexity_analysis decorator
├── complexity_runner.py    # Subprocess complexity measurement
├── main.py                 # Orchestration
├── py.typed                # PEP 561 type stub marker
├── registry.py             # Singleton function registry
└── reporter.py             # Colored terminal output

Development

# Install in editable mode
pip install -e .

# Lint
ruff check src/
pylint src/pyforge_benchmark/

# Format
ruff format src/

Documentation

Full documentation is available in docs/Documentation.md.

Limitations

  • Python 3.12+ only
  • No async function support
  • All targets and generators must be picklable
  • Single-machine only (no distributed benchmarking)
  • No statistical confidence intervals
  • Designed for personal use, not production CI/CD

License

MIT

Author

Ertan Tunç Türk (ertantuncturk61@gmail.com)

Project details

Download files

  • Source distribution: pyforge_benchmark-0.2.0.tar.gz (27.3 kB)
  • Built distribution: pyforge_benchmark-0.2.0-py3-none-any.whl (25.7 kB)

File details

Details for the file pyforge_benchmark-0.2.0.tar.gz.

File metadata

  • Size: 27.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

  Algorithm    Hash digest
  SHA256       7f6be34daf5f472956b8bd0d985c41802c4bc444738546287cbd16fa5e5fb132
  MD5          f58b9068a7def79ed0cddb03aaa57527
  BLAKE2b-256  363ff3cf223c8486a681866a122b701cefb1c99030eabae529970b8ba48f2f52

Provenance

Attestation bundle published by publish.yml on ertanturk/pyforge-benchmark. Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file pyforge_benchmark-0.2.0-py3-none-any.whl.

File hashes

  Algorithm    Hash digest
  SHA256       a0864ec72f068f85eeb342288cbcf79d28616923aca6d8934f457c37fe2c16f7
  MD5          a23c76aa08ba4b2444e24f28c13f91fb
  BLAKE2b-256  1106689f65cd5b81bcbbfd3fcb9a28c1f1f62807bdafabedc13b79fb01480843

Provenance

Attestation bundle published by publish.yml on ertanturk/pyforge-benchmark. Values shown here reflect the state when the release was signed and may no longer be current.
