A lightweight Python benchmarking framework for personal projects and learning. Zero configuration, decorator-based benchmarks, automatic performance comparison, and built-in complexity analysis.

PyForge Benchmark

A lightweight, zero-dependency Python benchmarking framework for personal projects and learning.

Note: This is a personal/educational project. It is not intended to compete with established benchmarking tools like pytest-benchmark, asv, or pyperf.

Python 3.12+ · License: MIT · Typing: Typed · Code style: Ruff


Features

  • Decorator-based — add @benchmark or @complexity_analysis and you're done
  • Subprocess isolation — each measurement runs in a clean, isolated process
  • Automatic Big-O detection — determines complexity class via log-log regression
  • Zero dependencies — uses only the Python standard library
  • Colored terminal output — aligned, grouped results with Unicode formatting
  • Fully typed — PEP 561 compliant with py.typed marker

Installation

pip install pyforge-benchmark

Or from source:

git clone https://github.com/ertanturk/pyforge-benchmark.git
cd pyforge-benchmark
pip install -e .

Requires Python 3.12+

Quick Start

1. Create a benchmark file

mkdir benchmarks

Create benchmarks/my_benchmarks.py:

from pyforge_benchmark import benchmark, complexity_analysis


@benchmark
def my_function():
    return sum(range(1000))


def generate_data(n: int) -> list[int]:
    return list(range(n))


@complexity_analysis(generator=generate_data)
def linear_search(data: list[int]) -> None:
    for item in data:
        _ = item

2. Run

pyforge-benchmark run

3. See results

════════════════════════════════════════════════════════════════════════
  BENCHMARK RESULTS
════════════════════════════════════════════════════════════════════════

  ● my_benchmarks.py
  ──────────────────────────────────────────────────────────────────────
    12.34 μs           my_function (Line 4)
                       iterations: 100

════════════════════════════════════════════════════════════════════════
  COMPLEXITY ANALYSIS
════════════════════════════════════════════════════════════════════════

  ● my_benchmarks.py
  ──────────────────────────────────────────────────────────────────────
    O(n)               linear_search (Line 13)
                       R² = 0.984

Usage

Benchmark Decorator

# Bare decorator
@benchmark
def fast_function():
    return 1 + 2

# With arguments
@benchmark(args=(10000,))
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return b

Complexity Analysis Decorator

def make_data(n: int) -> list[int]:
    return list(range(n))

@complexity_analysis(generator=make_data)
def bubble_sort(data: list[int]) -> list[int]:
    arr = data.copy()
    for i in range(len(arr)):
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

Generator requirements:

  • Must accept exactly one argument (input size N)
  • Must be a named function (not a lambda)
  • Must be defined at module level
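
These rules follow from the subprocess isolation used for measurement: the generator has to be picklable so it can be handed to a worker process, and lambdas or nested functions cannot be pickled by name. A quick illustrative check (not part of the library):

```python
import pickle


def make_data(n: int) -> list[int]:  # OK: named, module-level, one argument
    return list(range(n))


unpicklable = lambda n: list(range(n))  # Rejected: lambdas cannot be pickled

# A module-level function pickles by reference and round-trips cleanly.
restored = pickle.loads(pickle.dumps(make_data))
assert restored(3) == [0, 1, 2]

# The lambda fails at pickle time, which is why it could never cross the
# process boundary used for isolated measurement.
try:
    pickle.dumps(unpicklable)
except pickle.PicklingError:
    pass  # expected
```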

Detected Complexity Classes

  Class        Example
  O(1)         Hash table lookup
  O(log n)     Binary search
  O(√n)        Trial division
  O(n)         Linear scan
  O(n log n)   Merge sort
  O(n²)        Bubble sort
  O(n³)        Matrix multiplication
  O(2ⁿ)        Subset enumeration
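
How a fitted growth exponent maps to one of these classes can be sketched as follows. The `classify` helper, its thresholds, and the `tol` parameter are all hypothetical, not the library's API; note that O(log n) and O(n log n) are not pure power laws, so a real implementation needs extra handling for them:

```python
def classify(exponent: float, tol: float = 0.2) -> str:
    """Map a fitted log-log slope to the nearest pure power-law class."""
    # Hypothetical thresholds for illustration only.
    power_classes = {0.0: "O(1)", 0.5: "O(√n)", 1.0: "O(n)",
                     2.0: "O(n²)", 3.0: "O(n³)"}
    nearest = min(power_classes, key=lambda k: abs(k - exponent))
    if abs(nearest - exponent) <= tol:
        return power_classes[nearest]
    return "unclassified"
```

For example, a fitted exponent of 1.03 falls within tolerance of 1.0 and maps to O(n).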

CLI

# Run all benchmarks and complexity analysis
pyforge-benchmark run

# Run only benchmarks
pyforge-benchmark run -b

# Run only complexity analysis
pyforge-benchmark run -c

# Custom benchmarks directory
pyforge-benchmark run -d ./perf_tests

# Verbose output
pyforge-benchmark run -v

# List registered functions
pyforge-benchmark list
pyforge-benchmark list -t complexity

# Show version and system info
pyforge-benchmark info --detailed

Programmatic API

from pyforge_benchmark import main, run_cycle, print_report

# Run and get raw results
results = main(show_results=False)

# Access data
for entry in results["benchmarks"]:
    print(f"{entry['key']}: {entry['avg_time']:.6f}s")

for entry in results["complexity"]:
    print(f"{entry['key']}: {entry['big_o']['complexity']}")

How It Works

Benchmarking

  1. Functions decorated with @benchmark are registered in a singleton registry
  2. Each function runs in an isolated multiprocessing.Process
  3. Iteration count adapts automatically (100 for fast, 5 for slow functions)
  4. GC is disabled during measurement for accuracy
  5. Results are communicated back via multiprocessing.Queue
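
The steps above can be sketched with the standard library alone. This is an illustrative outline, not the framework's actual code: the helper names (`measure_avg`, `bench`) and the 0.01 s probe threshold are assumptions.

```python
import gc
import time
from multiprocessing import Process, Queue


def measure_avg(fn, iterations: int) -> float:
    """Time `fn` with GC disabled (step 4); return average seconds per call."""
    gc.disable()
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            fn()
        elapsed = time.perf_counter() - start
    finally:
        gc.enable()
    return elapsed / iterations


def _worker(fn, iterations: int, queue: Queue) -> None:
    # Step 5: send the result back over a multiprocessing.Queue
    queue.put(measure_avg(fn, iterations))


def bench(fn) -> float:
    # Step 3: one probe call picks the iteration count (fast -> 100, slow -> 5)
    probe = measure_avg(fn, 1)
    iterations = 100 if probe < 0.01 else 5

    # Step 2: run the real measurement in an isolated process
    queue: Queue = Queue()
    proc = Process(target=_worker, args=(fn, iterations, queue))
    proc.start()
    avg = queue.get()
    proc.join()
    return avg
```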

Complexity Analysis

  1. Functions decorated with @complexity_analysis are tested at multiple input sizes
  2. The generator creates test data for each N value
  3. Execution time is measured at N = 500, 1000, 2500, 5000, 10000, 25000
  4. Log-log regression (log(t) = k × log(n) + c) determines the growth exponent
  5. The exponent maps directly to a Big-O complexity class
  6. R² indicates model fit quality (1.0 = perfect)
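
Steps 4–6 can be sketched as an ordinary least-squares fit in log-log space, using only the standard library. The `fit_exponent` helper below is illustrative, not the library's API:

```python
import math


def fit_exponent(sizes: list[int], times: list[float]) -> tuple[float, float]:
    """Fit log(t) = k*log(n) + c; return (k, R²)."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    count = len(xs)
    mx = sum(xs) / count
    my = sum(ys) / count
    # Least-squares slope k is the growth exponent.
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    c = my - k * mx
    # R²: fraction of variance in log(t) explained by the linear model.
    ss_res = sum((y - (k * x + c)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return k, r2


# Synthetic quadratic timings at the documented N values: the fit
# recovers an exponent of about 2.0 with R² of about 1.0.
sizes = [500, 1000, 2500, 5000, 10000, 25000]
times = [1e-9 * n * n for n in sizes]
k, r2 = fit_exponent(sizes, times)
```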

Project Structure

src/pyforge_benchmark/
├── __init__.py             # Public API exports
├── __main__.py             # python -m support
├── benchmark.py            # @benchmark decorator
├── benchmark_runner.py     # Subprocess benchmark execution
├── cli.py                  # Command-line interface
├── complexity.py           # @complexity_analysis decorator
├── complexity_runner.py    # Subprocess complexity measurement
├── main.py                 # Orchestration
├── py.typed                # PEP 561 type stub marker
├── registry.py             # Singleton function registry
└── reporter.py             # Colored terminal output

Development

# Install in editable mode
pip install -e .

# Lint
ruff check src/
pylint src/pyforge_benchmark/

# Format
ruff format src/

Limitations

  • Python 3.12+ only
  • No async function support
  • All targets and generators must be picklable
  • Single-machine only (no distributed benchmarking)
  • No statistical confidence intervals
  • Designed for personal use, not production CI/CD

License

MIT

Author

Ertan Tunç Türk <ertantuncturk61@gmail.com>
