A simple, even if not the most efficient, way to benchmark code for educational purposes.

Project description

benchmark-my-code

A robust, student-proof benchmarking framework designed for education. It bridges the gap between raw execution timing (like timeit) and complex profiling tools, providing a safe, statistically sound environment for comparing algorithmic approaches.

The module's objective is to support learning Python by making it easier to understand both the performance and correctness of code.

Core Workflows

benchmark-my-code supports two main usage patterns:

1. Ad-Hoc Mode (Exploration & Comparison)

Designed for rapid iteration, comparing ideas, or demonstrating a concept with zero friction. You can easily compare multiple implementations using decorators.

from benchmark_my_code import benchit, run_benchmarks

@benchit
def sum_using_string(number):
    return sum(int(digit) for digit in str(number))

@benchit
def sum_using_modulo(number):
    total = 0
    while number > 0:
        total += number % 10
        number //= 10
    return total

if __name__ == '__main__':
    # Run all registered functions against varying inputs.
    # validate=True ensures all implementations return the exact same result.
    run_benchmarks(
        variants=[(123,), (456789,), (1234567890,)], 
        validate=True 
    )

2. Challenge Mode (Structured Learning)

Designed for university settings or self-guided learning. A "Challenge" is provided by an external learning package and includes hidden reference implementations, test cases, and staged guidance.

from benchmark_my_code import challenge
from bmc_cs101 import SortingChallenges

@challenge(SortingChallenges.sort_level_1)
def my_sort(input_list):
    return sorted(input_list)

In this mode, the framework acts as a mentor: it validates the function signature, prevents accidental cheating (e.g., modifying inputs in place), provides staged feedback, and scales absolute timeouts based on the reference implementation's performance to ensure fairness across different hardware.
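One of those safeguards, detecting in-place modification of inputs, is a general technique that can be sketched independently of the framework (the helper below is hypothetical, not part of the library's API):

```python
import copy

def check_no_mutation(func, *args):
    """Call func and verify it did not modify its arguments in place.

    Sketch of the technique: snapshot the inputs with a deep copy,
    call the function, then compare the live arguments to the snapshot.
    """
    snapshot = copy.deepcopy(args)
    result = func(*args)
    if args != snapshot:
        raise AssertionError(f"{func.__name__} modified its inputs in place")
    return result
```

A student solution that calls `list.sort()` on its input instead of `sorted()` would trip this check, turning a subtle grading inconsistency into an immediate, explainable error.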

Features & Pluggable Output

  • Correctness Validation: Benchmarks are useless if the code is wrong. The framework supports validating the output of functions, either against each other (consensus) or against a hidden reference.
  • Intelligent Profiling: Memory analysis is slower than time profiling, so the framework profiles adaptively: it establishes timing stability first, then strategically samples memory usage to sketch Big-O trends.
  • Pluggable Output: Output adapts to the environment: a result object for programmatic use, a clean terminal table for the CLI, data-frame-ready structures for notebooks (matplotlib/seaborn) without forcing heavy dependencies, and JSON for automated grading systems.
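Consensus validation, in its simplest form, amounts to running every candidate on the same inputs and checking that they agree. A minimal sketch of the idea (the function name is an assumption, not the framework's API):

```python
def validate_by_consensus(functions, *args):
    """Run every implementation on the same inputs and confirm they agree.

    With no hidden reference available, correctness is approximated by
    consensus: if all candidates return the same value, that value is
    taken as the expected result.
    """
    results = [(f.__name__, f(*args)) for f in functions]
    expected = results[0][1]
    disagreeing = [name for name, value in results if value != expected]
    if disagreeing:
        raise ValueError(f"no consensus for {args}: {disagreeing} disagree")
    return expected
```

Consensus cannot prove correctness (all implementations could share a bug), but it reliably catches the common case where one rewrite diverges from the others.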

For details on how the engine achieves reliable results and handles safety, see ADR.md.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

benchmark_my_code-0.1.0.tar.gz (34.3 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

benchmark_my_code-0.1.0-py3-none-any.whl (12.0 kB)

Uploaded Python 3

File details

Details for the file benchmark_my_code-0.1.0.tar.gz.

File metadata

  • Download URL: benchmark_my_code-0.1.0.tar.gz
  • Upload date:
  • Size: 34.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for benchmark_my_code-0.1.0.tar.gz
  • SHA256: 754b7c5972e1159945803eb15fa167a59ab789b44da72503b0403411b9f32d9a
  • MD5: 2cb3b20aa9bf845be068550075eb66ab
  • BLAKE2b-256: 1e4028f3b3c845048c22b8dd242ae8573205d4982d8b0a648bf951bfa923888a

See more details on using hashes here.
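To verify a download against the published digests, you can compute the file's SHA-256 with Python's standard hashlib (a general technique, not specific to this package):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result for benchmark_my_code-0.1.0.tar.gz against the
# published SHA256 digest above before trusting the download.
```

For automated installs, pip can enforce this check itself via `--require-hashes` with `--hash=sha256:...` entries in a requirements file.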

Provenance

The following attestation bundles were made for benchmark_my_code-0.1.0.tar.gz:

Publisher: publish.yml on michalporeba/benchmark-my-code

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file benchmark_my_code-0.1.0-py3-none-any.whl.

File metadata

File hashes

Hashes for benchmark_my_code-0.1.0-py3-none-any.whl
  • SHA256: c3d5e19d783246056f7c58de50f947dc3266f8e99de6f98f719c1ac17266ddcf
  • MD5: 117acfc77472fe3f8b1722196f4e8f66
  • BLAKE2b-256: f9f63cdd6df37a29c83b46f64092924824e6d972a6be0627afe83f7b319376fd

See more details on using hashes here.

Provenance

The following attestation bundles were made for benchmark_my_code-0.1.0-py3-none-any.whl:

Publisher: publish.yml on michalporeba/benchmark-my-code

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
