benchmark-my-code
A simple, even if not the most efficient, way to benchmark code for educational purposes.
A robust, student-proof benchmarking framework designed for education. It bridges the gap between raw execution timing (like timeit) and complex profiling tools, providing a safe, statistically sound environment for comparing algorithmic approaches.
The module's objective is to support learning Python by making it easier to understand both the performance and correctness of code.
Core Workflows
benchmark-my-code supports two main usage patterns:
1. Ad-Hoc Mode (Exploration & Comparison)
Designed for rapid iteration, comparing ideas, or demonstrating a concept with zero friction. You can easily compare multiple implementations using decorators.
```python
from benchmark_my_code import benchit, run_benchmarks

@benchit
def sum_using_string(number):
    return sum(int(digit) for digit in str(number))

@benchit
def sum_using_modulo(number):
    total = 0
    while number > 0:
        total += number % 10
        number //= 10
    return total

if __name__ == '__main__':
    # Run all registered functions against varying inputs.
    # validate=True ensures all implementations return the exact same result.
    run_benchmarks(
        variants=[(123,), (456789,), (1234567890,)],
        validate=True,
    )
```
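Under the hood, a decorator like `@benchit` presumably keeps a registry of functions that `run_benchmarks` later iterates over. A minimal sketch of that pattern, under assumptions: the `_REGISTRY` list, the `repeats` parameter, and the simplified timing loop are illustrative, not the library's actual internals.

```python
import time

_REGISTRY = []  # hypothetical registry; the real library's internals may differ

def benchit(func):
    """Register a function for benchmarking and return it unchanged."""
    _REGISTRY.append(func)
    return func

def run_benchmarks(variants, validate=False, repeats=5):
    """Time every registered function on each argument tuple in `variants`."""
    all_results = {}
    for args in variants:
        per_func = {}
        for func in _REGISTRY:
            start = time.perf_counter()
            for _ in range(repeats):
                result = func(*args)
            elapsed = (time.perf_counter() - start) / repeats
            per_func[func.__name__] = (result, elapsed)
        if validate:
            # Consensus check: every implementation must agree on the output.
            values = {repr(r) for r, _ in per_func.values()}
            if len(values) > 1:
                raise ValueError(f"implementations disagree on {args}: {values}")
        all_results[args] = per_func
    return all_results
```

Registering via a decorator keeps the call site clean: adding a third implementation is just another decorated function, with no changes to the benchmarking call.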
2. Challenge Mode (Structured Learning)
Designed for university settings or self-guided learning. A "Challenge" is provided by an external learning package and includes hidden reference implementations, test cases, and staged guidance.
```python
from benchmark_my_code import challenge
from bmc_cs101 import SortingChallenges

@challenge(SortingChallenges.sort_level_1)
def my_sort(input_list):
    return sorted(input_list)
```
In this mode, the framework acts as a mentor: it validates the function signature, guards against accidental cheating (e.g., modifying inputs in place), provides staged feedback, and scales timeouts relative to the reference implementation's measured performance, keeping the challenge fair across different hardware.
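One plausible way to detect in-place input modification is to snapshot the arguments with a deep copy before the call and compare afterwards. The helper name `call_with_mutation_check` is illustrative, not the library's API:

```python
import copy

def call_with_mutation_check(func, *args):
    """Call `func` and raise if it mutated any of its arguments in place."""
    snapshots = copy.deepcopy(args)  # preserve the pre-call state
    result = func(*args)
    if any(before != after for before, after in zip(snapshots, args)):
        raise ValueError(f"{func.__name__} modified its input in place")
    return result
```

A student's `my_sort` that calls `input_list.sort()` would trip this check, while one returning `sorted(input_list)` would pass.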
Features & Pluggable Output
- Correctness Validation: Benchmarks are useless if the code is wrong. The framework supports validating the output of functions, either against each other (consensus) or against a hidden reference.
- Intelligent Profiling: Memory analysis is slower than time profiling. The framework uses adaptive profiling: establishing time stability first, then strategically sampling memory to sketch Big-O trends.
- Pluggable Output: Outputs adapt to the environment: an object for programmatic use, a clean terminal table for CLI, data-frame ready structures for notebooks (matplotlib/seaborn) without forcing heavy dependencies, and JSON for automated grading systems.
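The adaptive profiling described above can be sketched with the standard library: repeat timing until run-to-run variation settles, then take a separate memory sample with `tracemalloc`. The function names, thresholds, and run counts here are illustrative assumptions, not the engine's actual parameters:

```python
import statistics
import time
import tracemalloc

def time_until_stable(func, arg, min_runs=5, max_runs=50, rel_tol=0.05):
    """Repeat timing until relative standard deviation drops below rel_tol."""
    samples = []
    for _ in range(max_runs):
        start = time.perf_counter()
        func(arg)
        samples.append(time.perf_counter() - start)
        if len(samples) >= min_runs:
            mean = statistics.mean(samples)
            if mean and statistics.stdev(samples) / mean < rel_tol:
                break  # timings have stabilised
    return statistics.mean(samples)

def peak_memory(func, arg):
    """Sample peak allocation for one call using tracemalloc."""
    tracemalloc.start()
    try:
        func(arg)
        _, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    return peak
```

Calling `peak_memory` at a few increasing input sizes gives the data points needed to sketch a Big-O growth trend without paying the tracing cost on every timing run.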
For details on how the engine achieves reliable results and handles safety, see ADR.md.
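The pluggable-output idea amounts to a renderer dispatch: one result structure, several targets. A minimal sketch, assuming a flat `{name: seconds}` result dict (the real library's result objects are richer):

```python
import json

def render(results, fmt="object"):
    """Dispatch benchmark results to one of several output targets."""
    renderers = {
        "object": lambda r: r,                      # programmatic use
        "json": lambda r: json.dumps(r, indent=2),  # automated grading systems
        "table": lambda r: "\n".join(               # clean terminal output
            f"{name:<20} {seconds:.6f}s" for name, seconds in r.items()
        ),
    }
    return renderers[fmt](results)
```

Because each renderer is just an entry in a mapping, notebook or data-frame output can be added without the core engine depending on matplotlib or pandas.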