A lightweight python package to time python functions and blocks, and bench machine learning models
QuickBench
The Essential Performance Toolkit for Python.
A Minerva AI Project.
A lightweight, zero-dependency Python utility for timing code execution. QuickBench provides both a decorator and a context manager that make it easy to measure how long functions or code blocks take to run.
QuickBench is an all-in-one performance toolkit for Python developers. Whether you are optimizing a loop, comparing machine learning models, or evaluating LLM latency, QuickBench handles the metrics so you can focus on the code.
Why QuickBench?
- Universal Timer: Benchmark functions or blocks of code with a simple decorator.
- ML Benchmarking: Automatically compare sklearn, xgboost, etc. (Accuracy, F1, RMSE).
- LLM Benchmarking: Measure tokens/sec and latency for GPT, Llama, or Claude wrappers.
- pandas Output: All results return clean DataFrames ready for analysis.
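The LLM metrics above (latency and tokens/sec) can be sketched in plain Python. Note this is only an illustration of what those numbers mean, not QuickBench's actual LLM API, and `fake_llm` is a stand-in for any model wrapper you might benchmark:

```python
import time

def measure_llm(call, prompt):
    """Return (latency_s, tokens_per_sec) for a single generation call.

    `call` is any function that takes a prompt and returns generated text;
    tokens are approximated here by whitespace splitting.
    """
    start = time.perf_counter()
    text = call(prompt)
    latency = time.perf_counter() - start
    tokens = len(text.split())
    return latency, tokens / latency if latency > 0 else float("inf")

def fake_llm(prompt):
    # Stand-in for a real model wrapper (e.g. an OpenAI or llama.cpp client)
    time.sleep(0.05)
    return "hello " * 20

latency, tps = measure_llm(fake_llm, "Say hello")
print(f"Latency: {latency:.3f}s, Throughput: {tps:.1f} tok/s")
```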
Features
- Decorator Support: Time entire functions with a single line (@monitor).
- Context Manager: Time specific blocks of code inside a function (with monitor():).
- Human Readable: Automatically formats output to ns, µs, ms, or s.
- Zero Dependencies: Pure Python, no heavy libraries required.
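The human-readable formatting can be illustrated with a small helper. The exact thresholds QuickBench uses are not documented on this page, so the cutoffs below are an assumption, not the library's implementation:

```python
def format_duration(seconds):
    """Format a duration in the most natural unit (ns, µs, ms, or s)."""
    if seconds < 1e-6:
        return f"{seconds * 1e9:.2f} ns"   # below a microsecond
    if seconds < 1e-3:
        return f"{seconds * 1e6:.2f} µs"   # below a millisecond
    if seconds < 1:
        return f"{seconds * 1e3:.2f} ms"   # below a second
    return f"{seconds:.2f} s"

print(format_duration(1.5))     # 1.50 s
print(format_duration(3e-4))    # 300.00 µs
```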
Installation
pip install QuickBench
Tutorial & Usage
1. The Universal Timer (@monitor)
Stop writing start = time.time() manually. Use the decorator to time functions, or the context manager to time specific blocks.
A. Function Decorator
Use this to measure how long a specific function takes to run.
from QuickBench import monitor
import time
@monitor
def heavy_processing():
    # Simulates a slow task
    time.sleep(1.5)
    return "Done"

heavy_processing()
# Output: [heavy_processing] finished in 1.50 s
B. Context Manager (Block Timer)
Use this when you only want to time a few lines of code inside a larger function.
from QuickBench import monitor
import time
def data_pipeline():
    print("Loading data...")
    with monitor(label="Data Cleaning"):
        # Time only this specific part
        time.sleep(0.3)
    print("Pipeline finished.")
2. The Auto-ML Benchmarker
QuickBench automatically detects whether your problem is classification or regression and calculates the appropriate metrics (Accuracy/F1 vs. MSE/R2).
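QuickBench's actual detection logic is not shown on this page. One common heuristic (assumed here for illustration, not necessarily what MLBencher does) is to inspect the target values:

```python
def detect_task(y):
    """Guess 'classification' vs 'regression' from the target values.

    Heuristic only: string/bool or integer targets look like class labels;
    float targets look like regression unless they are all whole numbers
    drawn from a small set of distinct values.
    """
    if any(isinstance(v, (str, bool)) for v in y):
        return "classification"
    if all(isinstance(v, int) for v in y):
        return "classification"
    if all(float(v).is_integer() for v in y) and len(set(y)) <= 20:
        return "classification"
    return "regression"

print(detect_task([0, 1, 1, 0]))         # classification
print(detect_task([1.2, 3.4, 2.8]))      # regression
```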
Step 1: Define your models
You can use any model that follows the scikit-learn API (has .predict()).
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
models = {
    "Logistic": LogisticRegression(),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier()
}
Step 2: Run the Benchmark
Pass your dictionary of models along with your test data.
from QuickBench.ai import MLBencher
# Assuming you have X_test and y_test ready
bencher = MLBencher(models, X_test, y_test)
results = bencher.run()
print(results)
Output (Returns a Pandas DataFrame):
| Model Type | Name | Primary Score | F1 Score | Latency (s) |
|---|---|---|---|---|
| Classification | Random Forest | 0.9820 | 0.9815 | 0.1204 |
| Classification | Decision Tree | 0.9650 | 0.9648 | 0.0052 |
| Classification | Logistic | 0.9400 | 0.9390 | 0.0021 |
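For regression targets, the page above says MLBencher switches to MSE/R2 automatically. What the benchmarker presumably computes for each model can be sketched in plain Python, here with a trivial mean-predictor standing in for a real regressor (this is an illustration of the metrics, not QuickBench's internals):

```python
import time

def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination (1 - SS_res / SS_tot)."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

class MeanModel:
    """Trivial stand-in model: always predicts the training mean."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self
    def predict(self, X):
        return [self.mean_] * len(X)

X_test, y_test = [[0], [1], [2], [3]], [1.0, 3.0, 5.0, 7.0]
model = MeanModel().fit(X_test, y_test)

start = time.perf_counter()
pred = model.predict(X_test)
latency = time.perf_counter() - start
print(f"MSE={mse(y_test, pred):.2f}  R2={r2(y_test, pred):.2f}  Latency={latency:.6f}s")
# e.g. MSE=5.00  R2=0.00  (latency varies per run)
```

A model that always predicts the mean has R2 of exactly 0, which makes it a useful baseline when reading the benchmark table.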
Contributing & Issues
Contributions are what keep this project going, and we would love yours. Found a bug? Want to add a feature?
Report Issues: GitHub Issues Page
Source Code: GitHub Repository
Branding
QuickBench is proudly developed and maintained by Minerva AI. Empowering developers with wisdom and speed.
File details
Details for the file quickbench-0.0.122.tar.gz.
File metadata
- Download URL: quickbench-0.0.122.tar.gz
- Upload date:
- Size: 6.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | bce6492f8757b762b630ceda088d628e5856c17889b0e107d250d0f34c8cba63 |
| MD5 | 81e6dfb5544a0e7b0cbaa1d8bb74116e |
| BLAKE2b-256 | d24b84fda63cdf05543156875e310e781bac10db8cbfd5b98c63602fcf097b4c |
File details
Details for the file quickbench-0.0.122-py3-none-any.whl.
File metadata
- Download URL: quickbench-0.0.122-py3-none-any.whl
- Upload date:
- Size: 6.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 90b629ae0c0a29c12acbebc7a87f6e51b374149066c2f51671aee437b08d448b |
| MD5 | dea781f78655427236c4f055b75bd906 |
| BLAKE2b-256 | 2671ee28b4b3abf5be2bab44137ab6c3ca319d8dc1a44beb4e242e84c563b2e1 |