
A lightweight python package to time python functions and blocks, and bench machine learning models

Project description

QuickBench


The Essential Performance Toolkit for Python.

A Minerva AI Project.


A lightweight, zero-dependency Python utility for timing code execution. QuickBench provides both a decorator and a context manager that make it easy to measure how long functions or code blocks take to run.

QuickBench is the all-in-one performance toolkit for Python developers. Whether you are optimizing a for loop, comparing Machine Learning models, or evaluating LLM latency, QuickBench handles the metrics so you can focus on the code.

Why QuickBench?

  • Universal Timer: Benchmark functions or blocks of code with a simple decorator.
  • ML Benchmarking: Automatically compare sklearn, xgboost, etc. (Accuracy, F1, RMSE).
  • LLM Benchmarking: Measure Token/sec and Latency for GPT, Llama, or Claude wrappers.
  • pandas Output: All results return clean DataFrames ready for analysis.

Features

  • Decorator Support: Time entire functions with a single line (@monitor).
  • Context Manager: Time specific blocks of code inside a function (with monitor():).
  • Human Readable: Automatically formats output to ns, µs, ms, or s.
  • Zero Dependencies: Pure Python, no heavy libraries required.
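For intuition, automatic unit selection like the "Human Readable" feature above could be sketched as follows. This is a hypothetical helper, not QuickBench's actual code; `format_duration` is an illustrative name.

```python
# Minimal sketch of human-readable duration formatting (ns / µs / ms / s),
# assuming thresholds at each factor of 1000. Not QuickBench's actual source.
def format_duration(seconds: float) -> str:
    if seconds < 1e-6:
        return f"{seconds * 1e9:.2f} ns"
    if seconds < 1e-3:
        return f"{seconds * 1e6:.2f} µs"
    if seconds < 1:
        return f"{seconds * 1e3:.2f} ms"
    return f"{seconds:.2f} s"

print(format_duration(1.5))      # 1.50 s
print(format_duration(0.00042))  # 420.00 µs
```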

Installation

pip install QuickBench

Tutorial & Usage

  1. The Universal Timer (@monitor)

Stop writing start = time.time() manually. Use the decorator to time functions, or the context manager to time specific blocks.

A. Function Decorator

Use this to measure how long a specific function takes to run.

from QuickBench import monitor
import time

@monitor
def heavy_processing():
    # Simulates a slow task
    time.sleep(1.5)
    return "Done"

heavy_processing()
# Output: [heavy_processing] finished in 1.50 s
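For intuition, a timing decorator like @monitor could be built as in the sketch below. This is illustrative only, not QuickBench's actual source; `monitor_sketch` is a hypothetical name.

```python
import functools
import time

# Illustrative sketch of a timing decorator in the spirit of @monitor
# (not QuickBench's actual source).
def monitor_sketch(func):
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"[{func.__name__}] finished in {elapsed:.2f} s")
        return result
    return wrapper

@monitor_sketch
def quick_task():
    return "Done"

quick_task()  # prints a timing line, then returns "Done"
```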

B. Context Manager (Block Timer)

Use this when you only want to time a few lines of code inside a larger function.

from QuickBench import monitor
import time

def data_pipeline():
    print("Loading data...")
    
    with monitor(label="Data Cleaning"):
        # Time only this specific part
        time.sleep(0.3)
    
    print("Pipeline finished.")
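A block timer like this can be sketched with contextlib. Again this is a hypothetical reimplementation, not QuickBench's source; here the sketch also yields a dict so the measured time is available after the block.

```python
import contextlib
import time

# Illustrative sketch of a block timer in the spirit of `with monitor():`
# (not QuickBench's actual source). Yields a dict that receives the elapsed
# time when the block exits.
@contextlib.contextmanager
def monitor_sketch(label="block"):
    timing = {}
    start = time.perf_counter()
    try:
        yield timing
    finally:
        timing["elapsed"] = time.perf_counter() - start
        print(f"[{label}] finished in {timing['elapsed']:.2f} s")

with monitor_sketch(label="Data Cleaning") as t:
    time.sleep(0.05)
# t["elapsed"] now holds the measured duration in seconds
```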
  2. The Auto-ML Benchmarker

QuickBench automatically detects whether your problem is classification or regression and calculates the appropriate metrics (Accuracy/F1 vs. MSE/R²).

Step 1: Define your models

You can use any model that follows the scikit-learn API (i.e., has a .predict() method).

from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

models = {
    "Logistic": LogisticRegression(),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier()
}
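Because only a .predict() method is required, a hand-rolled object should plug in alongside scikit-learn estimators. The baseline class below is a hypothetical example, not part of QuickBench.

```python
from collections import Counter

# Hypothetical duck-typed model: anything with .predict() should work,
# not just scikit-learn estimators.
class MajorityClassModel:
    """Trivial baseline that always predicts the most common training label."""

    def fit(self, X, y):
        self.majority_ = Counter(y).most_common(1)[0][0]
        return self

    def predict(self, X):
        return [self.majority_] * len(X)

baseline = MajorityClassModel().fit([[0]] * 3, [0, 1, 1])
print(baseline.predict([[0], [0]]))  # [1, 1]
```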

Step 2: Run the Benchmark

Pass your dictionary of models along with your test data.

from QuickBench.ai import MLBencher

# Assuming you have X_test and y_test ready
bencher = MLBencher(models, X_test, y_test)
results = bencher.run()

print(results)

Output (Returns a Pandas DataFrame):

Model Type      Name           Primary Score  F1 Score  Latency (s)
Classification  Random Forest  0.9820         0.9815    0.1204
Classification  Decision Tree  0.9650         0.9648    0.0052
Classification  Logistic       0.9400         0.9390    0.0021
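A rough sketch of what a benchmark loop like this might do under the hood is shown below. The real internals may differ; here accuracy is computed by hand and a list of dicts stands in for the pandas DataFrame.

```python
import time

# Rough sketch of a model benchmark loop in the spirit of MLBencher.run()
# (not QuickBench's actual source): time each .predict() call, score it,
# and collect one row per model.
def benchmark(models, X_test, y_test):
    rows = []
    for name, model in models.items():
        start = time.perf_counter()
        preds = model.predict(X_test)
        latency = time.perf_counter() - start
        accuracy = sum(p == t for p, t in zip(preds, y_test)) / len(y_test)
        rows.append({"Name": name,
                     "Primary Score": round(accuracy, 4),
                     "Latency (s)": round(latency, 4)})
    # best score first, matching the table above
    return sorted(rows, key=lambda r: r["Primary Score"], reverse=True)

class _AlwaysOne:
    """Dummy model used only for this demo."""
    def predict(self, X):
        return [1] * len(X)

results = benchmark({"baseline": _AlwaysOne()}, [0, 0, 0], [1, 1, 0])
print(results[0]["Primary Score"])  # 0.6667
```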

Contributing & Issues

Contributions are very welcome. Found a bug? Want to add a feature?

Report Issues: GitHub Issues Page

Source Code: GitHub Repository

Branding

QuickBench is proudly developed and maintained by Minerva AI. Empowering developers with wisdom and speed.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

quickbench-0.0.121.tar.gz (6.0 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

quickbench-0.0.121-py3-none-any.whl (6.1 kB view details)

Uploaded Python 3

File details

Details for the file quickbench-0.0.121.tar.gz.

File metadata

  • Download URL: quickbench-0.0.121.tar.gz
  • Upload date:
  • Size: 6.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for quickbench-0.0.121.tar.gz
Algorithm Hash digest
SHA256 d6a48c3e0289e2fbf5b2f94c76f2e2610570879d74fdb4db0b41e23327fb4f6a
MD5 aac375020d3ade1611dfd5c4207a66c3
BLAKE2b-256 a3138afed44cf386eac9550908284c27fb9df8fdc828bf1997bdf138874a6f7f

See more details on using hashes here.

File details

Details for the file quickbench-0.0.121-py3-none-any.whl.

File metadata

  • Download URL: quickbench-0.0.121-py3-none-any.whl
  • Upload date:
  • Size: 6.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for quickbench-0.0.121-py3-none-any.whl
Algorithm Hash digest
SHA256 44eb1914e0739838268aa0bff924c8a2029e2275087d0af745c2c2aa24d90700
MD5 165a07a0d241a0973b0aeaeaf431d788
BLAKE2b-256 478c8ba54db7dce862c93651def656ef2d6b4d13a2d9e9cf1c5e2784da9a9497

See more details on using hashes here.
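To check a downloaded file against the digests published above, you can hash it locally with the standard library. The helper below is an illustrative snippet, not part of QuickBench; the demo hashes a throwaway empty file, whose SHA256 digest is well known.

```python
import hashlib
import tempfile

# Compute the SHA256 digest of a file in chunks, so large downloads
# don't need to fit in memory.
def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway empty file (illustrative path only).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    empty_path = tmp.name
digest = sha256_of(empty_path)
print(digest)  # digest of empty input
```

To verify a real download, run `sha256_of("quickbench-0.0.121.tar.gz")` (or the .whl) and compare the result against the SHA256 value listed in the tables above.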
