# blkarbs-profiling

Reusable timing, throughput, and memory profiling utilities.

Find out how long your code takes and how much memory it uses. That's it.
## Install

```bash
uv add blkarbs-profiling
```

Or with pip:

```bash
pip install blkarbs-profiling
```
## Quick start

### Time a block of code

```python
from blkarbs_profiling import ComponentTimer

with ComponentTimer() as timer:
    # your slow code here
    data = load_big_dataset()

print(f"Took {timer.elapsed:.2f} seconds")
print(f"Used {timer.memory_delta:.2f} GB of memory")
```
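For a rough idea of what a context-manager timer like this does under the hood, here is a minimal stdlib-only sketch. It uses `time.perf_counter` for wall time and `tracemalloc` for Python-level allocations; the real `ComponentTimer` tracks process memory via `psutil`, so this is an illustration of the pattern, not the library's implementation.

```python
import time
import tracemalloc

class MiniTimer:
    """Toy context-manager timer: wall time plus Python-level memory delta."""

    def __enter__(self):
        tracemalloc.start()
        self._start = time.perf_counter()
        return self

    def __exit__(self, *exc):
        self.elapsed = time.perf_counter() - self._start
        current, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        # Python allocations only, converted to GB for symmetry with the README
        self.memory_delta = current / 1e9
        self.peak_memory = peak / 1e9
        return False  # never swallow exceptions from the block

with MiniTimer() as t:
    data = [0] * 1_000_000  # stand-in for real work

print(f"Took {t.elapsed:.4f}s, peak ~{t.peak_memory:.4f} GB")
```

The `__exit__` returning `False` matters: a profiling wrapper should never hide exceptions raised by the code it times.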
### Track multiple steps and get a summary table

```python
from blkarbs_profiling import ProfilingSession, ComponentTimer

session = ProfilingSession()

# Step 1: Load data
with ComponentTimer() as timer:
    data = load_data()
session.record("Load Data", timer.elapsed, count=len(data),
               memory_delta=timer.memory_delta, peak_memory=timer.peak_memory)

# Step 2: Train model
with ComponentTimer() as timer:
    model = train(data)
session.record("Train Model", timer.elapsed, count=1,
               memory_delta=timer.memory_delta, peak_memory=timer.peak_memory)

# Print a nice table
session.print_summary("My Pipeline")
```
Output:

```
========================================================================================================================
My Pipeline
========================================================================================================================
Component                     Time        Count    Throughput      Per-Item     Mem Δ      Peak
------------------------------------------------------------------------------------------------------------------------
Load Data                    2.50s           50    20.0 items/s      50.0ms     0.30G     4.20G
Train Model                 15.00s            1     0.1 items/s   15000.0ms     1.20G     5.40G
========================================================================================================================
TOTAL                       17.50s
========================================================================================================================
```
### Use the shortcut context manager

If you don't want to manually call `session.record()` every time:

```python
from blkarbs_profiling import ProfilingSession, profile_operation

session = ProfilingSession()

with profile_operation("Load Data", session, count=50):
    data = load_data()

with profile_operation("Train Model", session, count=1):
    model = train(data)

session.print_summary("My Pipeline")
```

Pass `session=None` and the code still runs; it just doesn't record anything. No need for `if/else` at your call sites.
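The `session=None` convenience is the classic null-object style of API. A minimal sketch of how such a wrapper can be written with `contextlib` follows; the `ToySession`/`profile_op` names are stand-ins for illustration, not the library's source.

```python
import time
from contextlib import contextmanager

class ToySession:
    """Stand-in for ProfilingSession: just collects (name, elapsed, count)."""
    def __init__(self):
        self.rows = []

    def record(self, name, elapsed, count=1):
        self.rows.append((name, elapsed, count))

@contextmanager
def profile_op(name, session=None, count=1):
    start = time.perf_counter()
    try:
        yield
    finally:
        # With session=None this branch is a silent no-op, so callers
        # never need an if/else around the `with` block.
        if session is not None:
            session.record(name, time.perf_counter() - start, count=count)

session = ToySession()
with profile_op("work", session, count=3):
    sum(range(1000))

with profile_op("untracked", None):  # runs fine, records nothing
    sum(range(1000))

print(session.rows)
```

Recording in a `finally` block means the elapsed time is captured even when the timed code raises.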
### Time things inside a loop (fast)

`ComponentTimer` checks memory on every call, which is slow (~0.5 ms). For tight loops, use `AccumulatingTimer`, which only calls `time.perf_counter()` (~150 ns).

```python
from blkarbs_profiling import AccumulatingTimer, ProfilingSession

session = ProfilingSession()
timer = AccumulatingTimer("Process Bars")

for bar in price_bars:
    with timer:
        strategy.on_bar(bar)

# Dump the total into the session
timer.flush(session)
session.print_summary("Backtest")
```
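An accumulating timer of this kind takes only a few lines of stdlib Python. The sketch below shows the idea of a re-enterable context manager that sums `perf_counter` spans; attribute names like `total` and `calls` are assumptions for illustration, not the library's documented API.

```python
import time

class MiniAccumulatingTimer:
    """Toy accumulator: re-enterable context manager summing perf_counter spans."""
    def __init__(self, name):
        self.name = name
        self.total = 0.0
        self.calls = 0

    def __enter__(self):
        self._start = time.perf_counter()
        return self

    def __exit__(self, *exc):
        # Only two perf_counter calls per iteration, no memory polling.
        self.total += time.perf_counter() - self._start
        self.calls += 1
        return False

timer = MiniAccumulatingTimer("Process Bars")
for _ in range(1000):
    with timer:
        pass  # stand-in for strategy.on_bar(bar)

print(f"{timer.name}: {timer.calls} calls, {timer.total:.6f}s total")
```

Accumulating per-iteration spans and flushing one aggregate row at the end keeps the overhead inside the loop close to the cost of the clock reads themselves.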
### Check progress mid-pipeline

For long jobs, you can log a quick checkpoint without waiting for the final table:

```python
session.record("Step 1", elapsed=30.0, count=500, peak_memory=4.2, memory_delta=1.3)
session.log_checkpoint("After Step 1")

# Output:
# [CHECKPOINT: After Step 1] Total elapsed: 30.00s
#   Step 1: 30.00s, 500 items (16.7/s), peak=4.20GB, Δ=+1.30GB
```
### Save results to a file

```python
from pathlib import Path

session.flush_to_file(Path("profiling_results.json"))
```

Writes a JSON file with all the metrics. Useful for comparing runs or feeding into dashboards.
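As a comparing-runs example, here is a hedged sketch that diffs two such JSON dumps. The schema it assumes (a top-level mapping of component name to an object with an `elapsed` field) is a guess for illustration only, since the library's exact output format isn't shown here; the two fixture files are fabricated stand-ins for real runs.

```python
import json
from pathlib import Path

def compare_runs(old_path, new_path):
    """Return per-component elapsed-time deltas between two profiling dumps."""
    old = json.loads(Path(old_path).read_text())
    new = json.loads(Path(new_path).read_text())
    # Only compare components present in both runs.
    return {name: new[name]["elapsed"] - old[name]["elapsed"]
            for name in old.keys() & new.keys()}

# Fabricated fixture files standing in for two real profiling dumps:
Path("run_a.json").write_text(json.dumps({"Load Data": {"elapsed": 2.5}}))
Path("run_b.json").write_text(json.dumps({"Load Data": {"elapsed": 2.0}}))

deltas = compare_runs("run_a.json", "run_b.json")
print(deltas)  # negative values mean the new run got faster
```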
## What's in the box

| Class / Function | What it does |
|---|---|
| `ComponentTimer` | Times a block of code. Tracks memory too (via `psutil`). |
| `AccumulatingTimer` | Ultra-fast timer for tight loops. No memory tracking. |
| `ProfilingSession` | Collects timing from multiple steps. Thread-safe. |
| `profile_operation()` | Shortcut: wraps `ComponentTimer` and auto-records to a session. |
## Dev setup

```bash
uv sync --dev
uv run pytest
```
## License

MIT