Reusable timing, throughput, and memory profiling utilities

blkarbs-profiling

Find out how long your code takes and how much memory it uses. That's it.

Install

uv add blkarbs-profiling

Or with pip:

pip install blkarbs-profiling

Quick start

Time a block of code

from blkarbs_profiling import ComponentTimer

with ComponentTimer() as timer:
    # your slow code here
    data = load_big_dataset()

print(f"Took {timer.elapsed:.2f} seconds")
print(f"Used {timer.memory_delta:.2f} GB of memory")

Track multiple steps and get a summary table

from blkarbs_profiling import ProfilingSession, ComponentTimer

session = ProfilingSession()

# Step 1: Load data
with ComponentTimer() as timer:
    data = load_data()
session.record("Load Data", timer.elapsed, count=len(data),
               memory_delta=timer.memory_delta, peak_memory=timer.peak_memory)

# Step 2: Train model
with ComponentTimer() as timer:
    model = train(data)
session.record("Train Model", timer.elapsed, count=1,
               memory_delta=timer.memory_delta, peak_memory=timer.peak_memory)

# Print a nice table
session.print_summary("My Pipeline")

Output:

========================================================================================================================
                                                    My Pipeline
========================================================================================================================
Component                                      Time      Count      Throughput   Per-Item      Mem Δ       Peak
------------------------------------------------------------------------------------------------------------------------
Load Data                                      2.50s         50    20.0 items/s    50.0ms      0.30G      4.20G
Train Model                                   15.00s          1     0.1 items/s 15000.0ms      1.20G      5.40G
========================================================================================================================
                  TOTAL                        17.50s
========================================================================================================================
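
In this table, Throughput is Count divided by Time and Per-Item is Time divided by Count: for Load Data, 50 items / 2.50 s = 20.0 items/s and 2.50 s / 50 = 50.0 ms per item.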

Use the shortcut context manager

If you don't want to manually call session.record() every time:

from blkarbs_profiling import ProfilingSession, profile_operation

session = ProfilingSession()

with profile_operation("Load Data", session, count=50):
    data = load_data()

with profile_operation("Train Model", session, count=1):
    model = train(data)

session.print_summary("My Pipeline")

Pass session=None and the code still runs — it just doesn't record anything. No need for if/else at your call sites.
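
For example, here is a minimal sketch of a pipeline step that takes an optional session (load_step is a hypothetical wrapper; profile_operation and load_data are the same names used above):

from blkarbs_profiling import ProfilingSession, profile_operation

def load_step(session=None):
    # Runs identically with or without a session; nothing is recorded when it's None.
    with profile_operation("Load Data", session, count=50):
        return load_data()

load_step()                             # plain run, nothing recorded
load_step(session=ProfilingSession())   # same call site, metrics recorded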

Time things inside a loop (fast)

ComponentTimer checks memory on every call, which is slow (~0.5ms). For tight loops, use AccumulatingTimer — it only uses time.perf_counter() (~150 nanoseconds).

from blkarbs_profiling import AccumulatingTimer, ProfilingSession

session = ProfilingSession()

timer = AccumulatingTimer("Process Bars")
for bar in price_bars:
    with timer:
        strategy.on_bar(bar)

# Dump the total into the session
timer.flush(session)

session.print_summary("Backtest")

Check progress mid-pipeline

For long jobs, you can log a quick checkpoint without waiting for the final table:

session.record("Step 1", elapsed=30.0, count=500, peak_memory=4.2, memory_delta=1.3)

session.log_checkpoint("After Step 1")
# Output:
# [CHECKPOINT: After Step 1] Total elapsed: 30.00s
#   Step 1: 30.00s, 500 items (16.7/s), peak=4.20GB, Δ=+1.30GB

Save results to a file

from pathlib import Path

session.flush_to_file(Path("profiling_results.json"))

Writes a JSON file with all the metrics. Useful for comparing runs or feeding into dashboards.
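
To inspect the file later, a generic read-back is enough (a minimal sketch; the exact key layout depends on the installed version, so nothing here assumes a particular schema):

import json
from pathlib import Path

# Load the saved metrics back in and pretty-print them.
results = json.loads(Path("profiling_results.json").read_text())
print(json.dumps(results, indent=2))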

What's in the box

Class / Function        What it does
----------------------  ------------------------------------------------------------
ComponentTimer          Times a block of code. Tracks memory too (via psutil).
AccumulatingTimer       Ultra-fast timer for tight loops. No memory tracking.
ProfilingSession        Collects timing from multiple steps. Thread-safe.
profile_operation()     Shortcut: wraps ComponentTimer + auto-records to a session.
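
Because ProfilingSession is thread-safe, one shared session can collect results from several worker threads. A minimal sketch, assuming hypothetical process_chunk() and chunks placeholders:

from concurrent.futures import ThreadPoolExecutor
from blkarbs_profiling import ProfilingSession, profile_operation

session = ProfilingSession()  # shared across threads

def worker(args):
    idx, chunk = args
    # Each thread records its own step into the shared session.
    with profile_operation(f"Chunk {idx}", session, count=len(chunk)):
        process_chunk(chunk)

with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(worker, enumerate(chunks)))

session.print_summary("Parallel Pipeline")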

Dev setup

uv sync --dev
uv run pytest

License

MIT
