
VCollab Profiler - nested execution time and memory profiling for heavy tasks


Profiler

Nested execution time and memory profiling for heavy tasks.

Overview

vcti-profiler provides a lightweight performance profiler for measuring execution time and memory usage of coarse-grained tasks — file I/O, format conversions, batch processing, and similar operations where millisecond-level granularity is sufficient.

Profiling events are created via a context manager (sync or async) or decorator and can be freely nested. The decorator works on both def and async def functions. Each event logs its start, completion (or failure), elapsed time, peak-memory growth during the event, and RSS delta through a standard Python logger. An optional on_event_end callback delivers a structured EventResult for forwarding to metrics backends (OpenTelemetry, Prometheus, etc.).

The event stack is stored in a contextvars.ContextVar, so nested events across concurrent asyncio tasks or threads remain isolated from one another without any caller setup.
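The isolation property can be illustrated with a self-contained sketch using only the standard library (this mirrors the idea, not the library's actual internals): each asyncio task gets a copy of the current context when it is created, so pushes onto a ContextVar-held stack in one task are invisible to its siblings.

```python
import asyncio
import contextvars

# Illustrative only: a ContextVar-held event stack like the one described above.
_stack: contextvars.ContextVar[tuple[str, ...]] = contextvars.ContextVar(
    "event_stack", default=()
)

async def worker(name: str) -> tuple[str, ...]:
    # Push an event onto this task's view of the stack.
    token = _stack.set(_stack.get() + (name,))
    await asyncio.sleep(0)      # yield so the other task runs interleaved
    snapshot = _stack.get()     # still contains only *our* event
    _stack.reset(token)
    return snapshot

async def main() -> list[tuple[str, ...]]:
    # Each task runs in its own copied context; the stacks never mix.
    return await asyncio.gather(worker("a"), worker("b"))
```

Running `asyncio.run(main())` yields `[("a",), ("b",)]`: even though the two workers interleave, neither sees the other's event.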

This is not a micro-benchmarking tool. For sub-millisecond timing use timeit or cProfile.
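For contrast, a sub-millisecond measurement in the standard library's timeit looks like this:

```python
import timeit

# Total wall-clock seconds for 10,000 evaluations of a tiny expression.
total = timeit.timeit("sum(range(100))", number=10_000)
per_call_us = total / 10_000 * 1e6
print(f"{per_call_us:.2f} microseconds per call")
```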

Installation

pip install vcti-profiler

In requirements.txt

vcti-profiler>=1.0.0

In pyproject.toml dependencies

dependencies = [
    "vcti-profiler>=1.0.0",
]

Quick Start

As a context manager

from vcti.profiler import Profiler

profiler = Profiler()

with profiler.event("Loading dataset"):
    data = load_large_file()

As a decorator

from vcti.profiler import Profiler

profiler = Profiler()

@profiler.profile("Processing batch")
def process_batch(items):
    ...

Nested events

with profiler.event("Conversion"):
    with profiler.event("Read input"):
        ...
    with profiler.event("Write output"):
        ...

Async context manager

async def load():
    async with profiler.aevent("Fetching remote data"):
        await client.fetch()

Decorator on an async function

@profiler.profile("Fetch user")
async def fetch_user(uid: int) -> User:
    return await api.get(uid)

The decorator detects coroutine functions and wraps the awaited coroutine — timing spans the full await-driven lifetime, not just the call that creates the coroutine object.

Custom logger or log level

log_level accepts either an int (logging.DEBUG) or its name ("DEBUG"). Failed events always log at ERROR regardless of this setting.

import logging

profiler = Profiler(
    logger=logging.getLogger("myapp.profile"),
    log_level="DEBUG",
)

Forwarding metrics to an external system

from vcti.profiler import EventResult, Profiler

def emit(result: EventResult) -> None:
    metrics.histogram("task.duration_s", result.elapsed_seconds,
                      tags={"event": result.description})

profiler = Profiler(on_event_end=emit)

Exceptions raised by the callback are caught and logged at WARNING; they never interrupt the profiled code path.
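That guarantee amounts to wrapping every callback invocation; a sketch of the pattern (names are illustrative):

```python
import logging

logger = logging.getLogger("profiler.sketch")

def fire_callback(callback, result) -> None:
    """Invoke an optional callback without letting its errors propagate."""
    if callback is None:
        return
    try:
        callback(result)
    except Exception:
        # The profiled code path must never fail because a metrics sink did.
        logger.warning("on_event_end callback raised", exc_info=True)
```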


See also

Runnable scripts that exercise the full surface:

  • examples/basic_usage.py — sync context manager, nesting, and the @profile decorator.
  • examples/async_tasks.py — concurrent asyncio tasks with nested aevent blocks and @profile on async def.
  • examples/otel_like_callback.py — a thread-safe in-memory metrics sink wired to on_event_end, modeling how an OpenTelemetry / Prometheus / StatsD adapter would be connected.
  • benchmarks/overhead.py — reproducible per-event overhead measurement across baseline, NullHandler, and formatted StreamHandler scenarios. Verify the numbers in docs/design.md against your own hardware.

Platform support

Platform   Elapsed time   RSS delta   Peak-memory delta   Peak-memory source
Windows    ✓              ✓           ✓                   psutil.memory_info().peak_wset
Linux      ✓              ✓           ✓                   resource.getrusage(RUSAGE_SELF)
macOS      ✓              ✓           ✓                   resource.getrusage(RUSAGE_SELF)
Other      ✓              ✓           omitted (None)      —

On unsupported platforms the Peak memory added log line is silently omitted and EventResult.peak_memory_delta_bytes is None; all other metrics work identically.

Python 3.12 / 3.13 / 3.14 are exercised in CI on Ubuntu and Windows.


Dependencies

  • psutil — for memory metrics.
