
Non-intrusive monitoring for Python asyncio.
Detects, pinpoints, and logs blocking IO and CPU calls that freeze your event loop.


Features

  • Production-Safe & Low Overhead: Leverages Python's sys.audit hooks for minimal runtime overhead, making it safe for production use
  • Blocking I/O Detection: Automatically detects blocking I/O calls (file operations, network calls, subprocess, etc.) in your async code
  • Stack Trace Capture: Captures full stack traces to pinpoint exactly where blocking calls originate
  • Severity Scoring: Assigns severity scores to blocking events to help prioritize fixes
  • Callback-based Events: Register callbacks to handle slow task events however you need (logging, metrics, alerts)
  • Dynamic Controls: Enable/disable monitoring at runtime, useful for gradual rollout or debugging sessions
  • Exception Raising: Optionally raise exceptions on high-severity blocking I/O for strict enforcement during development

Why aiocop?

aiocop was built to solve specific production constraints that existing approaches didn't quite fit.

vs. Heavy Monkey-Patching (e.g., blockbuster): Many excellent tools rely on extensive monkey-patching of standard library logic to detect blocking calls. While effective, this approach can sometimes conflict with other libraries that instrument code (like APMs). aiocop prioritizes native sys.audit hooks, using minimal wrappers only where necessary to emit audit events. This significantly reduces the risk of conflicts with other instrumentation tools.

vs. asyncio Debug Mode: Python's built-in debug mode is invaluable during development. However, it is verbose and adds measurable overhead, making it impractical to leave enabled in high-traffic production environments. aiocop is designed to be "always-on" safe.

| Feature | Heavy Monkey-Patching Tools | asyncio Debug Mode | aiocop |
| --- | --- | --- | --- |
| Detection Method | Extensive Wrappers | Event Loop Instrumentation | sys.audit Hooks + Minimal Wrappers |
| Interference Risk | Medium (can conflict with APMs) | None | None |
| Production Overhead | Low-Medium | High | Very Low (~13 μs/task) |
| Stack Traces | Yes | No (timing only) | Yes |
| Runtime Control | Varies | Flag at startup | Dynamic on/off |

Performance

aiocop adds approximately 13 microseconds of overhead per async task:

| Scenario | Overhead | Impact on 50 ms Request |
| --- | --- | --- |
| Pure async (no blocking I/O) | ~1 μs | 0.002% |
| Light blocking (os.stat) | ~14 μs | 0.03% |
| Moderate blocking (file read) | ~12 μs | 0.02% |
| Realistic HTTP handler | ~22 μs | 0.04% |

For typical web applications, this means less than 0.05% overhead.
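The impact column is just overhead divided by the request budget; a quick sanity check using the worst-case figure from the table:

```python
# Worst case from the table: ~22 us of overhead on a 50 ms request budget.
overhead_us = 22
request_ms = 50

impact = overhead_us / (request_ms * 1000)  # convert the budget to microseconds
print(f"{impact:.2%}")  # 0.04%
```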

Run the benchmark yourself: python benchmarks/run_benchmark.py

Installation

pip install aiocop

Quick Start

import aiocop

# Define a callback to handle slow task events
def on_slow_task(event: aiocop.SlowTaskEvent) -> None:
    if event.exceeded_threshold:
        print("SLOW TASK DETECTED!")
        print(f"  Elapsed: {event.elapsed_ms:.2f}ms (threshold: {event.threshold_ms}ms)")
        print(f"  Severity: {event.severity_level} (score: {event.severity_score})")
        print(f"  Reason: {event.reason}")
        for evt in event.blocking_events:
            print(f"    - {evt['event']}")
            print(f"      at {evt['trace']}")

# 1. Patch stdlib functions to emit audit events
aiocop.patch_audit_functions()

# 2. Register the audit hook to capture blocking IO
aiocop.start_blocking_io_detection(trace_depth=20)

# 3. Patch the event loop to detect slow tasks
aiocop.detect_slow_tasks(
    threshold_ms=30,
    on_slow_task=on_slow_task,
)

# 4. Activate monitoring when your app is ready
aiocop.activate()

Usage with ASGI (FastAPI, Starlette, etc.)

# In your ASGI application setup (e.g., main.py or asgi.py)
from contextlib import asynccontextmanager

import aiocop

def setup_monitoring() -> None:
    aiocop.patch_audit_functions()
    aiocop.start_blocking_io_detection(trace_depth=20)
    aiocop.detect_slow_tasks(threshold_ms=30, on_slow_task=log_to_monitoring)

def log_to_monitoring(event: aiocop.SlowTaskEvent) -> None:
    # Send to your monitoring system; `metrics` is a stand-in for your
    # metrics client (Datadog, Prometheus, etc.)
    if event.exceeded_threshold:
        metrics.increment("async.slow_task", tags={
            "severity": event.severity_level,
            "reason": event.reason,
        })
        metrics.gauge("async.slow_task.elapsed_ms", event.elapsed_ms)

# Call setup early in your application lifecycle
setup_monitoring()

# Activate after startup (e.g., in a lifespan handler)
@asynccontextmanager
async def lifespan(app):
    aiocop.activate()  # Start monitoring after startup
    yield
    aiocop.deactivate()

Dynamic Controls

Enable/Disable Monitoring at Runtime

# Pause monitoring
aiocop.deactivate()

# Resume monitoring
aiocop.activate()

# Check if monitoring is active
if aiocop.is_monitoring_active():
    print("Monitoring is running")

Raise Exceptions on High Severity Blocking I/O

Useful during development and testing to catch blocking calls immediately:

# Enable globally for current context
aiocop.enable_raise_on_violations()

# Disable
aiocop.disable_raise_on_violations()

# Or use as a context manager
with aiocop.raise_on_violations():
    await some_operation()  # Will raise HighSeverityBlockingIoException if blocking

CI/CD Integration - Fail Tests on Blocking I/O

Use aiocop in your integration tests to prevent blocking code from being merged:

# conftest.py
import pytest
import aiocop

@pytest.fixture(scope="session", autouse=True)
def setup_aiocop():
    aiocop.patch_audit_functions()
    aiocop.start_blocking_io_detection()
    aiocop.detect_slow_tasks(threshold_ms=50)
    aiocop.activate()

# test_views.py (requires the pytest-asyncio plugin)
@pytest.mark.asyncio
async def test_my_async_endpoint(client):
    # Setup code can have blocking I/O (fixtures, test data, etc.)
    
    # Only the view execution is wrapped - this is what we care about
    with aiocop.raise_on_violations():
        response = await client.get("/api/endpoint")
    
    # Assertions can have blocking I/O too (DB checks, etc.)
    assert response.status_code == 200

We wrap only the async view (not the entire test) because test setup/teardown often has legitimate blocking code. See Integrations for complete examples.

Context Providers

Context providers allow you to capture external context (like tracing spans, request IDs, etc.) that will be passed to your callbacks. The context is captured within the asyncio task's context, ensuring proper propagation of contextvars.

Basic Usage

from typing import Any

def my_context_provider() -> dict[str, Any]:
    # get_current_request_id/get_current_user_id stand in for your app's
    # request-scoped lookups
    return {
        "request_id": get_current_request_id(),
        "user_id": get_current_user_id(),
    }

aiocop.register_context_provider(my_context_provider)

def on_slow_task(event: aiocop.SlowTaskEvent) -> None:
    request_id = event.context.get("request_id")
    print(f"Slow task in request {request_id}: {event.elapsed_ms}ms")

Integration with Datadog

from ddtrace import tracer
from typing import Any

def datadog_context_provider() -> dict[str, Any]:
    return {"datadog_span": tracer.current_span()}

aiocop.register_context_provider(datadog_context_provider)

def log_to_datadog(event: aiocop.SlowTaskEvent) -> None:
    if not event.exceeded_threshold:
        return

    span = event.context.get("datadog_span")
    if span is None:
        return

    span.set_tag("slow_task.detected", True)
    span.set_metric("slow_task.elapsed_ms", event.elapsed_ms)
    span.set_metric("slow_task.severity_score", event.severity_score)
    span.set_tag("slow_task.severity_level", event.severity_level)
    span.set_tag("slow_task.reason", event.reason)

aiocop.detect_slow_tasks(threshold_ms=30, on_slow_task=log_to_datadog)

Why Context Providers?

When aiocop detects a slow task, the callback is invoked after the task completes. By that time, the original context (like the active tracing span) might no longer be accessible via standard context lookups.

Context providers solve this by capturing the context at the start of each task execution, within the task's own contextvars context. This ensures that:

  1. Tracing spans are captured before they're closed
  2. Request-scoped data is available to callbacks
  3. Any contextvar-based state is properly preserved
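The mechanism can be seen without aiocop at all: a value set in a contextvar inside a task is visible to code running in that task's context, but not to a lookup made from the outer context after the task completes. The names here (`request_id`, `handler`) are illustrative, not part of aiocop's API.

```python
import asyncio
import contextvars

request_id: contextvars.ContextVar[str] = contextvars.ContextVar(
    "request_id", default="<unset>"
)
captured = {}

async def handler() -> None:
    request_id.set("req-42")
    # A provider invoked here, inside the task's own context, sees the value.
    captured["inside_task"] = request_id.get()

async def main() -> None:
    await asyncio.create_task(handler())
    # After the task completes, the outer context never saw the value.
    captured["after_task"] = request_id.get()

asyncio.run(main())
print(captured)  # {'inside_task': 'req-42', 'after_task': '<unset>'}
```

This is why aiocop calls providers at task start rather than resolving context when the callback fires.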

Managing Context Providers

# Register a provider
aiocop.register_context_provider(my_provider)

# Unregister a specific provider
aiocop.unregister_context_provider(my_provider)

# Clear all providers
aiocop.clear_context_providers()

Context providers are completely optional. If none are registered, event.context will simply be an empty dict.

Event Types

SlowTaskEvent

Emitted when either:

  • Blocking I/O is detected (reason="io_blocking") - regardless of whether the task exceeded the threshold
  • Task exceeds threshold but no blocking I/O detected (reason="cpu_blocking") - indicates CPU-bound blocking
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class SlowTaskEvent:
    elapsed_ms: float        # How long the task took
    threshold_ms: float      # Configured threshold
    exceeded_threshold: bool # True if elapsed > threshold
    severity_score: int      # Aggregate severity (sum of event weights), 0 for cpu_blocking
    severity_level: str      # "low", "medium", or "high"
    reason: str              # "io_blocking" or "cpu_blocking"
    blocking_events: list[BlockingEventInfo]  # List of detected events (empty for cpu_blocking)
    context: dict[str, Any]  # Custom context from context providers (default: {})

BlockingEventInfo

Information about each blocking event:

from typing import TypedDict

class BlockingEventInfo(TypedDict):
    event: str        # e.g., "open(/path/to/file)"
    trace: str        # Stack trace
    entry_point: str  # First frame in the trace
    severity: int     # Weight of this event

Severity Weights

Events are classified by severity:

| Weight | Value | Examples |
| --- | --- | --- |
| WEIGHT_HEAVY | 50 | socket.connect, subprocess.Popen, time.sleep, DNS lookups |
| WEIGHT_MODERATE | 10 | open(), file mutations, os.listdir |
| WEIGHT_LIGHT | 1 | os.stat, fcntl.flock, os.kill |
| WEIGHT_TRIVIAL | 0 | os.getcwd, os.path.abspath |

Severity levels are determined by aggregate score:

  • high: score >= 50
  • medium: score >= 10
  • low: score < 10
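These cutoffs amount to a simple mapping; a sketch that mirrors the documented thresholds (what `get_severity_level_from_score` in the API reference should return, not aiocop's actual implementation):

```python
def severity_level_from_score(score: int) -> str:
    """Map an aggregate severity score to a level using the documented cutoffs."""
    if score >= 50:
        return "high"
    if score >= 10:
        return "medium"
    return "low"

# One open() (weight 10) plus one os.stat (weight 1) -> score 11 -> "medium"
print(severity_level_from_score(11))  # medium
```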

API Reference

Setup Functions

  • patch_audit_functions() - Patches stdlib functions to emit audit events
  • start_blocking_io_detection(trace_depth=20) - Registers the audit hook
  • detect_slow_tasks(threshold_ms=30, on_slow_task=None) - Patches the event loop
  • activate() / deactivate() - Control monitoring at runtime

Callback Management

  • register_slow_task_callback(callback) - Add a callback
  • unregister_slow_task_callback(callback) - Remove a callback
  • clear_slow_task_callbacks() - Remove all callbacks

Context Provider Management

  • register_context_provider(provider) - Add a context provider
  • unregister_context_provider(provider) - Remove a context provider
  • clear_context_providers() - Remove all context providers

Raise-on-Violations Controls

  • enable_raise_on_violations() - Enable for current context
  • disable_raise_on_violations() - Disable for current context
  • is_raise_on_violations_enabled() - Check current state
  • raise_on_violations() - Context manager

Utility Functions

  • calculate_io_severity_score(events) - Calculate severity from events
  • get_severity_level_from_score(score) - Get "low"/"medium"/"high"
  • format_blocking_event(raw_event) - Format a raw event
  • get_blocking_events_dict() - Get all monitored events with weights
  • get_patched_functions() - Get list of patched functions
