
llamonitor-async 🦙📊


Lightweight async monitoring for LLM applications - capacity-based tracking with pluggable storage.

A modern alternative to Langfuse/LangSmith, focused on text/image capacity measurement (rather than tokens), an async-first architecture, and maximum extensibility.

Design Philosophy: "Leave Space for Air Conditioning"

Every component has clear extension points for future enhancements. Whether you need custom metric collectors, new storage backends, or specialized aggregation strategies, the architecture supports growth without breaking existing code.

Features

  • Async-First: Non-blocking metric collection with buffered batch writes
  • Hierarchical Tracking: Automatic parent-child relationships across nested operations
  • Flexible Metrics: Measure text (characters, words, bytes) and images (count, pixels, file size)
  • Pluggable Storage: Local Parquet, PostgreSQL, MySQL (easily add more)
  • Simple API: Single decorator for most use cases
  • Production-Ready: Error handling, retries, graceful shutdown
  • Extensible: Custom collectors, backends, and aggregation strategies
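The capacity metrics listed above (characters, words, bytes for text; count, pixels, file size for images) are plain measurements of payload size. A minimal sketch of the text side, with illustrative field names rather than the library's actual schema:

```python
def text_capacity_metrics(text: str) -> dict:
    """Capacity-style measurements for a text payload (illustrative field names)."""
    return {
        "char_count": len(text),                    # characters
        "word_count": len(text.split()),            # whitespace-delimited words
        "byte_count": len(text.encode("utf-8")),    # bytes on the wire
    }

print(text_capacity_metrics("Generated response..."))
```

Unlike token counts, these measurements are model-agnostic and need no tokenizer dependency.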

Quick Start

Installation

# Basic installation
pip install llamonitor-async

# With storage backends
pip install llamonitor-async[parquet]    # For local Parquet files
pip install llamonitor-async[postgres]   # For PostgreSQL
pip install llamonitor-async[all]        # Everything

Basic Usage

import asyncio
from llamonitor import monitor_llm, initialize_monitoring, MonitorConfig

@monitor_llm(
    operation_name="generate_text",
    measure_text=True,  # Collect all text metrics
    custom_attributes={"model": "gpt-4"}
)
async def my_llm_function(prompt: str):
    # Your LLM call here
    return {"text": "Generated response..."}

async def main():
    # Initialize monitoring
    await initialize_monitoring(MonitorConfig.for_local_dev())

    # Use your decorated functions
    result = await my_llm_function("Hello!")

    # Events are automatically tracked and written asynchronously

if __name__ == "__main__":
    asyncio.run(main())

Architecture

┌─────────────────────────────────────────────────────────────┐
│                    Your Application                         │
│  @monitor_llm decorated functions/methods                   │
└────────────────────┬────────────────────────────────────────┘
                     │ (async, non-blocking)
                     ▼
┌─────────────────────────────────────────────────────────────┐
│              Instrumentation Layer                          │
│  • MetricCollectors (text, image, custom)                   │
│  • Context Management (session/trace/span)                  │
│  • Decorator Logic                                          │
└────────────────────┬────────────────────────────────────────┘
                     │
                     ▼
┌─────────────────────────────────────────────────────────────┐
│               Transport Layer                               │
│  • Async Queue (buffering)                                  │
│  • Background Worker (batching)                             │
│  • Retry Logic                                              │
└────────────────────┬────────────────────────────────────────┘
                     │
                     ▼
┌─────────────────────────────────────────────────────────────┐
│              Storage Backend                                │
│  • Parquet (local files)                                    │
│  • PostgreSQL (production)                                  │
│  • Custom backends                                          │
└─────────────────────────────────────────────────────────────┘
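The transport layer in the diagram — an async queue drained by a background worker that flushes in batches — can be sketched in plain asyncio. The class and its internals here are illustrative, not the library's implementation; the `flushed` list stands in for a storage backend:

```python
import asyncio

class BatchWriter:
    """Illustrative sketch: buffer events on a queue, flush them in batches."""

    def __init__(self, batch_size: int = 100, flush_interval: float = 5.0):
        self.queue: asyncio.Queue = asyncio.Queue()
        self.batch_size = batch_size
        self.flush_interval = flush_interval
        self.flushed: list[list[dict]] = []  # stands in for a storage backend

    def enqueue(self, event: dict) -> None:
        self.queue.put_nowait(event)  # non-blocking from the caller's view

    async def run(self, stop: asyncio.Event) -> None:
        batch: list[dict] = []
        while not (stop.is_set() and self.queue.empty()):
            try:
                event = await asyncio.wait_for(self.queue.get(),
                                               timeout=self.flush_interval)
                batch.append(event)
            except asyncio.TimeoutError:
                pass  # flush interval elapsed with nothing new
            if batch and (len(batch) >= self.batch_size or self.queue.empty()):
                self.flushed.append(batch)  # one batched write
                batch = []

async def demo() -> int:
    writer = BatchWriter(batch_size=2, flush_interval=0.05)
    stop = asyncio.Event()
    worker = asyncio.create_task(writer.run(stop))
    for i in range(5):
        writer.enqueue({"event": i})   # caller never awaits storage
    await asyncio.sleep(0.2)
    stop.set()                         # graceful shutdown: drain, then exit
    await worker
    return sum(len(b) for b in writer.flushed)

print(asyncio.run(demo()))
```

The decorated application code only ever pays the cost of `put_nowait`; batching, retries, and I/O happen on the worker task.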

Configuration

Environment Variables

LLMOPS_BACKEND=postgres
LLMOPS_CONNECTION_STRING=postgresql://user:pass@localhost/monitoring
LLMOPS_BATCH_SIZE=100
LLMOPS_FLUSH_INTERVAL_SECONDS=5.0
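A loader honoring these variables might look like the following. The variable names come from the list above; the helper itself (and its defaults) is a standalone sketch, not a documented library entry point:

```python
import os

def load_monitoring_env(environ=None) -> dict:
    """Read LLMOPS_* environment variables into a plain config dict (illustrative)."""
    env = environ if environ is not None else os.environ
    return {
        "backend": env.get("LLMOPS_BACKEND", "parquet"),
        "connection_string": env.get("LLMOPS_CONNECTION_STRING"),
        "batch_size": int(env.get("LLMOPS_BATCH_SIZE", "100")),
        "flush_interval_seconds": float(env.get("LLMOPS_FLUSH_INTERVAL_SECONDS", "5.0")),
    }

cfg = load_monitoring_env({
    "LLMOPS_BACKEND": "postgres",
    "LLMOPS_CONNECTION_STRING": "postgresql://user:pass@localhost/monitoring",
})
print(cfg["backend"], cfg["batch_size"])
```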

Programmatic Configuration

from llmops_monitoring import MonitorConfig
from llmops_monitoring.schema.config import StorageConfig

# Local development
config = MonitorConfig.for_local_dev()

# Production
config = MonitorConfig.for_production(
    "postgresql://user:pass@host:5432/monitoring"
)

# Custom
config = MonitorConfig(
    storage=StorageConfig(
        backend="parquet",
        output_dir="./my_data",
        batch_size=500,
        flush_interval_seconds=10.0
    ),
    max_queue_size=50000
)

await initialize_monitoring(config)  # call from inside an async function

Examples

Hierarchical Tracking (Agentic Workflows)

from llmops_monitoring.instrumentation.context import monitoring_session, monitoring_trace

@monitor_llm("orchestrator", operation_type="agent_workflow")
async def run_workflow(query: str):
    # All nested calls automatically tracked
    intent = await classify_intent(query)      # Child span
    knowledge = await search_kb(intent)        # Child span
    response = await generate_response(knowledge)  # Child span
    return response

@monitor_llm("classify_intent")
async def classify_intent(query: str):
    # Automatically linked to parent
    return await llm.classify(query)

# Use with session context (from inside an async function)
async def handle_conversation():
    with monitoring_session("user-123"):
        with monitoring_trace("conversation-1"):
            return await run_workflow("What is the weather?")
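The automatic parent-child linking shown above is typically built on `contextvars`, which flow across `await` boundaries. Here is a minimal sketch of the mechanism — not the library's actual implementation, and the names are ours:

```python
import asyncio
import contextvars
import uuid

# The current span id travels with the async call chain.
_current_span: contextvars.ContextVar = contextvars.ContextVar("span", default=None)
events: list[dict] = []

async def span(name: str, coro_fn, *args):
    """Run coro_fn inside a new span whose parent is the ambient span."""
    span_id = uuid.uuid4().hex[:8]
    parent_id = _current_span.get()          # None at the root
    token = _current_span.set(span_id)       # children will see this id
    try:
        result = await coro_fn(*args)
    finally:
        _current_span.reset(token)
    events.append({"name": name, "span_id": span_id, "parent_id": parent_id})
    return result

async def classify(q): return "weather"
async def answer(intent): return f"answer about {intent}"

async def workflow(q):
    intent = await span("classify_intent", classify, q)      # child span
    return await span("generate_response", answer, intent)   # child span

asyncio.run(span("orchestrator", workflow, "What is the weather?"))
roots = [e for e in events if e["parent_id"] is None]
print(len(events), roots[0]["name"])
```

Because the span id is a context variable rather than a global, concurrent requests keep their hierarchies separate without any explicit plumbing.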

Custom Metrics

from llmops_monitoring.instrumentation.base import MetricCollector, CollectorRegistry

class CostCollector(MetricCollector):
    def collect(self, result, args, kwargs, context):
        # Your cost calculation logic
        return {"custom_attributes": {"cost_usd": 0.002}}

    @property
    def metric_type(self) -> str:
        return "cost"

# Register
CollectorRegistry.register("cost", CostCollector)

# Use
@monitor_llm(collectors=["cost"])
async def my_function():
    ...

Visualization with Grafana

Start the monitoring stack:

docker-compose up -d

Access Grafana at http://localhost:3000 (admin/admin)

The dashboard includes:

  • Total events and volume metrics
  • Time-series charts by operation
  • Session analysis
  • Error tracking
  • Hierarchical trace viewer

Storage Backends

Parquet (Local Development)

config = MonitorConfig(
    storage=StorageConfig(
        backend="parquet",
        output_dir="./monitoring_data",
        partition_by="date"  # or "session_id"
    )
)

Files are written as ./monitoring_data/YYYY-MM-DD/events_*.parquet
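Given that layout, a day's partition can be located with a simple glob for ad-hoc analysis. The helper below is a sketch (its name is ours); actually reading the files back needs pyarrow or pandas, omitted here:

```python
import tempfile
from datetime import date
from pathlib import Path

def partition_files(output_dir: str, day: str = None) -> list:
    """List events_*.parquet files in one date partition of the layout above."""
    partition = Path(output_dir) / (day or date.today().isoformat())
    return sorted(partition.glob("events_*.parquet"))

# Demo against a throwaway directory mimicking the on-disk layout.
base = tempfile.mkdtemp()
day_dir = Path(base) / "2025-01-01"
day_dir.mkdir()
(day_dir / "events_0001.parquet").touch()
(day_dir / "events_0002.parquet").touch()
print(len(partition_files(base, "2025-01-01")))
```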

PostgreSQL (Production)

config = MonitorConfig(
    storage=StorageConfig(
        backend="postgres",
        connection_string="postgresql://user:pass@host:5432/db",
        table_name="metric_events",
        pool_size=20
    )
)

Tables are created automatically with proper indexes.

Extension Points

1. Custom Metric Collectors

Implement MetricCollector to add new metric types:

class MyCollector(MetricCollector):
    def collect(self, result, args, kwargs, context):
        # Extract metrics
        return {"custom_attributes": {...}}

    @property
    def metric_type(self) -> str:
        return "my_metric"

2. Custom Storage Backends

Implement StorageBackend for new storage systems:

class RedisBackend(StorageBackend):
    async def initialize(self): ...
    async def write_event(self, event): ...
    async def write_batch(self, events): ...
    async def close(self): ...
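As a concrete illustration of that interface, here is a toy JSON-lines backend. The four method names mirror the stub above; everything else is an assumption, and the `StorageBackend` base class is dropped so the sketch runs standalone:

```python
import asyncio
import json
import tempfile
from pathlib import Path

class JsonlBackend:
    """Toy backend: append one JSON object per event to a local file."""

    def __init__(self, path: str):
        self.path = Path(path)

    async def initialize(self):
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.path.touch()

    async def write_event(self, event: dict):
        with self.path.open("a") as f:
            f.write(json.dumps(event) + "\n")

    async def write_batch(self, events: list):
        with self.path.open("a") as f:
            f.writelines(json.dumps(e) + "\n" for e in events)

    async def close(self):
        pass  # nothing to release for a plain file

async def demo() -> int:
    backend = JsonlBackend(tempfile.mkdtemp() + "/events.jsonl")
    await backend.initialize()
    await backend.write_batch([{"op": "generate_text"}, {"op": "classify_intent"}])
    await backend.close()
    return backend.path.read_text().count("\n")

print(asyncio.run(demo()))
```

A real backend would do the file I/O off the event loop (or use an async driver), but the lifecycle — initialize, write, close — is the same shape.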

3. Custom Transport Mechanisms

Replace the async queue with Kafka, Redis, etc. by modifying MonitoringWriter.

Performance

  • Overhead: < 1% for typical workloads
  • Async writes: No blocking of application code
  • Batching: Configurable batch sizes for efficiency
  • Buffering: Handles bursts without data loss
  • Graceful shutdown: Flushes all pending events

Download Statistics

llamonitor-async includes comprehensive download tracking:

  • Real-time badges showing current download counts (see badges above)
  • Automated collection via GitHub Actions (daily)
  • Manual analysis tools with Python scripts

See DOWNLOAD_TRACKING.md for full documentation.

Quick stats check:

pip install pypistats pandas
python scripts/fetch_download_stats.py

Development

# Clone repository
git clone https://github.com/guybass/LLMOps_monitoring_async-
cd llmops-monitoring

# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run examples
python llmops_monitoring/examples/01_simple_example.py
python llmops_monitoring/examples/02_agentic_workflow.py
python llmops_monitoring/examples/03_custom_collector.py

# Start monitoring stack
docker-compose up -d

Roadmap

  • MySQL backend implementation
  • ClickHouse backend for analytics
  • GraphQL backend support
  • Real-time streaming with WebSockets
  • Built-in cost calculation with pricing data
  • ML-based anomaly detection
  • Aggregation server with REST API
  • Prometheus exporter
  • Datadog integration

Contributing

Contributions are welcome! Areas of focus:

  1. Storage Backends: MySQL, ClickHouse, MongoDB, S3, etc.
  2. Collectors: Cost tracking, latency patterns, cache hit rates
  3. Visualization: New Grafana dashboards, custom analytics
  4. Documentation: Tutorials, use cases, best practices

See CONTRIBUTING.md for guidelines.

License

Apache License 2.0 - see LICENSE for details.

Acknowledgments

This project synthesizes ideas from:

  • OpenTelemetry distributed tracing standards
  • Langfuse and LangSmith observability platforms
  • Academic research on LLM agent monitoring (AgentOps, LumiMAS)
  • Production lessons from the LLM community

Citation

If you use this in research, please cite:

@software{llamonitor_async,
  title = {llamonitor-async: Lightweight Async Monitoring for LLM Applications},
  author = {Guy Bass},
  year = {2025},
  url = {https://github.com/guybass/LLMOps_monitoring_async-}
}

Built with the principle of "leaving space for air conditioning" - designed for the features you'll need tomorrow.

Download files

Source distribution: llamonitor_async-0.1.1.tar.gz (41.5 kB)
Built distribution: llamonitor_async-0.1.1-py3-none-any.whl (37.1 kB)

File details

llamonitor_async-0.1.1.tar.gz

  • Size: 41.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.9

Hashes for llamonitor_async-0.1.1.tar.gz:

  • SHA256: a45bde7dfbc7712f00a1d1a2b3277b98209f7e91004ef4d7ea6a1b5acbb88217
  • MD5: aacf9781d3038be9a025dd070dc8d54f
  • BLAKE2b-256: dd22d964c7790df3d43b582698f4e113a93f4b6b85c4ad542d4910285a99bd50

File details

llamonitor_async-0.1.1-py3-none-any.whl

Hashes for llamonitor_async-0.1.1-py3-none-any.whl:

  • SHA256: 1c2b28da659e4f780064ceec640fb09f5af189312e26c420e188483cdbb5f772
  • MD5: 6466c41bb1bc6d8747f264f7bdc99467
  • BLAKE2b-256: 082b9d74fcd9ff0a550a4fed834d1ce280af326b8f53c7c48fe9434f50d6563e
