
AgentOBS — reference implementation of RFC-0001 AGENTOBS, the Observability Schema Standard for Agentic AI Systems


AgentOBS

The reference implementation of the AGENTOBS Standard.
A lightweight Python SDK that gives your AI applications a common, structured way to record, sign, redact, and export events — with zero mandatory dependencies.

AGENTOBS (RFC-0001) is the open event-schema standard for observability of agentic AI systems.

Python 3.9+ · AGENTOBS RFC-0001 · 93% test coverage · 3,032 tests · Version 1.0.8 · Zero dependencies · MIT license


What is this?

AgentOBS (agentobs) is the reference implementation of RFC-0001 AGENTOBS — the open event-schema standard for observability of agentic AI systems.

AGENTOBS defines a structured, typed event envelope that every LLM-adjacent instrumentation tool can emit and every observability backend can consume. It covers the full lifecycle: event envelopes, agent span hierarchies, token and cost models, HMAC audit chains, PII redaction, OTLP-compatible export, and schema governance.

Think of AgentOBS as a universal receipt format for your AI application. Every time your app calls a language model, makes a decision, redacts private data, or checks a guardrail — this library gives that action a consistent, structured record that any tool in your stack can read.


Why use it?

Without a shared schema, every team invents their own log format. With agentobs (and the AGENTOBS standard it implements), your logs, dashboards, compliance reports, and monitoring tools all speak the same language — automatically.

Without AgentOBS | With AgentOBS
Each service logs events differently | Every event follows the same structure
Hard to audit who saw what data | Built-in HMAC signing creates a tamper-proof audit trail
PII scattered across logs | First-class PII redaction before data leaves your app
Vendor-specific observability | OpenTelemetry-compatible — works with any monitoring stack
No way to check compatibility | CLI + programmatic compliance checks in CI
Complex integration glue | Zero required dependencies — just pip install

Install

pip install agentobs

Requires Python 3.9 or later. No other packages are required for core usage.

Note: the PyPI distribution and the Python import share the same name:

import agentobs

Optional extras

pip install "agentobs[jsonschema]"   # strict JSON Schema validation
pip install "agentobs[openai]"       # OpenAI auto-instrumentation (patch/unpatch)
pip install "agentobs[http]"         # Webhook + OTLP export
pip install "agentobs[pydantic]"     # Pydantic v2 model layer
pip install "agentobs[otel]"         # OpenTelemetry SDK integration
pip install "agentobs[kafka]"        # EventStream.from_kafka() via kafka-python
pip install "agentobs[langchain]"    # LangChain callback handler
pip install "agentobs[llamaindex]"   # LlamaIndex event handler
pip install "agentobs[crewai]"       # CrewAI callback handler
pip install "agentobs[datadog]"      # Datadog APM + metrics exporter
pip install "agentobs[all]"          # everything above

Five-minute tour

1 — Trace an LLM call with the span API

import agentobs

agentobs.configure(exporter="console", service_name="my-agent")

with agentobs.span("call-llm") as span:
    span.set_model(model="gpt-4o", system="openai")
    result = call_llm(prompt)                          # your LLM call here
    span.set_token_usage(input=512, output=128, total=640)
    span.set_status("ok")

The context manager automatically records start/end times, parent-child span relationships, and emits a structured event when it exits.


1b — Use the high-level Trace API (new in 2.0)

import agentobs

agentobs.configure(exporter="console", service_name="my-agent")

with agentobs.start_trace("research-agent") as trace:
    with trace.llm_call("gpt-4o", temperature=0.7) as span:
        result = call_llm(prompt)
        span.set_token_usage(input=512, output=200, total=712)
        span.set_status("ok")
        span.add_event("tool_selected", {"name": "web_search"})

    with trace.tool_call("web_search") as span:
        output = run_search(query)
        span.set_status("ok")

# Inspect the trace in the terminal
trace.print_tree()
# ─ Agent Run: research-agent  [1.2s]
#  ├─ LLM Call: gpt-4o  [0.8s]  in=512 out=200 tokens  $0.0034
#  └─ Tool Call: web_search  [0.4s]  ok

print(trace.summary())
# {'trace_id': '...', 'agent_name': 'research-agent', 'span_count': 3, ...}

The Trace object works with async with too:

async with agentobs.start_trace("async-agent") as trace:
    async with trace.llm_call("gpt-4o") as span:
        response = await async_call_llm(prompt)
        span.set_status("ok")

1c — Auto-instrument the OpenAI client (zero boilerplate)

from agentobs.integrations import openai as openai_integration
import openai, agentobs

# One-time setup: patch the OpenAI SDK
openai_integration.patch()

agentobs.configure(exporter="console", service_name="my-agent")

client = openai.OpenAI()

with agentobs.tracer.span("chat-gpt4o") as span:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
    )
    # span.token_usage, span.cost, and span.model are now populated automatically

patch() wraps every client.chat.completions.create() call (sync and async) so that token_usage, cost, and model are auto-populated on the active span from the API response — no per-call boilerplate required.

# Restore original behaviour when you're done
openai_integration.unpatch()
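If you are curious how a patch()/unpatch() pair can work, the standard monkey-patching pattern is easy to sketch with the stdlib alone. Everything below (the Completions stand-in, the captured list) is a hypothetical illustration, not agentobs code:

```python
import functools

class Completions:
    """Stand-in for a third-party client class being instrumented."""
    def create(self, model):
        return {"model": model, "usage": {"total_tokens": 42}}

_original = None   # holds the unwrapped method while patched
captured = []      # where the wrapper records usage data

def patch():
    global _original
    if _original is not None:
        return  # already patched; don't double-wrap
    _original = Completions.create

    @functools.wraps(_original)
    def wrapper(self, model):
        resp = _original(self, model)
        captured.append(resp["usage"])  # a real SDK would feed the active span
        return resp

    Completions.create = wrapper

def unpatch():
    global _original
    if _original is not None:
        Completions.create = _original  # restore original behaviour
        _original = None

patch()
Completions().create("gpt-4o")
unpatch()
assert captured == [{"total_tokens": 42}]
```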

2 — Record a raw event

from agentobs import Event, EventType, Tags

event = Event(
    event_type=EventType.TRACE_SPAN_COMPLETED,
    source="my-app@1.0.0",          # who emitted this
    org_id="org_acme",              # your organisation
    payload={
        "model": "gpt-4o",
        "prompt_tokens": 512,
        "completion_tokens": 128,
        "latency_ms": 340.5,
    },
    tags=Tags(env="production"),
)

event.validate()         # raises if structure is invalid
print(event.to_json())   # compact JSON string, ready to store or ship

Every event gets a ULID (a time-sortable unique ID) automatically — no need to generate one yourself.


3 — Redact private information before logging

from agentobs import Event, EventType
from agentobs.redact import Redactable, RedactionPolicy, Sensitivity

policy = RedactionPolicy(min_sensitivity=Sensitivity.PII, redacted_by="policy:gdpr-v1")

# Wrap any string that might contain PII
event = Event(
    event_type=EventType.TRACE_SPAN_COMPLETED,
    source="my-app@1.0.0",
    payload={"prompt": Redactable("Call me at 555-867-5309", Sensitivity.PII)},
)
result = policy.apply(event)
# result.event.payload["prompt"] -> "[REDACTED by policy:gdpr-v1]"

Redactable is a string wrapper. You mark fields as sensitive at the point where they are created; the policy decides what to remove before the event is written to any log.
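Conceptually, a sensitivity-tagged string wrapper takes only a few lines of stdlib Python. The names below mirror the API shown above, but the implementation is an illustration, not the library's actual internals:

```python
import enum

class Sensitivity(enum.IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    PII = 2

class Redactable(str):
    """A str subclass that carries a sensitivity tag alongside its value."""
    def __new__(cls, value, sensitivity):
        obj = super().__new__(cls, value)
        obj.sensitivity = sensitivity
        return obj

def apply_policy(payload, min_sensitivity, redacted_by):
    """Replace any Redactable at or above the threshold before logging."""
    return {
        k: f"[REDACTED by {redacted_by}]"
        if isinstance(v, Redactable) and v.sensitivity >= min_sensitivity
        else v
        for k, v in payload.items()
    }

out = apply_policy(
    {"prompt": Redactable("Call me at 555-867-5309", Sensitivity.PII),
     "model": "gpt-4o"},
    min_sensitivity=Sensitivity.PII,
    redacted_by="policy:gdpr-v1",
)
assert out == {"prompt": "[REDACTED by policy:gdpr-v1]", "model": "gpt-4o"}
```

Because the wrapper is still a real str, untagged code paths keep working; only the policy layer treats it specially.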

Tip — auto-redact every span: pass redaction_policy=policy to agentobs.configure() and the policy runs automatically inside _dispatch() before any exporter sees the event.


4 — Sign events for tamper-proof audit trails

from agentobs.signing import sign, verify_chain, AuditStream

# Sign a single event
signed = sign(event, org_secret="my-org-secret")

# Or build a chain — every event references the one before it,
# so any gap or modification is immediately detectable.
stream = AuditStream(org_secret="my-org-secret")
for e in events:
    stream.append(e)

result = verify_chain(stream.events, org_secret="my-org-secret")

This is the same principle used in certificate chains and blockchain — each event's signature covers the previous event's signature, so you cannot alter history without breaking the chain.
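The chaining principle itself is simple enough to demonstrate with the stdlib hmac module. The sketch below illustrates the idea only; agentobs's actual canonicalisation and signature format may differ:

```python
import hashlib
import hmac
import json

def sign_chain(events, org_secret: bytes):
    """Each signature covers the event plus the previous signature."""
    prev_sig = b""
    signed = []
    for e in events:
        msg = json.dumps(e, sort_keys=True).encode() + prev_sig
        sig = hmac.new(org_secret, msg, hashlib.sha256).hexdigest()
        signed.append({"event": e, "sig": sig})
        prev_sig = sig.encode()
    return signed

def verify(signed, org_secret: bytes) -> bool:
    """Recompute every link; any altered event breaks all later signatures."""
    prev_sig = b""
    for item in signed:
        msg = json.dumps(item["event"], sort_keys=True).encode() + prev_sig
        expected = hmac.new(org_secret, msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, item["sig"]):
            return False
        prev_sig = item["sig"].encode()
    return True

chain = sign_chain([{"n": 1}, {"n": 2}, {"n": 3}], b"my-org-secret")
assert verify(chain, b"my-org-secret")
chain[1]["event"]["n"] = 99                 # tamper with history...
assert not verify(chain, b"my-org-secret")  # ...and the chain breaks
```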

Tip — auto-sign every span: pass signing_key="your-secret" to agentobs.configure() and every emitted span is signed and chained automatically, with no per-event boilerplate.


5 — Export to anywhere

from agentobs.stream import EventStream
from agentobs.export.jsonl import JSONLExporter
from agentobs.export.webhook import WebhookExporter
from agentobs.export.otlp import OTLPExporter
from agentobs.export.datadog import DatadogExporter
from agentobs.export.grafana import GrafanaLokiExporter

stream = EventStream(events)

# Write everything to a local file
await stream.drain(JSONLExporter("events.jsonl"))

# Ship to your OpenTelemetry collector
await stream.drain(OTLPExporter("http://otel-collector:4318/v1/traces"))

# Send to Datadog APM (traces + metrics)
await stream.drain(DatadogExporter(
    service="my-app",
    env="production",
    agent_url="http://dd-agent:8126",
    api_key="your-dd-api-key",
))

# Push to Grafana Loki
await stream.drain(GrafanaLokiExporter(
    url="http://loki:3100",
    labels={"app": "my-app", "env": "production"},
))

# Fan-out: guard-blocked events -> Slack webhook
await stream.route(
    WebhookExporter("https://hooks.slack.com/your-webhook"),
    predicate=lambda e: e.event_type == "llm.guard.output.blocked",
)

Kafka source

from agentobs.stream import EventStream

# Drain a Kafka topic directly into an EventStream
stream = EventStream.from_kafka(
    topic="llm-events",
    bootstrap_servers="kafka:9092",
    group_id="analytics",
    max_messages=5000,
)
await stream.drain(exporter)

6 — Sync exporters for non-async workflows

from agentobs.exporters.jsonl import SyncJSONLExporter
from agentobs.exporters.console import SyncConsoleExporter

# Log all events to a JSONL file synchronously
exporter = SyncJSONLExporter("events.jsonl")
exporter.export(event)
exporter.close()

# Pretty-print events to the terminal during development
console = SyncConsoleExporter()
console.export(event)

7b — Register lifecycle hooks (new in 2.0)

import agentobs

@agentobs.hooks.on_llm_call
def log_llm(span):
    print(f"LLM called: {span.model}  temp={span.temperature}")

@agentobs.hooks.on_tool_call
def log_tool(span):
    print(f"Tool called: {span.name}")

# Hooks fire automatically for every span of the matching type

7c — Aggregate metrics from a trace file (new in 2.0)

import agentobs
from agentobs.stream import EventStream

events = list(EventStream.from_file("events.jsonl"))
summary = agentobs.metrics.aggregate(events)

print(f"Traces:  {summary.trace_count}")
print(f"Success: {summary.agent_success_rate:.0%}")
print(f"p95 LLM: {summary.llm_latency_ms.p95:.0f} ms")
print(f"Cost:    ${summary.total_cost_usd:.4f}")

7d — Visualize a Gantt timeline (new in 2.0)

from agentobs.debug import visualize

html = visualize(trace.spans, path="trace.html")
# Opens trace.html in a browser — self-contained, no external deps

8a — Semantic cache — skip redundant LLM calls

from agentobs.cache import SemanticCache, InMemoryBackend

cache = SemanticCache(
    backend=InMemoryBackend(max_size=1024),
    similarity_threshold=0.92,   # cosine similarity cutoff
    ttl_seconds=3600,
    namespace="responses",
    emit_events=True,            # emits llm.cache.hit/miss/written events
)

# Or use the @cached decorator on any async function
from agentobs.cache import cached

@cached(threshold=0.92, ttl=3600, emit_events=True)
async def call_llm(prompt: str) -> str:
    # ... real LLM call only on cache miss
    return response

reply = await call_llm("Summarise the AGENTOBS RFC in one sentence.")
# Second call with a semantically identical prompt → instant cache hit, zero tokens spent
reply2 = await call_llm("Give me a one-sentence summary of the AGENTOBS RFC.")
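Semantic matching boils down to comparing embedding vectors with cosine similarity and accepting the best cached match above the threshold. A minimal sketch of that lookup, with a hypothetical cache layout (plain Python, no library internals):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def lookup(cache, query_vec, threshold=0.92):
    """Return the best cached response at or above the cutoff, else None (miss)."""
    best = max(
        cache,
        key=lambda item: cosine_similarity(item["vec"], query_vec),
        default=None,
    )
    if best and cosine_similarity(best["vec"], query_vec) >= threshold:
        return best["response"]
    return None

cache = [{"vec": [1.0, 0.0, 0.2], "response": "cached answer"}]
assert lookup(cache, [1.0, 0.05, 0.22]) == "cached answer"  # near-identical: hit
assert lookup(cache, [0.0, 1.0, 0.0]) is None               # unrelated: miss
```

In practice the vectors come from an embedding model, so two differently worded but semantically identical prompts land close together and hit the cache.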

8b — Lint your instrumentation in CI

from agentobs.lint import run_checks

with open("myapp/pipeline.py") as f:
    source = f.read()
errors = run_checks(source, filename="myapp/pipeline.py")

for err in errors:
    print(f"{err.filename}:{err.line}:{err.col}: {err.code} {err.message}")
# myapp/pipeline.py:42:12: AO002 actor_id receives a bare str; wrap with Redactable()

Or run the CLI against a whole directory:

python -m agentobs.lint myapp/
# AO001  Event() missing required field 'payload'     myapp/pipeline.py:17
# AO004  LLM call outside tracer span context         myapp/pipeline.py:53
# 2 errors in 1 file.

# Plug into flake8 / ruff automatically (entry-point registered in pyproject.toml):
flake8 myapp/

9 — Check compliance and inspect events from the command line

agentobs check                           # end-to-end health check (config → export → trace store)
agentobs check-compat events.json        # v2.0 compatibility checklist
agentobs validate events.jsonl           # JSON Schema validation per event
agentobs audit-chain events.jsonl        # verify HMAC signing chain integrity
agentobs inspect <EVENT_ID> events.jsonl # pretty-print a single event
agentobs stats events.jsonl              # summary: counts, tokens, cost, timestamps
agentobs list-deprecated                 # list all deprecated event types
agentobs migration-roadmap [--json]      # v2 migration roadmap
agentobs check-consumers                 # consumer registry compatibility check

Example check-compat output:

CHK-1  All required fields present          (500 / 500 events)
CHK-2  Event types valid                    (500 / 500 events)
CHK-3  Source identifiers well-formed       (500 / 500 events)
CHK-5  Event IDs are valid ULIDs            (500 / 500 events)
All checks passed.

Drop any of these into your CI pipeline to catch schema drift, signing failures, or schema-breaking migrations before they reach production.


What is inside the box

Module | What it does | For whom
agentobs.event | The core Event envelope — the one structure all tools share | Everyone
agentobs.types | All built-in event type strings (trace, cost, cache, eval, guard…) | Everyone
agentobs.config | configure() and get_config() — global SDK configuration | Everyone
agentobs._span | Span, AgentRun, AgentStep context managers — the runtime tracing API. Uses contextvars for safe async/thread context propagation. Supports async with, span.add_event(), span.set_timeout_deadline() | App developers
agentobs._trace | Trace object and start_trace() — high-level, imperative tracing entry point; accumulates all child spans | App developers
agentobs.debug | print_tree(), summary(), visualize() — terminal tree, stats dict, and self-contained HTML Gantt timeline | App developers
agentobs.metrics | aggregate() and MetricsSummary — compute success rates, latency percentiles, token totals, and cost breakdowns from any Iterable[Event] | Data / analytics engineers
agentobs._store | TraceStore — in-memory ring buffer; get_trace(), list_tool_calls(), list_llm_calls() | Platform / tooling engineers
agentobs._hooks | HookRegistry / hooks — global span lifecycle hooks: @hooks.on_llm_call, @hooks.on_tool_call, @hooks.on_agent_start, @hooks.on_agent_end. Async variants (@hooks.on_llm_call_async, @hooks.on_tool_call_async, @hooks.on_agent_start_async, @hooks.on_agent_end_async) are fired via asyncio.ensure_future() | App developers / platform
agentobs._cli | 9 CLI sub-commands: check, check-compat, validate, audit-chain, inspect, stats, list-deprecated, migration-roadmap, check-consumers | DevOps / CI teams
agentobs.redact | PII detection, sensitivity levels, redaction policies | Data privacy / GDPR teams
agentobs.signing | HMAC-SHA256 event signing and tamper-evident audit chains | Security / compliance teams
agentobs.compliance | Programmatic v2.0 compatibility checks — no pytest required | Platform / DevOps teams
agentobs.export | Ship events to files (JSONL), HTTP webhooks, OTLP collectors, Datadog APM, or Grafana Loki | Infra / observability teams
agentobs.exporters | Sync exporters — SyncJSONLExporter and SyncConsoleExporter for non-async code | App developers
agentobs.stream | Fan-out router — one drain() call reaches multiple backends; Kafka source via from_kafka() | Platform engineers
agentobs.validate | JSON Schema validation against the published v2.0 schema | All teams
agentobs.consumer | Declare schema-namespace dependencies; fail fast at startup if version requirements are not met | Platform / integration teams
agentobs.governance | Policy-based event gating — block prohibited types, warn on deprecated usage, enforce custom rules | Platform / compliance teams
agentobs.deprecations | Register and surface per-event-type deprecation notices at runtime | Library maintainers
agentobs.testing | Test utilities: MockExporter, capture_events() context manager, assert_event_schema_valid(), and the trace_store() isolated store context manager — write unit tests for your AI pipeline without real exporters | App developers / test authors
agentobs.auto | Integration auto-discovery: agentobs.auto.setup() auto-patches every installed LLM integration (OpenAI, Anthropic, Ollama, Groq, Together AI); setup() must be called explicitly, and agentobs.auto.teardown() cleanly unpatches all | App developers
agentobs.integrations | Plug-in adapters for OpenAI (auto-instrumentation via patch()), LangChain, LlamaIndex, Anthropic, Groq, Ollama, Together, and CrewAI (AgentOBSCrewAIHandler + patch()). agentobs.integrations._pricing ships a static USD-per-1M-token pricing table for all current OpenAI models | App developers
agentobs.namespaces | Typed payload dataclasses for all 10 built-in event namespaces | Tool authors
agentobs.models | Optional Pydantic v2 models for teams that prefer validated schemas | API / backend teams
agentobs.trace | @trace() decorator — wraps sync/async functions and auto-emits span start/end events with timing and error capture; agentobs.export.otlp_bridge converts spans to OTLP proto dicts | App developers
agentobs.cost | CostTracker, BudgetMonitor, @budget_alert, emit_cost_event(), cost_summary() — track and alert on token spend across a session | App developers / FinOps
agentobs.inspect | InspectorSession context manager + inspect_trace() — intercept and record tool-call arguments, results, latency, and errors within a trace | Platform / debugging
agentobs.toolsmith | @tool decorator + ToolRegistry — register functions as typed tools; build_openai_schema() / build_anthropic_schema() render JSON schemas for function-calling APIs | App developers
agentobs.retry | @retry with exponential back-off, FallbackChain, CircuitBreaker, CostAwareRouter — resilient LLM provider routing with observability events at each step | App developers / SREs
agentobs.cache | SemanticCache + @cached decorator — deduplicate LLM calls via cosine-similarity matching; pluggable backends (InMemoryBackend, SQLiteBackend, RedisBackend); emits llm.cache.* events | App developers / FinOps
agentobs.lint | run_checks(source, filename) — AST-based instrumentation linter; five AO codes (AO001–AO005); flake8 plugin; python -m agentobs.lint CLI | All teams / CI pipelines

Event namespaces

Every event carries a payload — a dictionary whose shape is defined by the event's namespace. The ten built-in namespaces cover everything from raw model traces to safety guardrails:

Namespace prefix | Dataclass | What it records
llm.trace.* | SpanPayload, AgentRunPayload, AgentStepPayload | Model call — tokens, latency, finish reason (frozen v2)
llm.cost.* | CostPayload | Per-call cost in USD
llm.cache.* | CachePayload | Cache hit/miss, backend, TTL
llm.eval.* | EvalScenarioPayload | Scores, labels, evaluator identity
llm.guard.* | GuardPayload | Safety classifier output, block decisions
llm.fence.* | FencePayload | Topic constraints, allow/block lists
llm.prompt.* | PromptPayload | Prompt template version, rendered text
llm.redact.* | RedactPayload | PII audit record — what was found and removed
llm.diff.* | DiffPayload | Prompt/response delta between two events
llm.template.* | TemplatePayload | Template registry metadata

from agentobs.namespaces.trace import SpanPayload
from agentobs import Event

payload = SpanPayload(
    span_name="call-llm",
    span_id="abc123",
    trace_id="def456",
    start_time_ns=1_000_000_000,
    end_time_ns=1_340_000_000,
    status="ok",
)

event = Event(
    event_type="llm.trace.span.completed",
    source="my-app@1.0.0",
    payload=payload.to_dict(),
)

Quality standards

  • 3,032 tests (2,990 passing, 42 skipped) — unit, integration, property-based (Hypothesis), and performance benchmarks
  • ≥92.84% line and branch coverage — measured with pytest-cov; a 90% minimum is enforced in CI
  • Zero required dependencies — the entire core runs on Python's standard library alone
  • Typed — full py.typed marker; works with mypy and pyright out of the box
  • Frozen v2 trace schema: llm.trace.* payload fields will never break between minor releases
  • Async-safe context propagation: contextvars-based span stacks work correctly across asyncio tasks, thread pools, and executors
  • Version 1.0.6 adds: agentobs.testing, agentobs.auto, async lifecycle hooks, agentobs check CLI, export retry with back-off, unpatch() / is_patched() for all integrations, frozen payload dataclasses, assert_no_sunset_reached()
  • Version 1.0.7 adds: @trace() decorator, OTLP bridge, CostTracker / BudgetMonitor, InspectorSession, ToolRegistry / @tool, @retry / FallbackChain / CircuitBreaker, SemanticCache / @cached, and agentobs.lint (AO001–AO005, flake8 plugin, CLI)
  • Version 2.0.0 adds: Trace / start_trace(), async with, span.add_event(), print_tree() / summary() / visualize(), sampling controls, metrics.aggregate(), TraceStore, HookRegistry, CrewAI integration

Project structure

agentobs/
├── __init__.py       <- Public API surface (start here)
├── event.py          <- The Event envelope
├── types.py          <- EventType enum  (+ SpanErrorCategory)
├── config.py         <- configure() / get_config() / AgentOBSConfig
│                        (sample_rate, always_sample_errors, include_raw_tool_io,
│                         enable_trace_store, trace_store_size)
├── _span.py          <- Span, AgentRun, AgentStep context managers
│                        (contextvars stacks, async with, add_event,
│                         record_error, set_timeout_deadline)
├── _trace.py         <- Trace class + start_trace()          [NEW in 2.0]
├── _tracer.py        <- Tracer — top-level tracing entry point
├── _stream.py        <- Internal dispatch: sample → redact → sign → export
├── _store.py         <- TraceStore ring buffer                [NEW in 2.0]
├── _hooks.py         <- HookRegistry singleton (hooks)        [NEW in 2.0]
├── _cli.py           <- CLI entry-point (9 sub-commands: check, check-compat, …)
├── trace.py          <- @trace() decorator + SpanOTLPBridge   [NEW in 1.0.7]
├── cost.py           <- CostTracker, BudgetMonitor, @budget_alert [NEW in 1.0.7]
├── inspect.py        <- InspectorSession, inspect_trace()     [NEW in 1.0.7]
├── toolsmith.py      <- @tool, ToolRegistry, build_openai_schema() [NEW in 1.0.7]
├── retry.py          <- @retry, FallbackChain, CircuitBreaker [NEW in 1.0.7]
├── cache.py          <- SemanticCache, @cached, *Backend      [NEW in 1.0.7]
├── lint/             <- run_checks(), AO001-AO005, flake8 plugin, CLI [NEW in 1.0.7]
│   ├── __init__.py
│   ├── _visitor.py
│   ├── _checks.py
│   ├── _flake8.py
│   └── __main__.py
├── testing.py        <- MockExporter, capture_events(), assert_event_schema_valid(),
│                        trace_store() — test utilities without real exporters [1.0.6]
├── auto.py           <- Integration auto-discovery; setup() / teardown()        [1.0.6]
├── debug.py          <- print_tree, summary, visualize        [NEW in 2.0]
├── metrics.py        <- aggregate(), MetricsSummary, etc.     [NEW in 2.0]
├── signing.py        <- HMAC signing & audit chains
├── redact.py         <- PII redaction
├── validate.py       <- JSON Schema validation
├── consumer.py       <- Consumer registry & schema-version compatibility
├── governance.py     <- Event governance policies
├── deprecations.py   <- Per-event-type deprecation tracking
├── compliance/       <- Compatibility checklist suite
├── export/
│   ├── jsonl.py      <- Local file export (async)
│   ├── webhook.py    <- HTTP POST export
│   ├── otlp.py       <- OpenTelemetry export
│   ├── datadog.py    <- Datadog APM traces + metrics
│   └── grafana.py    <- Grafana Loki export
├── exporters/
│   ├── jsonl.py      <- SyncJSONLExporter
│   └── console.py    <- SyncConsoleExporter
├── stream.py         <- EventStream fan-out router (+ Kafka source)
├── integrations/
│   ├── langchain.py  <- LangChain callback handler
│   ├── llamaindex.py <- LlamaIndex event handler
│   ├── openai.py     <- OpenAI tracing wrapper
│   ├── crewai.py     <- CrewAI handler + patch()              [NEW in 2.0]
│   └── ...           (anthropic, groq, ollama, together)
├── namespaces/       <- Typed payload dataclasses
│   ├── trace.py        (SpanPayload + temperature/top_p/max_tokens/error_category,
│   │                    SpanEvent, ToolCall + arguments_raw/result_raw/retry_count)
│   ├── cost.py
│   ├── cache.py
│   └── ...
├── models.py         <- Optional Pydantic v2 models
└── migrate.py        <- Schema migration helpers
examples/             <- Runnable sample scripts
├── openai_chat.py    <- OpenAI + JSONL export
├── agent_workflow.py <- Multi-step agent + console exporter
├── langchain_chain.py <- LangChain callback handler
└── secure_pipeline.py <- HMAC signing + PII redaction together

Development setup

git clone https://github.com/veerarag1973/agentobs.git
cd agentobs

python -m venv .venv
.venv\Scripts\activate          # Windows
# source .venv/bin/activate     # macOS / Linux

pip install -e ".[dev]"
pytest                          # run all 3,032 tests

Code quality commands:

ruff check .                    # linting
ruff format .                   # auto-format
mypy agentobs                   # type checking
pytest --cov                    # tests + coverage report (>=90% required)

Build the docs locally:

pip install -e ".[docs]"
cd docs
sphinx-build -b html . _build/html   # open _build/html/index.html

Compatibility and versioning

agentobs implements RFC-0001 AGENTOBS (Observability Schema Standard for Agentic AI Systems). The current schema version is 2.0.

This project follows Semantic Versioning:

  • Patch releases (1.0.x) — bug fixes only, fully backwards-compatible
  • Minor releases (1.x.0) — new features, backwards-compatible
  • Major releases (x.0.0) — breaking changes, announced in advance

The llm.trace.* namespace payload schema is additionally frozen at v2: even a major release will not remove or rename fields from SpanPayload, AgentRunPayload, or AgentStepPayload.


Changelog

See docs/changelog.md for the full version history.


Contributing

Contributions are welcome! Please read the Contributing Guide first, then open an issue or pull request.

Key rules:

  • All new code must maintain >= 90 % test coverage
  • Follow the existing Google-style docstrings
  • Run ruff and mypy before submitting

License

MIT — free for personal and commercial use.


Made with care for the AI observability community.
Docs · Quickstart · API Reference · Report a bug



Download files

Source distribution: agentobs-1.0.8.tar.gz (993.6 kB), uploaded via twine/6.2.0 on CPython 3.13.9.

Algorithm | Hash digest
SHA256 | bc2dec00af92921f879728ed3864519b673c272dc484e694959c995c6e4064f2
MD5 | 8431d1132adf7f1746d5c3d5609e8722
BLAKE2b-256 | 69015fea4e7fb91b7904c195eb17194953b1011be22bf5dadd49b8784b487494

Built distribution: agentobs-1.0.8-py3-none-any.whl (263.5 kB), uploaded via twine/6.2.0 on CPython 3.13.9.

Algorithm | Hash digest
SHA256 | e270720734d27552efb3cd2a6304a8c883540d66399161e32e45badb0d2fa47e
MD5 | 14f9e75687e9f6c77c549060ba88db94
BLAKE2b-256 | 77ab1e1cd8a3dd1f94720c067dedc60864a6df2390eb2a6fd3d676119b5430b7
