
Tracentic Python SDK

LLM observability with scoped tracing and OTLP export for Python applications.

Installation

pip install tracentic

Requires Python 3.10+. The only runtime dependency is httpx.

Endpoint

Point the SDK at the Tracentic ingestion endpoint by setting endpoint="https://tracentic.dev" on TracenticOptions. This is the hosted service URL that receives spans over OTLP/HTTP JSON - use it unless you're running a self-hosted Tracentic deployment, in which case set your own URL.

tracentic = create_tracentic(TracenticOptions(
    api_key="your-api-key",
    endpoint="https://tracentic.dev",
    service_name="my-service",
))

Quick start

import asyncio
from datetime import datetime, timezone

from tracentic import (
    ModelPricing,
    TracenticOptions,
    TracenticSpan,
    create_tracentic,
)

tracentic = create_tracentic(TracenticOptions(
    api_key="your-api-key",
    endpoint="https://tracentic.dev",
    service_name="my-service",
    environment="production",
    # Required for cost tracking. Without this, llm.cost.total_usd is
    # omitted and the SDK warns once per unpriced model.
    custom_pricing={
        "claude-sonnet-4-20250514": ModelPricing(3.0, 15.0),
        "gpt-4o": ModelPricing(2.5, 10.0),
    },
))

async def summarize(text: str) -> str:
    scope = tracentic.begin("summarize", attributes={"user_id": "user-123"})

    started_at = datetime.now(timezone.utc)
    result = await call_llm(text)
    ended_at = datetime.now(timezone.utc)

    # Pass span fields as keyword arguments - no need to construct
    # TracenticSpan manually. (You can still pass a TracenticSpan
    # instance if you prefer.)
    tracentic.record_span(
        scope,
        started_at=started_at,
        ended_at=ended_at,
        provider="anthropic",
        model="claude-sonnet-4-20250514",
        input_tokens=result.usage.input_tokens,
        output_tokens=result.usage.output_tokens,
        operation_type="chat",
    )

    return result.text

Singleton pattern

If you prefer a global instance:

from tracentic import TracenticOptions, configure, get_tracentic

# At startup
configure(TracenticOptions(api_key="...", service_name="my-service"))

# Anywhere else
tracentic = get_tracentic()

Features

Scoped tracing

Group related LLM calls under a logical scope. Nest scopes for multi-step pipelines:

pipeline = tracentic.begin("rag-pipeline", correlation_id="order-42")

# Child scope inherits the parent link automatically
synthesis = pipeline.create_child("synthesis", attributes={"strategy": "hybrid"})
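The parent link threading that `create_child` performs can be sketched in plain Python. The `Scope` class below is an illustrative stand-in for the SDK's scope object, not its actual implementation:

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Scope:
    """Illustrative stand-in for the SDK's scope type, not the real class."""
    name: str
    parent_id: Optional[str] = None
    attributes: dict = field(default_factory=dict)
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def create_child(self, name: str, attributes: Optional[dict] = None) -> "Scope":
        # The child carries this scope's id as its parent link.
        return Scope(name=name, parent_id=self.id, attributes=dict(attributes or {}))


pipeline = Scope("rag-pipeline")
synthesis = pipeline.create_child("synthesis", {"strategy": "hybrid"})
# synthesis.parent_id == pipeline.id
```

Each child gets its own unique id, so nesting can continue to arbitrary depth while the parent chain stays intact.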

Error recording

Attach an exception to a scope/span pair:

tracentic.record_error(scope, span, RuntimeError("rate limited"))

Scopeless spans

For standalone LLM calls that don't belong to a larger operation:

tracentic.record_span(TracenticSpan(
    started_at=started_at,
    ended_at=ended_at,
    provider="openai",
    model="gpt-4o-mini",
    input_tokens=200,
    output_tokens=50,
    operation_type="chat",
))

Custom pricing

custom_pricing is required for cost tracking. The SDK does not ship with built-in pricing because model prices change frequently and vary by contract. If a span has token data but no matching pricing entry, llm.cost.total_usd is omitted and the SDK logs a warning once per model on the tracentic logger.

from tracentic import ModelPricing

tracentic = create_tracentic(TracenticOptions(
    api_key="...",
    custom_pricing={
        "claude-sonnet-4-20250514": ModelPricing(3.0, 15.0),
        "gpt-4o": ModelPricing(2.5, 10.0),
    },
))
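Based on the example values above (which match typical USD-per-million-token list prices), the cost arithmetic would look like the sketch below. The per-million-token unit is an assumption inferred from those examples, not a documented contract:

```python
# Sketch of deriving llm.cost.total_usd from a pricing entry.
# Assumes ModelPricing(input, output) holds USD per million tokens --
# an assumption inferred from the example values, not a documented contract.

def cost_usd(input_tokens: int, output_tokens: int,
             input_per_mtok: float, output_per_mtok: float) -> float:
    return (input_tokens * input_per_mtok
            + output_tokens * output_per_mtok) / 1_000_000

# 200 prompt tokens and 50 completion tokens at the gpt-4o example rates:
total = cost_usd(200, 50, 2.5, 10.0)  # 0.0005 + 0.0005 = 0.001
```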

Global attributes

Pass global_attributes to create_tracentic() via TracenticOptions to tag every span this service emits with the same static values - region, deployment version, owning team, cluster name. They're resolved once at startup and merged into every span without per-call bookkeeping:

tracentic = create_tracentic(TracenticOptions(
    api_key="...",
    service_name="my-service",
    environment="production",
    global_attributes={
        "region": "us-east-1",
        "version": "2.1.0",
        "team": "platform",
    },
))

# Every span this service emits now carries region, version, team.

Scope and per-span attributes override global values on key collision, so global_attributes is the right layer for defaults you want everywhere unless something more specific says otherwise:

scope = tracentic.begin("request", attributes={"region": "us-west-2"})
# Spans in this scope carry region="us-west-2" (scope wins over global).
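The three-layer priority (span over scope over global) amounts to a last-writer-wins dict merge. A minimal sketch of that behavior, with illustrative names rather than the SDK's internals:

```python
# Sketch of the merge priority: span > scope > global.
# Function and variable names here are illustrative, not SDK internals.

def merge_attributes(global_attrs: dict, scope_attrs: dict, span_attrs: dict) -> dict:
    merged = dict(global_attrs)
    merged.update(scope_attrs)   # scope overrides global on key collision
    merged.update(span_attrs)    # span overrides both
    return merged

merged = merge_attributes(
    {"region": "us-east-1", "team": "platform"},   # global_attributes
    {"region": "us-west-2"},                       # scope attributes
    {},                                            # per-span attributes
)
# merged["region"] == "us-west-2", merged["team"] == "platform"
```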

For values that change after startup - a deploy ID rotated by a background job, a maintenance-mode flag - use TracenticGlobalContext to set/remove entries at runtime:

from tracentic import TracenticGlobalContext

TracenticGlobalContext.current.set("deploy_id", "deploy-abc")
# ... spans recorded now include deploy_id ...
TracenticGlobalContext.current.remove("deploy_id")

TracenticGlobalContext is process-wide (not contextvars-based), so values set from one request's handler will leak into every other request running concurrently. For ambient per-request data (user ID, tenant, request ID), use the ASGI middleware below instead.
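The per-request isolation the middleware provides can be sketched with the stdlib contextvars module, which gives each asyncio task its own copy of a value. Whether the SDK uses contextvars internally is an assumption; this only illustrates why task-local state does not leak the way a process-wide dict does:

```python
import asyncio
import contextvars

# Task-local attribute store: each asyncio task sees its own value,
# unlike a process-wide dict shared across concurrent requests.
request_attrs: contextvars.ContextVar[dict] = contextvars.ContextVar(
    "request_attrs", default={}
)

async def handle(user_id: str) -> str:
    request_attrs.set({"user_id": user_id})
    await asyncio.sleep(0)                   # yield so tasks interleave
    return request_attrs.get()["user_id"]    # still this request's value

async def main():
    # Two concurrent "requests"; neither sees the other's attributes.
    return await asyncio.gather(handle("alice"), handle("bob"))

results = asyncio.run(main())  # ["alice", "bob"] - no cross-request leakage
```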

ASGI middleware

Inject per-request attributes for the duration of each HTTP request. Works with FastAPI, Starlette, and any ASGI framework:

from tracentic.middleware.asgi import TracenticMiddleware

app = TracenticMiddleware(
    app,
    request_attributes=lambda scope: {
        "method": scope.get("method"),
        "path": scope.get("path"),
    },
)

Cross-service linking

Tracentic does not propagate scope IDs automatically - you pass them explicitly through whatever transport connects your services (HTTP headers, message properties, etc.).

For cross-service linking to work, both services must integrate the Tracentic SDK (or implement the OTLP JSON ingest API directly) and their API keys must belong to the same tenant. Spans from different tenants are isolated and cannot be linked.

Use the exported TRACENTIC_SCOPE_HEADER constant on both ends rather than a string literal - typos silently break linking.

Via HTTP header:

import httpx

from tracentic import TRACENTIC_SCOPE_HEADER

# Service A - outgoing request
scope = tracentic.begin("gateway-handler")
async with httpx.AsyncClient() as client:
    response = await client.post(
        "https://worker.internal/process",
        headers={TRACENTIC_SCOPE_HEADER: scope.id},
    )

# Service B - incoming request (FastAPI example)
from fastapi import Request

@app.post("/process")
async def process(request: Request):
    parent_scope_id = request.headers.get(TRACENTIC_SCOPE_HEADER)
    linked = tracentic.begin("worker", parent_scope_id=parent_scope_id)

Via message queue:

from tracentic import TRACENTIC_SCOPE_HEADER

# Producer
scope = tracentic.begin("order-processor")
await queue.send(
    body=payload,
    properties={TRACENTIC_SCOPE_HEADER: scope.id},
)

# Consumer
async def handle(message):
    parent_scope_id = message.properties[TRACENTIC_SCOPE_HEADER]
    linked = tracentic.begin("fulfillment", parent_scope_id=parent_scope_id)

Shutdown

Buffered spans are flushed automatically at process exit via an atexit handler, so you don't need to call shutdown() in normal use. Call it explicitly only if you want to flush at a specific point (e.g. before forking or when atexit won't run, such as on os._exit() or fatal signals):

await tracentic.shutdown()

Serverless (AWS Lambda, Google Cloud Functions)

Serverless runtimes freeze or kill the process between invocations, so the atexit handler may never fire and any spans still in the buffer are lost. Always await tracentic.shutdown() before your handler returns:

async def handler(event, context):
    try:
        return await do_work(event)
    finally:
        # Flush before the runtime freezes the container
        await tracentic.shutdown()

Without this, you will see spans appear inconsistently - only when a container happens to be reused and the next invocation triggers a flush.

Configuration reference

| Option | Default | Description |
| --- | --- | --- |
| api_key | None | API key. If None, spans are created locally but not exported |
| service_name | "unknown-service" | Service identifier in the dashboard |
| endpoint | "https://tracentic.dev" | Tracentic ingestion endpoint (the hosted service). Override only for self-hosted deployments |
| environment | "production" | Deployment environment tag |
| custom_pricing | None | Model pricing for cost calculation |
| global_attributes | None | Static attributes on every span |
| attribute_limits | platform defaults | Limits on attribute count and key/value length |
| export_timeout_s | 30.0 | Per-request timeout in seconds for OTLP exports |
| debug | False | Enable verbose diagnostic logging (see Debugging below) |

Debugging

By default the SDK only logs warnings and errors - export failures, queue overflow, and missing pricing entries. To see the full lifecycle of spans (enqueue, flush, export success, shutdown), enable debug mode:

tracentic = create_tracentic(TracenticOptions(
    api_key="...",
    service_name="my-service",
    debug=True,
))

With debug=True, the SDK sets the tracentic logger to DEBUG level and attaches a stream handler (if none exists) so messages appear on stderr. Example output:

[tracentic] debug logging enabled
[tracentic] enqueued span 'llm.anthropic.chat' (queue: 1)
[tracentic] flushing 1 span(s) to https://tracentic.dev/v1/ingest
[tracentic] export succeeded: 200 (1 spans)
[tracentic] shutting down exporter...
[tracentic] exporter shutdown complete

Warnings are always emitted regardless of the debug flag:

WARNING:tracentic:Tracentic export failed: 401 Unauthorized - {"error":"invalid api key"}
WARNING:tracentic:Tracentic export error: ConnectError(...)
WARNING:tracentic:Tracentic export queue full (512) - dropping oldest span
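The "dropping oldest span" policy in the last warning is the behavior of a bounded FIFO. A stdlib sketch of that policy (the SDK's actual queue implementation is not documented here):

```python
from collections import deque

# A deque with maxlen evicts from the opposite end on append, so the
# oldest entry is dropped first -- the overflow policy the warning describes.
queue: deque = deque(maxlen=3)
for span in ["span-1", "span-2", "span-3", "span-4"]:
    queue.append(span)

remaining = list(queue)  # ["span-2", "span-3", "span-4"] - span-1 was dropped
```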

If you already configure the tracentic logger elsewhere (e.g. via logging.basicConfig or a framework), you can skip the debug flag and set the level yourself:

import logging
logging.getLogger("tracentic").setLevel(logging.DEBUG)

Export timeout

The export_timeout_s option controls the per-request timeout for OTLP exports (default: 30 seconds). If exports are timing out in your environment (e.g. CI runners, serverless cold starts), increase it:

tracentic = create_tracentic(TracenticOptions(
    api_key="...",
    export_timeout_s=60.0,  # 60 seconds
))

Development

python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"

Running tests

# All tests
pytest

# Verbose output
pytest -v

# A single test file
pytest tests/test_scope.py

# A single test
pytest tests/test_scope.py::TestTracenticScope::test_create_child_sets_parent_id

Test files

| File | What it covers |
| --- | --- |
| test_tracentic.py | SDK factory, singleton, begin/record_span/record_error, cost calculation |
| test_scope.py | Scope creation, nesting, defensive copying, unique IDs |
| test_global_context.py | Global context set/get/remove, singleton access, snapshots |
| test_attribute_merger.py | Three-layer merge priority, key/value truncation, count cap |
| test_options.py | AttributeLimits defaults, clamping, platform constants |
| test_exporter.py | OTLP JSON structure, endpoint, headers, overflow, error handling |

Linting and type checking

ruff check src/ tests/
mypy src/
