# catalyst-tracing

First-party OpenInference-shaped tracing for Python LLM and agent applications running on Catalyst by Inference.net.
catalyst-tracing gives you one Python package for instrumenting common model
SDKs, agent frameworks, and custom agent work. It emits OpenTelemetry spans with
OpenInference-compatible attributes over OTLP/HTTP so Catalyst can display model
calls, tool calls, prompts, responses, token usage, and parent-child agent flows.
This package is currently in beta. APIs may change before 1.0, but the package
name and import path are intended to remain stable.
## Install

Install the base tracing runtime:

```bash
pip install catalyst-tracing
```

Install only the integrations your application uses:

```bash
pip install 'catalyst-tracing[openai]'
pip install 'catalyst-tracing[anthropic]'
pip install 'catalyst-tracing[langchain]'
pip install 'catalyst-tracing[langgraph]'
pip install 'catalyst-tracing[langsmith]'
pip install 'catalyst-tracing[openai-agents]'
pip install 'catalyst-tracing[claude-agent-sdk]'
pip install 'catalyst-tracing[pydantic-ai]'
```

You can combine extras:

```bash
pip install 'catalyst-tracing[openai,anthropic,langchain]'
```
## Quick Start

Set your Catalyst endpoint and token:

```bash
export CATALYST_OTLP_ENDPOINT="https://your-catalyst-otlp-endpoint"
export CATALYST_OTLP_TOKEN="your-token"
export CATALYST_SERVICE_NAME="checkout-agent"
```

Initialize tracing before creating SDK clients:

```python
from catalyst_tracing import setup
from openai import OpenAI

tracing = setup()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this order."}],
)

tracing.shutdown()
```
The OpenAI call is captured as an OpenInference-shaped LLM span and exported to Catalyst through OTLP/HTTP.
## What It Instruments

| Integration | Install extra | What is captured |
|---|---|---|
| OpenAI | `openai` | Chat Completions, Responses, sync clients, and async clients |
| Anthropic | `anthropic` | Messages API calls, sync clients, and async clients |
| LangChain | `langchain` | Callback-manager driven chain, model, tool, and retriever spans |
| LangGraph | `langgraph` | Graph and node spans through the LangChain callback path |
| LangSmith | `langsmith` | LangSmith OpenTelemetry spans bridged into the Catalyst provider |
| OpenAI Agents | `openai-agents` | Agent runs plus nested OpenAI model spans |
| Claude Agent SDK | `claude-agent-sdk` | `query()` calls and yielded agent messages |
| Pydantic AI | `pydantic-ai` | Pydantic AI's native OpenTelemetry instrumentation |
The base package includes the tracing runtime. Extras install the upstream SDKs themselves so you can keep production environments narrow.
## LangSmith Decorator Integration

Applications that already use LangSmith decorators do not need Catalyst-specific wrappers around those functions. Keep using LangSmith's `@traceable` decorator and initialize Catalyst once at process startup:

```python
import os

from catalyst_tracing import setup
from langsmith import Client, traceable

os.environ["LANGSMITH_TRACING"] = "true"

tracing = setup(service_name="support-agent")
client = Client()

@traceable(name="lookup_order", run_type="tool", client=client, enabled=True)
def lookup_order(order_id: str) -> str:
    return f"order {order_id} is shipped"

lookup_order("ABC-123")

client.flush()
tracing.shutdown()
```
When `LANGSMITH_TRACING=true` is set and no LangSmith OTel mode is configured, Catalyst defaults LangSmith to hybrid mode: LangSmith still receives its own traces, and Catalyst receives the OpenTelemetry spans from the same decorators. Set `LANGSMITH_TRACING_MODE=otel` when you want LangSmith to emit only OTel spans for this process.
The SDK enriches LangSmith spans before export so Catalyst can render them like the rest of your agent trace:
| LangSmith signal | Catalyst/OpenInference attribute |
|---|---|
| `run_type` / `langsmith.span.kind` | `openinference.span.kind` |
| `gen_ai.prompt` | `input.value` with `input.mime_type=application/json` |
| `gen_ai.completion` | `output.value` with `output.mime_type=application/json` |
| tool trace name | `tool.name` |
| agent trace name | `agent.name` |
| LangSmith model metadata | `llm.model_name`, `llm.provider` |
This also works with LangChain or LangGraph spans emitted under a LangSmith-decorated root span. Catalyst captures the LangChain model, tool, and retriever spans through its LangChain integration and preserves the LangSmith decorator spans in the same trace tree.
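The signal mapping in the table above can be sketched in plain Python. This is an illustrative reconstruction of the enrichment, not the SDK's actual code, and the function name `enrich_langsmith_attrs` is hypothetical:

```python
# Hypothetical sketch of the LangSmith -> OpenInference attribute bridging
# described in the table above; not the SDK's actual implementation.
def enrich_langsmith_attrs(attrs: dict) -> dict:
    out = dict(attrs)
    # run_type / langsmith.span.kind -> openinference.span.kind
    kind = attrs.get("langsmith.span.kind") or attrs.get("run_type")
    if kind:
        out["openinference.span.kind"] = str(kind).upper()
    # gen_ai.prompt -> input.value (+ JSON mime type)
    if "gen_ai.prompt" in attrs:
        out["input.value"] = attrs["gen_ai.prompt"]
        out["input.mime_type"] = "application/json"
    # gen_ai.completion -> output.value (+ JSON mime type)
    if "gen_ai.completion" in attrs:
        out["output.value"] = attrs["gen_ai.completion"]
        out["output.mime_type"] = "application/json"
    return out

enriched = enrich_langsmith_attrs(
    {"run_type": "tool", "gen_ai.prompt": '{"order_id": "ABC-123"}'}
)
```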
## Public API

Most applications only need `setup()`:

```python
from catalyst_tracing import setup

tracing = setup(
    service_name="support-agent",
    service_version="0.4.0",
)
```
`setup()` returns a `CatalystTracing` handle with:

| Attribute | Purpose |
|---|---|
| `provider` | OpenTelemetry `TracerProvider` configured for Catalyst export |
| `tracer` | Tracer for manual spans |
| `install_results` | Per-integration install results |
| `shutdown()` | Flush and close tracing before process exit |
You can also import integration installers directly:

```python
from catalyst_tracing import setup
from catalyst_tracing.openai import install_openai

tracing = setup(auto_instrument=False)
install_openai(tracing.provider)
```
Available entry-point modules:

| Import | Export |
|---|---|
| `catalyst_tracing.openai` | `install_openai` |
| `catalyst_tracing.anthropic` | `install_anthropic` |
| `catalyst_tracing.langchain` | `install_langchain` |
| `catalyst_tracing.langgraph` | `install_langgraph` |
| `catalyst_tracing.langsmith` | `install_langsmith` |
| `catalyst_tracing.openai_agents` | `install_openai_agents` |
| `catalyst_tracing.claude_agent_sdk` | `install_claude_agent_sdk` |
| `catalyst_tracing.pydantic_ai` | `install_pydantic_ai` |
## Manual Agent Spans

Use `manual_span()` when work does not go through a supported SDK, such as a custom router, planner, evaluator, provider-failover step, or tool executor. It accepts OpenInference span kinds, provider-shaped usage payloads, structured inputs/outputs, and OTel-safe custom attributes.

```python
from catalyst_tracing import SpanKindValues, manual_span, setup

tracing = setup()

with manual_span(
    tracing.tracer,
    name="question_generation/bloom",
    span_kind=SpanKindValues.CHAIN,
    system="fireworks",
    input={"template_id": "bloom", "total_questions": 10},
    model="accounts/fireworks/models/gpt-oss-120b",
    metadata={"deck_id": "deck_123"},
    tags=["question-gen-agent", "template:bloom"],
) as span:
    result = run_question_generation()
    span.set_output({"question_count": len(result.questions)})
    span.record_usage(result.usage)

tracing.shutdown()
```
`agent_span()` remains available as a focused convenience wrapper for AGENT spans:

```python
from catalyst_tracing import agent_span, setup

tracing = setup()

with agent_span(tracing.tracer, name="RefundReviewAgent", system="internal") as span:
    span.set_input("Review refund request #1842")
    decision = run_refund_review()
    span.set_output(decision.summary)
    span.record_tokens(prompt=820, completion=160)

tracing.shutdown()
```
Any child spans created inside the context automatically parent under the agent span through standard OpenTelemetry context propagation.
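That parenting behavior is standard OpenTelemetry context propagation. The stdlib-only toy below illustrates the mechanism (names like `toy_span` are illustrative and not part of this package):

```python
import contextvars
from contextlib import contextmanager

# Toy model of OTel context propagation: the "current span" lives in a
# context variable; each new span reads it to determine its parent.
_current = contextvars.ContextVar("current_span", default=None)

@contextmanager
def toy_span(name):
    parent = _current.get()
    span = {"name": name, "parent": parent["name"] if parent else None}
    token = _current.set(span)
    try:
        yield span
    finally:
        # Restore the previous "current span" on exit.
        _current.reset(token)

with toy_span("RefundReviewAgent") as agent:
    with toy_span("lookup_order") as child:
        pass
```

Real OTel tracers do the same thing under the hood, which is why spans created inside the `with` block need no explicit parent argument.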
When you already have an active span, reusable helpers are exported for common manual instrumentation tasks:

```python
from catalyst_tracing import record_span_usage, set_span_attributes

record_span_usage(span, {"prompt_tokens": 820, "completion_tokens": 160})
set_span_attributes(span, {"metadata": {"deck_id": "deck_123"}})
```
## Configuration

You can configure tracing with keyword arguments or environment variables.

| Option | Environment variable | Default |
|---|---|---|
| `endpoint` | `CATALYST_OTLP_ENDPOINT` | `http://localhost:8799` |
| `token` | `CATALYST_OTLP_TOKEN` | unset |
| `service_name` | `CATALYST_SERVICE_NAME` | generated `catalyst-app-*` name |
| `service_version` | `CATALYST_SERVICE_VERSION` | `0.0.5` |
| `debug` | `CATALYST_DEBUG` | `false` |
| `batching` | none | `"batch"` |
Legacy OTLP_ENDPOINT, OTLP_INGEST_TOKEN, and SERVICE_NAME variables are
also accepted for compatibility.
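A minimal sketch of how a single option might be resolved, assuming the usual precedence of explicit keyword argument over primary env var over legacy env var over default (the function `resolve_endpoint` is illustrative, not the package's actual code):

```python
import os

def resolve_endpoint(endpoint=None):
    # Explicit keyword argument wins, then the primary variable,
    # then the legacy variable, then the documented default.
    return (
        endpoint
        or os.environ.get("CATALYST_OTLP_ENDPOINT")
        or os.environ.get("OTLP_ENDPOINT")
        or "http://localhost:8799"
    )

# Clear both variables so the default is exercised.
os.environ.pop("CATALYST_OTLP_ENDPOINT", None)
os.environ.pop("OTLP_ENDPOINT", None)

default = resolve_endpoint()
explicit = resolve_endpoint("https://collector.example.com")
```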
## Span Shape

Spans use OpenInference-style semantic attributes so LLM-aware viewers can understand them without custom adapters:

| Attribute family | Examples |
|---|---|
| Span kind | `openinference.span.kind` |
| Inputs and outputs | `input.value`, `output.value` |
| Messages | `llm.input_messages.*`, `llm.output_messages.*` |
| Model metadata | `llm.model_name`, `llm.invocation_parameters` |
| Token counts | `llm.token_count.prompt`, `llm.token_count.completion`, `llm.token_count.total` |
| Provider/system | `gen_ai.system` |
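As an illustration of the token-count family, an OpenAI-style usage payload maps onto these attributes roughly as follows (a sketch; the attribute names come from the table above, the helper itself is hypothetical):

```python
# Hypothetical mapping from an OpenAI-style usage payload to the
# OpenInference token-count attributes listed above.
def usage_to_attributes(usage: dict) -> dict:
    mapping = {
        "prompt_tokens": "llm.token_count.prompt",
        "completion_tokens": "llm.token_count.completion",
        "total_tokens": "llm.token_count.total",
    }
    return {attr: usage[key] for key, attr in mapping.items() if key in usage}

attrs = usage_to_attributes(
    {"prompt_tokens": 820, "completion_tokens": 160, "total_tokens": 980}
)
```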
Constants are exported for custom spans:

```python
from catalyst_tracing import Attr, SpanKindValues

span.set_attribute(Attr.SPAN_KIND, SpanKindValues.LLM.value)
span.set_attribute(Attr.MODEL_NAME, "gpt-4o-mini")
```
## Error Handling

The package raises typed errors for misuse and returns structured install results for optional integrations:

```python
from catalyst_tracing import CatalystTracingError, InvalidTracerProviderError
from catalyst_tracing.openai import install_openai

try:
    result = install_openai(provider)
except InvalidTracerProviderError as exc:
    print(exc.code)
except CatalystTracingError:
    raise
```
Each installer returns an `InstrumentResult` with:

| Field | Meaning |
|---|---|
| `name` | Integration name |
| `installed` | Whether instrumentation was installed |
| `code` | Stable status code such as `INSTALLED` or `SDK_NOT_INSTALLED` |
| `reason` | Human-readable detail when installation is skipped |
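A common startup pattern is to log the integrations that were skipped. A sketch, assuming the fields documented above (the dataclass here only mirrors that shape for illustration; the real class ships with catalyst-tracing):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in mirroring the documented InstrumentResult fields.
@dataclass
class InstrumentResult:
    name: str
    installed: bool
    code: str
    reason: Optional[str] = None

results = [
    InstrumentResult("openai", True, "INSTALLED"),
    InstrumentResult("anthropic", False, "SDK_NOT_INSTALLED", "anthropic not importable"),
]

# Surface anything that did not instrument so it shows up in startup logs.
skipped = [r for r in results if not r.installed]
for r in skipped:
    print(f"tracing skipped {r.name}: {r.code} ({r.reason})")
```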
## Package Names

The primary package is `catalyst-tracing` and the primary import path is `catalyst_tracing`.

Inference also publishes `inference-catalyst-tracing` as a company-qualified install name. It depends on this package and re-exports the same public API from the `inference_catalyst_tracing` import path.