
Tracia

LLM prompt management and tracing SDK for Python

Requires Python 3.10+ · License: MIT

What is Tracia?

Tracia is a modern LLM prompt management and tracing platform. This Python SDK provides:

  • Unified LLM Access - Call OpenAI, Anthropic, Google, and 100+ providers through a single interface (powered by LiteLLM)
  • Automatic Tracing - Every LLM call is automatically traced with latency, token usage, and cost
  • Prompt Management - Store, version, and manage your prompts in the cloud
  • Session Linking - Easily link related calls for multi-turn conversations

Installation

pip install tracia

You'll also need API keys for the LLM providers you want to use:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
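Since provider keys are read from the environment, it can help to fail fast before making any calls. A minimal sketch using only the standard library; the key names mirror the exports above, and `require_keys` is a hypothetical helper, not part of the SDK:

```python
import os

def require_keys(*names: str) -> None:
    """Raise early if any expected provider API key is unset."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"Missing API keys: {', '.join(missing)}")

# Check only the providers you actually call, e.g.:
# require_keys("OPENAI_API_KEY", "ANTHROPIC_API_KEY")
```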

Quick Start

from tracia import Tracia

# Initialize the client
client = Tracia(api_key="your_tracia_api_key")

# Run a local prompt
result = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(result.text)
print(f"Tokens: {result.usage.total_tokens}")

Streaming

# Stream the response
stream = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    print(chunk, end="", flush=True)

# Get the final result (stream.result is a Future[StreamResult])
final = stream.result.result()
print(f"\nTotal tokens: {final.usage.total_tokens}")

Multi-turn Conversations with Sessions

# Create a session for linked conversations
session = client.create_session()

# First message
r1 = session.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "My name is Alice"}]
)

# Follow-up - automatically linked to the same trace
r2 = session.run_local(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "My name is Alice"},
        {"role": "assistant", "content": r1.text},
        {"role": "user", "content": "What's my name?"}
    ]
)

Function Calling

from tracia import ToolDefinition, ToolParameters, JsonSchemaProperty

# Define a tool
tools = [
    ToolDefinition(
        name="get_weather",
        description="Get the current weather",
        parameters=ToolParameters(
            properties={
                "location": JsonSchemaProperty(
                    type="string",
                    description="City name"
                )
            },
            required=["location"]
        )
    )
]

result = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools
)

if result.tool_calls:
    for call in result.tool_calls:
        print(f"Tool: {call.name}, Args: {call.arguments}")
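The SDK returns the model's tool calls; executing them is up to you. Below is a hedged sketch of a simple dispatch loop. The `ToolCall` dataclass is a stand-in for the SDK's tool-call object (which exposes `.name` and `.arguments` as shown above); whether `.arguments` arrives as a JSON string or an already-parsed dict may vary, and this sketch assumes a JSON string. The dispatch table and `run_tools` helper are illustrative, not part of Tracia:

```python
import json
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ToolCall:
    """Stand-in for the SDK's tool-call object."""
    name: str
    arguments: str  # JSON-encoded arguments, as most providers return them

def get_weather(location: str) -> dict[str, Any]:
    # Replace with a real weather lookup.
    return {"location": location, "forecast": "sunny"}

# Map tool names (as declared in ToolDefinition) to local functions.
DISPATCH: dict[str, Callable[..., Any]] = {"get_weather": get_weather}

def run_tools(tool_calls: list[ToolCall]) -> list[Any]:
    """Execute each requested tool and collect its result."""
    results = []
    for call in tool_calls:
        fn = DISPATCH[call.name]
        results.append(fn(**json.loads(call.arguments)))
    return results
```

The results would then typically be appended to `messages` as tool-result turns before calling the model again.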

Variable Interpolation

result = client.run_local(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant named {{name}}."},
        {"role": "user", "content": "Hello!"}
    ],
    variables={"name": "Claude"}
)
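The `{{name}}` syntax suggests Mustache-style substitution into message content. A minimal local sketch of what the interpolation likely does; the SDK's actual behavior (for example, how missing variables are handled) may differ:

```python
import re

def interpolate(template: str, variables: dict[str, str]) -> str:
    """Replace {{key}} placeholders; leave unknown keys untouched."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        template,
    )
```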

Prompts API

# List all prompts
prompts = client.prompts.list()

# Get a specific prompt
prompt = client.prompts.get("my-prompt")

# Run a prompt template
result = client.prompts.run(
    "my-prompt",
    variables={"name": "World"}
)

Spans API

from tracia import Eval, EvaluateOptions

# List spans
spans = client.spans.list()

# Evaluate a span
client.spans.evaluate(
    "sp_xxx",
    EvaluateOptions(
        evaluator="quality",
        value=Eval.POSITIVE,  # or Eval.NEGATIVE
        note="Great response!",
    ),
)

Async Support

All methods have async variants:

import asyncio

async def main():
    async with Tracia(api_key="...") as client:
        result = await client.arun_local(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Hello!"}]
        )
        print(result.text)

asyncio.run(main())

Supported Providers

Via LiteLLM, Tracia supports 100+ providers including:

  • OpenAI: gpt-4o, gpt-4, gpt-3.5-turbo, o1, o3
  • Anthropic: claude-3-opus, claude-sonnet-4, claude-3-haiku
  • Google: gemini-2.0-flash, gemini-2.5-pro
  • And many more...

Error Handling

from tracia import TraciaError, TraciaErrorCode

try:
    result = client.run_local(...)
except TraciaError as e:
    if e.code == TraciaErrorCode.MISSING_PROVIDER_API_KEY:
        print("Please set your API key")
    elif e.code == TraciaErrorCode.PROVIDER_ERROR:
        print(f"LLM error: {e.message}")
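Transient provider failures are often worth retrying. A hedged retry sketch with exponential backoff: the `call` parameter stands in for something like `lambda: client.run_local(...)`, and `TransientError` is a placeholder for whichever `TraciaErrorCode` values you decide are retryable (the SDK does not define this class):

```python
import time

class TransientError(Exception):
    """Placeholder for retryable failures (e.g. provider timeouts)."""

def with_retries(call, attempts: int = 3, base_delay: float = 0.5):
    """Retry `call` with exponential backoff on transient errors."""
    for attempt in range(attempts):
        try:
            return call()
        except TransientError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```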

Configuration Options

client = Tracia(
    api_key="...",
    base_url="https://app.tracia.io",  # Custom API URL
    on_span_error=lambda e, span_id: print(f"Span error: {e}")
)

result = client.run_local(
    model="gpt-4o",
    messages=[...],
    temperature=0.7,
    max_output_tokens=1000,
    timeout_ms=30000,
    tags=["production"],
    user_id="user_123",
    session_id="session_456",
    send_trace=True,  # Set to False to disable tracing
)

License

MIT

Download files

  • Source distribution: tracia-0.3.1.tar.gz (33.3 kB)
  • Built distribution: tracia-0.3.1-py3-none-any.whl (31.8 kB)

File details: tracia-0.3.1.tar.gz

  • Size: 33.3 kB
  • Uploaded via: twine/6.2.0 CPython/3.9.13
  • SHA256: d00c2e167113d76c47e9700181e2dd2678e65b89fc4c3a216e02653b25aeb464
  • MD5: 026b5d01c0a4919b41139ebbecb10cb3
  • BLAKE2b-256: 0b72c07e08185a844f76814c9157b44cf049c134f6a0ec622cbe59dd59804209

File details: tracia-0.3.1-py3-none-any.whl

  • Size: 31.8 kB
  • Uploaded via: twine/6.2.0 CPython/3.9.13
  • SHA256: f97ec0f8999d330a1a439460f34ce2626a5af1e933642c7226c3ec3ece27ab2e
  • MD5: 38b25885c936003a8d060ca26d58d063
  • BLAKE2b-256: 9e5ef47a073ba94ab1cf9ca7492e334d35e87cd68977193db83c3e162e27e205
