

Project description

Tracia

LLM prompt management and tracing SDK for Python

Python 3.10+ · License: MIT

What is Tracia?

Tracia is a modern LLM prompt management and tracing platform. This Python SDK provides:

  • Unified LLM Access - Call OpenAI, Anthropic, Google, and 100+ providers through a single interface (powered by LiteLLM)
  • Automatic Tracing - Every LLM call is automatically traced with latency, token usage, and cost
  • Prompt Management - Store, version, and manage your prompts in the cloud
  • Session Linking - Easily link related calls for multi-turn conversations

Installation

pip install tracia

You'll also need API keys for the LLM providers you want to use:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
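
Before constructing the client, a quick preflight check can surface missing keys early. This is plain Python, not part of the SDK:

```python
import os

def missing_provider_keys(required):
    """Return the names of required provider keys absent from the environment."""
    return [name for name in required if not os.environ.get(name)]

print(missing_provider_keys(("OPENAI_API_KEY", "ANTHROPIC_API_KEY")))
```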

Quick Start

from tracia import Tracia

# Initialize the client
client = Tracia(api_key="your_tracia_api_key")

# Run a local prompt
result = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(result.text)
print(f"Tokens: {result.usage.total_tokens}")

Streaming

# Stream the response
stream = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    print(chunk, end="", flush=True)

# Get the final result (stream.result is a Future[StreamResult])
final = stream.result.result()
print(f"\nTotal tokens: {final.usage.total_tokens}")

Multi-turn Conversations with Sessions

# Create a session for linked conversations
session = client.create_session()

# First message
r1 = session.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "My name is Alice"}]
)

# Follow-up - automatically linked to the same trace
r2 = session.run_local(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "My name is Alice"},
        {"role": "assistant", "content": r1.text},
        {"role": "user", "content": "What's my name?"}
    ]
)
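
Rebuilding the message list by hand gets repetitive as a conversation grows. A tiny accumulator helper makes the pattern explicit (illustrative only; the Tracia SDK does not ship this class):

```python
class Conversation:
    """Accumulates user/assistant turns into a messages list for run_local."""

    def __init__(self):
        self.messages = []

    def user(self, content):
        self.messages.append({"role": "user", "content": content})
        return self.messages

    def assistant(self, content):
        self.messages.append({"role": "assistant", "content": content})
        return self.messages

convo = Conversation()
convo.user("My name is Alice")
convo.assistant("Nice to meet you, Alice!")
convo.user("What's my name?")
```

Each call to session.run_local then takes convo.user(...) as its messages, and the reply's text is fed back with convo.assistant(r.text).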

Function Calling

from tracia import ToolDefinition, ToolParameters, JsonSchemaProperty

# Define a tool
tools = [
    ToolDefinition(
        name="get_weather",
        description="Get the current weather",
        parameters=ToolParameters(
            properties={
                "location": JsonSchemaProperty(
                    type="string",
                    description="City name"
                )
            },
            required=["location"]
        )
    )
]

result = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools
)

if result.tool_calls:
    for call in result.tool_calls:
        print(f"Tool: {call.name}, Args: {call.arguments}")
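
Once the model returns tool calls, your code has to route them to real functions. A minimal dispatch sketch follows; the handler table and the `get_weather` implementation are hypothetical, and `arguments` is handled as either a dict or a JSON string since providers differ on this:

```python
import json

def get_weather(location):
    # Stand-in implementation for illustration.
    return f"Sunny in {location}"

HANDLERS = {"get_weather": get_weather}

def dispatch(name, arguments):
    """Route a model-issued tool call to a local handler."""
    if isinstance(arguments, str):
        arguments = json.loads(arguments)
    return HANDLERS[name](**arguments)

print(dispatch("get_weather", '{"location": "Paris"}'))  # → Sunny in Paris
```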

Variable Interpolation

result = client.run_local(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant named {{name}}."},
        {"role": "user", "content": "Hello!"}
    ],
    variables={"name": "Claude"}
)
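
The {{name}} placeholders behave like simple template substitution. As a rough sketch of what such interpolation amounts to (not the SDK's actual implementation), unknown keys left intact:

```python
import re

def interpolate(text, variables):
    """Replace {{key}} placeholders with values; unknown keys are left as-is."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        text,
    )

print(interpolate("You are a helpful assistant named {{name}}.", {"name": "Claude"}))
# → You are a helpful assistant named Claude.
```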

Prompts API

# List all prompts
prompts = client.prompts.list()

# Get a specific prompt
prompt = client.prompts.get("my-prompt")

# Run a prompt template
result = client.prompts.run(
    "my-prompt",
    variables={"name": "World"}
)

Spans API

from tracia import Eval, EvaluateOptions

# List spans
spans = client.spans.list()

# Evaluate a span
client.spans.evaluate(
    "sp_xxx",
    EvaluateOptions(
        evaluator="quality",
        value=Eval.POSITIVE,  # or Eval.NEGATIVE
        note="Great response!",
    ),
)

Async Support

All methods have async variants:

import asyncio

async def main():
    async with Tracia(api_key="...") as client:
        result = await client.arun_local(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Hello!"}]
        )
        print(result.text)

asyncio.run(main())
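
Because every method has an async variant, independent prompts can be fanned out concurrently with asyncio.gather. Here a placeholder coroutine stands in for client.arun_local so the pattern runs without API keys:

```python
import asyncio

async def fake_arun_local(model, messages):
    """Placeholder for client.arun_local, so the fan-out runs standalone."""
    await asyncio.sleep(0)
    return f"reply to: {messages[-1]['content']}"

async def main():
    prompts = ["Hello!", "Summarize AI in one line", "Name three colors"]
    # gather preserves input order, so results[i] matches prompts[i]
    return await asyncio.gather(
        *(fake_arun_local("gpt-4o", [{"role": "user", "content": p}]) for p in prompts)
    )

print(asyncio.run(main()))
```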

Supported Providers

Via LiteLLM, Tracia supports 100+ providers including:

  • OpenAI: gpt-4o, gpt-4, gpt-3.5-turbo, o1, o3
  • Anthropic: claude-3-opus, claude-sonnet-4, claude-3-haiku
  • Google: gemini-2.0-flash, gemini-2.5-pro
  • And many more...

Error Handling

from tracia import TraciaError, TraciaErrorCode

try:
    result = client.run_local(...)
except TraciaError as e:
    if e.code == TraciaErrorCode.MISSING_PROVIDER_API_KEY:
        print("Please set your API key")
    elif e.code == TraciaErrorCode.PROVIDER_ERROR:
        print(f"LLM error: {e.message}")

Configuration Options

client = Tracia(
    api_key="...",
    base_url="https://app.tracia.io",  # Custom API URL
    on_span_error=lambda e, span_id: print(f"Span error: {e}")
)

result = client.run_local(
    model="gpt-4o",
    messages=[...],
    temperature=0.7,
    max_output_tokens=1000,
    timeout_ms=30000,
    tags=["production"],
    user_id="user_123",
    session_id="session_456",
    send_trace=True,  # Set to False to disable tracing
)

License

MIT

Download files

Download the file for your platform.

Source Distribution

tracia-0.3.0.tar.gz (33.1 kB)


Built Distribution


tracia-0.3.0-py3-none-any.whl (31.6 kB)


File details

Details for the file tracia-0.3.0.tar.gz.

File metadata

  • Download URL: tracia-0.3.0.tar.gz
  • Size: 33.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.13

File hashes

Hashes for tracia-0.3.0.tar.gz:

  • SHA256: 671f6c0473bb8e8b5ce19edb2054a53c4178cfa0276d61186c53b48f0abadcfe
  • MD5: 8bcf7b35a005f86a18aaff7fffb842a4
  • BLAKE2b-256: c5771cbef4c97e0e4f8ebc8c46dd754807fdbdeb20450966f8355b73398563fe


File details

Details for the file tracia-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: tracia-0.3.0-py3-none-any.whl
  • Size: 31.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.13

File hashes

Hashes for tracia-0.3.0-py3-none-any.whl:

  • SHA256: d28754b58b337de911ecf39db752a14b4faf4f0db8ab47eba2f4dc37a3419792
  • MD5: 5c57b91ed644ee3ae2677d676e78eb67
  • BLAKE2b-256: 9b3083fca703361e1b58ee6a055ef290bec8d7c04f4789d190e11ba081406473

