Tracia

LLM prompt management and tracing SDK for Python

Requires Python 3.10+ · MIT License

What is Tracia?

Tracia is a modern LLM prompt management and tracing platform. This Python SDK provides:

  • Unified LLM Access - Call OpenAI, Anthropic, Google, and 100+ providers through a single interface (powered by LiteLLM)
  • Automatic Tracing - Every LLM call is automatically traced with latency, token usage, and cost
  • Prompt Management - Store, version, and manage your prompts in the cloud
  • Session Linking - Easily link related calls for multi-turn conversations

Installation

pip install tracia

You'll also need API keys for the LLM providers you want to use:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."

Quick Start

from tracia import Tracia

# Initialize the client
client = Tracia(api_key="your_tracia_api_key")

# Run a local prompt
result = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(result.text)
print(f"Tokens: {result.usage.total_tokens}")

Streaming

# Stream the response
stream = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    print(chunk, end="", flush=True)

# Get the final result (stream.result is a Future[StreamResult])
final = stream.result.result()
print(f"\nTotal tokens: {final.usage.total_tokens}")

Multi-turn Conversations with Sessions

# Create a session for linked conversations
session = client.create_session()

# First message
r1 = session.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "My name is Alice"}]
)

# Follow-up - automatically linked to the same trace
r2 = session.run_local(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "My name is Alice"},
        {"role": "assistant", "content": r1.text},
        {"role": "user", "content": "What's my name?"}
    ]
)
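
Rebuilding the messages list by hand gets tedious past a couple of turns. A small history-keeping helper makes the pattern clearer; this is a sketch, not part of the SDK — `run` stands in for `session.run_local`, and only `model=`, `messages=`, and `result.text` from the examples above are assumed:

```python
class Conversation:
    """Toy conversation wrapper that accumulates turn history.

    `run` is any callable with the run_local signature, e.g. session.run_local.
    """

    def __init__(self, run, model="gpt-4o"):
        self.run = run
        self.model = model
        self.messages = []  # full history, grows one user + one assistant turn per call

    def say(self, text):
        # Append the user turn, send the whole history, then record the reply.
        self.messages.append({"role": "user", "content": text})
        result = self.run(model=self.model, messages=self.messages)
        self.messages.append({"role": "assistant", "content": result.text})
        return result.text
```

Usage would then be `conv = Conversation(session.run_local)` followed by `conv.say("My name is Alice")` and `conv.say("What's my name?")`, with the history carried along automatically.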

Function Calling

from tracia import ToolDefinition, ToolParameters, JsonSchemaProperty

# Define a tool
tools = [
    ToolDefinition(
        name="get_weather",
        description="Get the current weather",
        parameters=ToolParameters(
            properties={
                "location": JsonSchemaProperty(
                    type="string",
                    description="City name"
                )
            },
            required=["location"]
        )
    )
]

result = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools
)

if result.tool_calls:
    for call in result.tool_calls:
        print(f"Tool: {call.name}, Args: {call.arguments}")
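  
The example above detects tool calls but doesn't act on them. A common next step is to dispatch each call to a local handler and collect the results to send back to the model. The sketch below is an assumption about shape, not the SDK's API: `call.name` and `call.arguments` come from the example above, but the handler registry and the `{"role": "tool", ...}` result format are illustrative — check Tracia's docs for the exact message shape it expects on the follow-up call:

```python
import json


def get_weather(location):
    # Placeholder implementation; a real handler would hit a weather API.
    return f"Sunny in {location}"


# Map tool names (as declared in ToolDefinition) to local functions.
HANDLERS = {"get_weather": get_weather}


def dispatch(tool_calls):
    """Run each requested tool locally and collect result messages."""
    results = []
    for call in tool_calls:
        # Arguments may arrive as a dict or a JSON string depending on the provider.
        args = call.arguments if isinstance(call.arguments, dict) else json.loads(call.arguments)
        output = HANDLERS[call.name](**args)
        results.append({"role": "tool", "name": call.name, "content": output})
    return results
```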

Variable Interpolation

result = client.run_local(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant named {{name}}."},
        {"role": "user", "content": "Hello!"}
    ],
    variables={"name": "Claude"}
)
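
The substitution rule for `{{name}}` can be illustrated with a toy interpolator. The SDK performs this for you; this sketch only shows what the syntax means (the exact behavior for unknown variables is an assumption — here they are left untouched):

```python
import re


def interpolate(template, variables):
    """Replace each {{var}} with variables[var]; leave unknown placeholders as-is."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )
```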

Prompts API

# List all prompts
prompts = client.prompts.list()

# Get a specific prompt
prompt = client.prompts.get("my-prompt")

# Run a prompt template
result = client.prompts.run(
    "my-prompt",
    variables={"name": "World"}
)

Spans API

from tracia import Eval, EvaluateOptions

# List spans
spans = client.spans.list()

# Evaluate a span
client.spans.evaluate(
    "sp_xxx",
    EvaluateOptions(
        evaluator="quality",
        value=Eval.POSITIVE,  # or Eval.NEGATIVE
        note="Great response!",
    ),
)

Async Support

All methods have async variants:

import asyncio

async def main():
    async with Tracia(api_key="...") as client:
        result = await client.arun_local(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Hello!"}]
        )
        print(result.text)

asyncio.run(main())

Supported Providers

Via LiteLLM, Tracia supports 100+ providers including:

  • OpenAI: gpt-4o, gpt-4, gpt-3.5-turbo, o1, o3
  • Anthropic: claude-3-opus, claude-sonnet-4, claude-3-haiku
  • Google: gemini-2.0-flash, gemini-2.5-pro
  • And many more...
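
Because routing goes through LiteLLM, switching providers is just a different model string in the same `run_local` call. A small sketch of comparing answers across providers (the helper is illustrative, not part of the SDK; `run` stands in for `client.run_local`, and each provider's API key must be set):

```python
def ask_each(run, models, question):
    """Ask the same question of several models; return text keyed by model name."""
    messages = [{"role": "user", "content": question}]
    return {m: run(model=m, messages=messages).text for m in models}


# Usage (assuming the relevant provider API keys are exported):
# answers = ask_each(
#     client.run_local,
#     ["gpt-4o", "claude-sonnet-4", "gemini-2.0-flash"],
#     "Summarize HTTP/2 in one sentence.",
# )
```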

Error Handling

from tracia import TraciaError, TraciaErrorCode

try:
    result = client.run_local(...)
except TraciaError as e:
    if e.code == TraciaErrorCode.MISSING_PROVIDER_API_KEY:
        print("Please set your API key")
    elif e.code == TraciaErrorCode.PROVIDER_ERROR:
        print(f"LLM error: {e.message}")
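
Provider errors are often transient, so calls are frequently wrapped in a retry. The sketch below is a generic pattern, not a Tracia feature; which `TraciaErrorCode` values are safe to retry is an assumption you should decide for your workload:

```python
import time


def with_retries(fn, attempts=3, base_delay=1.0, retryable=(Exception,), sleep=time.sleep):
    """Call fn(); on a retryable error, back off exponentially and try again."""
    for i in range(attempts):
        try:
            return fn()
        except retryable:
            if i == attempts - 1:
                raise  # out of attempts; surface the last error
            sleep(base_delay * (2 ** i))  # 1s, 2s, 4s, ...


# Usage:
# result = with_retries(
#     lambda: client.run_local(model="gpt-4o", messages=[{"role": "user", "content": "Hello!"}]),
#     retryable=(TraciaError,),
# )
```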

Configuration Options

client = Tracia(
    api_key="...",
    base_url="https://app.tracia.io",  # Custom API URL
    on_span_error=lambda e, span_id: print(f"Span error: {e}")
)

result = client.run_local(
    model="gpt-4o",
    messages=[...],
    temperature=0.7,
    max_output_tokens=1000,
    timeout_ms=30000,
    tags=["production"],
    user_id="user_123",
    session_id="session_456",
    send_trace=True,  # Set to False to disable tracing
)

License

MIT
