
Tracia

LLM prompt management and tracing SDK for Python

Python 3.10+ · License: MIT

What is Tracia?

Tracia is a modern LLM prompt management and tracing platform. This Python SDK provides:

  • Unified LLM Access - Call OpenAI, Anthropic, Google, and 100+ providers through a single interface (powered by LiteLLM)
  • Automatic Tracing - Every LLM call is automatically traced with latency, token usage, and cost
  • Prompt Management - Store, version, and manage your prompts in the cloud
  • Session Linking - Easily link related calls for multi-turn conversations

Installation

pip install tracia

You'll also need API keys for the LLM providers you want to use:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."

Quick Start

from tracia import Tracia

# Initialize the client
client = Tracia(api_key="your_tracia_api_key")

# Run a local prompt
result = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(result.text)
print(f"Tokens: {result.usage.total_tokens}")

Streaming

# Stream the response
stream = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    print(chunk, end="", flush=True)

# Get the final result (stream.result is a Future[StreamResult])
final = stream.result.result()
print(f"\nTotal tokens: {final.usage.total_tokens}")

Multi-turn Conversations with Sessions

# Create a session for linked conversations
session = client.create_session()

# First message
r1 = session.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "My name is Alice"}]
)

# Follow-up - automatically linked to the same trace
r2 = session.run_local(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "My name is Alice"},
        {"role": "assistant", "content": r1.text},
        {"role": "user", "content": "What's my name?"}
    ]
)
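Since each follow-up call re-sends the prior turns, the history bookkeeping can be factored into a tiny helper. This is an illustrative convenience function, not part of the SDK:

```python
def extend_history(history, assistant_text, user_text):
    """Return a new message list with the assistant's last reply and the
    next user turn appended, leaving the original history untouched.

    Mirrors the pattern above where prior turns are re-sent on each call.
    """
    return history + [
        {"role": "assistant", "content": assistant_text},
        {"role": "user", "content": user_text},
    ]
```

With the session example above, the second call's `messages` could be built as `extend_history([{"role": "user", "content": "My name is Alice"}], r1.text, "What's my name?")`.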

Function Calling

from tracia import ToolDefinition, ToolParameters, JsonSchemaProperty

# Define a tool
tools = [
    ToolDefinition(
        name="get_weather",
        description="Get the current weather",
        parameters=ToolParameters(
            properties={
                "location": JsonSchemaProperty(
                    type="string",
                    description="City name"
                )
            },
            required=["location"]
        )
    )
]

result = client.run_local(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools
)

if result.tool_calls:
    for call in result.tool_calls:
        print(f"Tool: {call.name}, Args: {call.arguments}")

Variable Interpolation

result = client.run_local(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant named {{name}}."},
        {"role": "user", "content": "Hello!"}
    ],
    variables={"name": "Claude"}
)
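The double-brace syntax behaves like simple placeholder substitution. A rough local approximation of the effect (the SDK's exact rules for escaping and missing keys may differ):

```python
import re

def interpolate(template, variables):
    """Replace {{name}} placeholders with values from `variables`.

    Placeholders with no matching variable are left untouched here;
    the SDK's real behavior may differ.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )
```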

Prompts API

# List all prompts
prompts = client.prompts.list()

# Get a specific prompt
prompt = client.prompts.get("my-prompt")

# Run a prompt template
result = client.prompts.run(
    "my-prompt",
    variables={"name": "World"}
)

Spans API

from tracia import Eval, EvaluateOptions

# List spans
spans = client.spans.list()

# Evaluate a span
client.spans.evaluate(
    "sp_xxx",
    EvaluateOptions(
        evaluator="quality",
        value=Eval.POSITIVE,  # or Eval.NEGATIVE
        note="Great response!",
    ),
)

Async Support

All methods have async variants:

import asyncio

async def main():
    async with Tracia(api_key="...") as client:
        result = await client.arun_local(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Hello!"}]
        )
        print(result.text)

asyncio.run(main())
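Because the async variants return awaitables, independent calls can run concurrently with `asyncio.gather`. The pattern is shown here with a generic helper so it stands alone:

```python
import asyncio

async def run_all(coros):
    """Run several awaitables concurrently, preserving input order in the results."""
    return await asyncio.gather(*coros)

# With a real client, each item would be something like:
# client.arun_local(model="gpt-4o", messages=[{"role": "user", "content": q}])
```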

Supported Providers

Via LiteLLM, Tracia supports 100+ providers including:

  • OpenAI: gpt-4o, gpt-4, gpt-3.5-turbo, o1, o3
  • Anthropic: claude-3-opus, claude-sonnet-4, claude-3-haiku
  • Google: gemini-2.0-flash, gemini-2.5-pro
  • And many more...

Error Handling

from tracia import TraciaError, TraciaErrorCode

try:
    result = client.run_local(...)
except TraciaError as e:
    if e.code == TraciaErrorCode.MISSING_PROVIDER_API_KEY:
        print("Please set your API key")
    elif e.code == TraciaErrorCode.PROVIDER_ERROR:
        print(f"LLM error: {e.message}")

Configuration Options

client = Tracia(
    api_key="...",
    base_url="https://app.tracia.io",  # Custom API URL
    on_span_error=lambda e, span_id: print(f"Span error: {e}")
)

result = client.run_local(
    model="gpt-4o",
    messages=[...],
    temperature=0.7,
    max_output_tokens=1000,
    timeout_ms=30000,
    tags=["production"],
    user_id="user_123",
    session_id="session_456",
    send_trace=True,  # Set to False to disable tracing
)

License

MIT

Download files

Download the file for your platform.

Source Distribution

tracia-0.1.0.tar.gz (30.0 kB)


Built Distribution


tracia-0.1.0-py3-none-any.whl (28.5 kB)


File details

Details for the file tracia-0.1.0.tar.gz.

File metadata

  • Download URL: tracia-0.1.0.tar.gz
  • Size: 30.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.13

File hashes

Hashes for tracia-0.1.0.tar.gz

  • SHA256: 1bdaa852d1c7b3dfa6b0a17ff4ca0528efe8c22c4a4f37a3a44d65112fa89187
  • MD5: 74ee2f1c294b15fadc3709606994b1cd
  • BLAKE2b-256: bb56ed15051dfbc8e94acaec01a7a7e9a7909b7bcb6874adbb22f07c9105e5b3


File details

Details for the file tracia-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: tracia-0.1.0-py3-none-any.whl
  • Size: 28.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.13

File hashes

Hashes for tracia-0.1.0-py3-none-any.whl

  • SHA256: f061116746eb3fd8b17b825a380150515b8ef6e807c8935a0b30a212dbd91867
  • MD5: afa69cc0083ed3c73f013a3181816456
  • BLAKE2b-256: dd494ab64cb593363b6f1d9a7f219c9a6e2af0f03c230bdd7c612a2a83d5e6c1

