
Langtrace Python SDK

Open Source & OpenTelemetry (OTEL) Observability for LLM Applications




Langtrace is open-source observability software that lets you capture, debug, and analyze traces and metrics from any application that leverages LLM APIs, vector databases, or LLM-based frameworks.

✨ Features

  • 📊 OpenTelemetry Support: Built on OTEL standards for comprehensive tracing
  • 🔄 Real-time Monitoring: Track LLM API calls, vector operations, and framework usage
  • 🎯 Performance Insights: Analyze latency, costs, and usage patterns
  • 🔍 Debug Tools: Trace and debug your LLM application workflows
  • 📈 Analytics: Get detailed metrics and visualizations
  • 🛠️ Framework Support: Extensive integration with popular LLM frameworks
  • 🔌 Vector DB Integration: Support for major vector databases
  • 🎨 Flexible Configuration: Customizable tracing and monitoring options

🚀 Quick Start

pip install langtrace-python-sdk

from langtrace_python_sdk import langtrace
langtrace.init(api_key='<your_api_key>')  # Get your API key at langtrace.ai

🔗 Supported Integrations

Langtrace automatically captures traces from the following vendors:

LLM Providers

| Provider | TypeScript SDK | Python SDK |
| --- | --- | --- |
| OpenAI | ✅ | ✅ |
| Anthropic | ✅ | ✅ |
| Azure OpenAI | ✅ | ✅ |
| Cohere | ✅ | ✅ |
| Groq | ✅ | ✅ |
| Perplexity | ✅ | ✅ |
| Gemini | ❌ | ✅ |
| Mistral | ❌ | ✅ |
| AWS Bedrock | ✅ | ✅ |
| Ollama | ❌ | ✅ |
| Cerebras | ❌ | ✅ |

Frameworks

| Framework | TypeScript SDK | Python SDK |
| --- | --- | --- |
| Langchain | ❌ | ✅ |
| LlamaIndex | ✅ | ✅ |
| Langgraph | ❌ | ✅ |
| LiteLLM | ❌ | ✅ |
| DSPy | ❌ | ✅ |
| CrewAI | ❌ | ✅ |
| VertexAI | ✅ | ✅ |
| EmbedChain | ❌ | ✅ |
| Autogen | ❌ | ✅ |
| HiveAgent | ❌ | ✅ |
| Inspect AI | ❌ | ✅ |

Vector Databases

| Database | TypeScript SDK | Python SDK |
| --- | --- | --- |
| Pinecone | ✅ | ✅ |
| ChromaDB | ✅ | ✅ |
| QDrant | ✅ | ✅ |
| Weaviate | ✅ | ✅ |
| PGVector | ✅ | ✅ (SQLAlchemy) |
| MongoDB | ❌ | ✅ |
| Milvus | ❌ | ✅ |

🌐 Getting Started

Langtrace Cloud ☁️

  1. Sign up by going to this link.
  2. Create a new project after signing up. Projects are containers for the traces and metrics your application generates; if you have only one application, one project is enough.
  3. Generate an API key from inside the project.
  4. In your application, install the Langtrace SDK and initialize it with the API key you generated in step 3.
  5. The code for installing and setting up the SDK is shown below.

Framework Quick Starts

FastAPI

from fastapi import FastAPI
from langtrace_python_sdk import langtrace
from openai import OpenAI

langtrace.init()
app = FastAPI()
client = OpenAI()

@app.get("/")
def root():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test"}],
        stream=False,
    )
    return {"response": response.choices[0].message.content}

Django

# settings.py
from langtrace_python_sdk import langtrace
langtrace.init()

# views.py
from django.http import JsonResponse
from openai import OpenAI

client = OpenAI()

def chat_view(request):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": request.GET.get('message', '')}]
    )
    return JsonResponse({"response": response.choices[0].message.content})

Flask

from flask import Flask
from langtrace_python_sdk import langtrace
from openai import OpenAI

app = Flask(__name__)
langtrace.init()
client = OpenAI()

@app.route('/')
def chat():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    return {"response": response.choices[0].message.content}

LangChain

from langtrace_python_sdk import langtrace
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

langtrace.init()

# LangChain operations are automatically traced
chat = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])
chain = prompt | chat
response = chain.invoke({"input": "Hello!"})

LlamaIndex

from langtrace_python_sdk import langtrace
from llama_index import VectorStoreIndex, SimpleDirectoryReader

langtrace.init()

# Document loading and indexing are automatically traced
documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)

# Queries are traced with metadata
query_engine = index.as_query_engine()
response = query_engine.query("What's in the documents?")

DSPy

from langtrace_python_sdk import langtrace
import dspy
from dspy.teleprompt import BootstrapFewShot

langtrace.init()

# DSPy operations are automatically traced
lm = dspy.OpenAI(model="gpt-4")
dspy.settings.configure(lm=lm)

class SimpleQA(dspy.Signature):
    """Answer questions with short responses."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="short answer")

# BootstrapFewShot compiles a module (not a bare signature) against a
# small trainset, using a metric such as exact match on the answer field
trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
]
compiler = BootstrapFewShot(metric=dspy.evaluate.answer_exact_match)
program = compiler.compile(dspy.Predict(SimpleQA), trainset=trainset)

CrewAI

from langtrace_python_sdk import langtrace
from crewai import Agent, Task, Crew

langtrace.init()

# Agents and tasks are automatically traced
researcher = Agent(
    role="Researcher",
    goal="Research and analyze data",
    backstory="Expert data researcher",
    allow_delegation=False
)

task = Task(
    description="Analyze market trends",
    agent=researcher
)

crew = Crew(
    agents=[researcher],
    tasks=[task]
)

result = crew.kickoff()

For more detailed examples and framework-specific features, visit our documentation.

⚙️ Configuration

Initialize Options

The SDK can be initialized with various configuration options to customize its behavior:

langtrace.init(
    api_key: Optional[str] = None,          # API key for authentication
    batch: bool = True,                     # Enable/disable batch processing
    write_spans_to_console: bool = False,   # Console logging
    custom_remote_exporter: Optional[Any] = None,  # Custom exporter
    api_host: Optional[str] = None,         # Custom API host
    disable_instrumentations: Optional[Dict] = None,  # Disable specific integrations
    service_name: Optional[str] = None,     # Custom service name
    disable_logging: bool = False,          # Disable all logging
    headers: Dict[str, str] = {},           # Custom headers
)

Configuration Details

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | str | LANGTRACE_API_KEY or None | API key for authentication; can also be set via environment variable |
| batch | bool | True | Batch spans before sending them, to reduce API calls |
| write_spans_to_console | bool | False | Enable console logging for debugging purposes |
| custom_remote_exporter | Optional[Exporter] | None | Custom exporter for sending traces to your own backend |
| api_host | Optional[str] | https://langtrace.ai/ | Custom API endpoint for self-hosted deployments |
| disable_instrumentations | Optional[Dict] | None | Disable specific vendor instrumentations (e.g., {'only': ['openai']}) |
| service_name | Optional[str] | None | Custom service name for trace identification |
| disable_logging | bool | False | Disable SDK logging completely |
| headers | Dict[str, str] | {} | Custom headers for API requests |
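For example, to disable tracing for just the OpenAI integration while leaving every other instrumentation active, following the option shape shown in the table (a configuration sketch; adjust to your deployment):

```python
from langtrace_python_sdk import langtrace

# Disable only the OpenAI instrumentation; all other
# integrations continue to be traced.
langtrace.init(
    api_key="<your_api_key>",
    disable_instrumentations={"only": ["openai"]},
)
```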

Environment Variables

Configure Langtrace behavior using these environment variables:

| Variable | Description | Default | Impact |
| --- | --- | --- | --- |
| LANGTRACE_API_KEY | Primary authentication method | (none) | Required unless passed to init() |
| TRACE_PROMPT_COMPLETION_DATA | Control prompt/completion tracing | true | Set to 'false' to opt out of prompt/completion data collection |
| TRACE_DSPY_CHECKPOINT | Control DSPy checkpoint tracing | true | Set to 'false' to disable checkpoint tracing |
| LANGTRACE_ERROR_REPORTING | Control error reporting | true | Set to 'false' to disable Sentry error reporting |
| LANGTRACE_API_HOST | Custom API endpoint | https://langtrace.ai/ | Override the default endpoint for self-hosted deployments |

Performance Note: Setting TRACE_DSPY_CHECKPOINT=false is recommended in production environments as checkpoint tracing involves state serialization which can impact latency.

Security Note: When TRACE_PROMPT_COMPLETION_DATA=false, no prompt or completion data will be collected, ensuring sensitive information remains private.
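Environment variables arrive as strings, so the opt-outs above compare against the literal 'false'. A hypothetical helper (env_flag is illustrative, not part of the SDK) shows the convention:

```python
import os

def env_flag(name: str, default: bool = True) -> bool:
    # Hypothetical helper: env vars are strings, so "false"
    # (any casing) switches a feature off; unset falls back to default.
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() != "false"

os.environ["TRACE_PROMPT_COMPLETION_DATA"] = "false"
print(env_flag("TRACE_PROMPT_COMPLETION_DATA"))  # False
```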

🔧 Advanced Features

Root Span Decorator

Use the root span decorator to create custom trace hierarchies:

from langtrace_python_sdk import with_langtrace_root_span

@with_langtrace_root_span("custom_operation")
def my_function():
    # Your code here
    pass

Additional Attributes

Inject custom attributes into your traces:

from langtrace_python_sdk import with_additional_attributes, inject_additional_attributes

# Using the decorator
@with_additional_attributes({"custom_key": "custom_value"})
def my_function():
    pass

# Using the context manager
with inject_additional_attributes({"custom_key": "custom_value"}):
    # Your code here
    pass

Prompt Registry

Fetch prompts from the Langtrace prompt registry for better traceability:

from langtrace_python_sdk import get_prompt_from_registry

# Fetch a versioned prompt template from the prompt registry
# (the registry ID comes from the Langtrace dashboard)
prompt = get_prompt_from_registry("<prompt_registry_id>")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt["value"]}]
)

User Feedback System

Collect and analyze user feedback:

from langtrace_python_sdk import SendUserFeedback

# Attach a user score to a specific span/trace
SendUserFeedback().evaluate(data={
    "spanId": "<span_id>",
    "traceId": "<trace_id>",
    "userScore": 1,       # e.g. 1 for thumbs-up, -1 for thumbs-down
    "userId": "123",
})

DSPy Checkpointing

Manage DSPy checkpoints for workflow tracking:

import os
from langtrace_python_sdk import langtrace

# Checkpoint tracing is controlled by the TRACE_DSPY_CHECKPOINT
# environment variable (on by default; set to "false" to disable)
os.environ["TRACE_DSPY_CHECKPOINT"] = "true"

langtrace.init(api_key="your_api_key")

Vector Database Operations

Track vector database operations:

from langtrace_python_sdk import inject_additional_attributes

# Vector operations are automatically traced; attach extra context
with inject_additional_attributes({"operation_type": "similarity_search"}):
    results = vector_db.similarity_search("query", k=5)

For more detailed examples and use cases, visit our documentation.

📁 Examples

🏠 Langtrace Self Hosted

Get started with self-hosted Langtrace:

from langtrace_python_sdk import langtrace
langtrace.init(write_spans_to_console=True)  # For console logging
# OR
langtrace.init(custom_remote_exporter=<your_exporter>, batch=<True or False>)  # For custom exporter
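custom_remote_exporter accepts an exporter object you supply. A rough stdlib sketch of the shape such an object takes (InMemoryExporter is hypothetical; a production exporter would subclass OpenTelemetry's SpanExporter and send spans to your backend):

```python
import json

class InMemoryExporter:
    """Hypothetical exporter that buffers finished spans instead of
    sending them over the network (useful for tests and debugging)."""

    def __init__(self):
        self.spans = []

    def export(self, spans):
        # Collect spans; a remote exporter would serialize and POST here.
        self.spans.extend(spans)
        return "SUCCESS"

    def shutdown(self):
        self.spans.clear()

exporter = InMemoryExporter()
exporter.export([{"name": "chat.completion", "duration_ms": 120}])
print(json.dumps(exporter.spans))
```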

🤝 Contributing

We welcome contributions! To get started:

  1. Fork this repository and start developing
  2. Join our Discord workspace
  3. Run examples:
    # In run_example.py, set ENABLED_EXAMPLES flag to True for desired example
    python src/run_example.py
    
  4. Run tests:
    pip install '.[test]' && pip install '.[dev]'
    pytest -v
    

🔒 Security

To report security vulnerabilities, email us at security@scale3labs.com. You can read more on security here.

โ“ Frequently Asked Questions

📜 License

Langtrace Python SDK is licensed under the Apache 2.0 License. You can read about this license here.
