# Python SDK for LangTrace
## Table of Contents

- Features
- Quick Start
- Supported Integrations
- Getting Started
- Configuration
- Advanced Features
- Examples
- Langtrace Self Hosted
- Contributing
- Security
- Frequently Asked Questions
- License
Langtrace is open-source observability software that lets you capture, debug, and analyze traces and metrics from all of your applications that leverage LLM APIs, vector databases, and LLM-based frameworks.
## Features

- OpenTelemetry Support: Built on OTEL standards for comprehensive tracing
- Real-time Monitoring: Track LLM API calls, vector operations, and framework usage
- Performance Insights: Analyze latency, costs, and usage patterns
- Debug Tools: Trace and debug your LLM application workflows
- Analytics: Get detailed metrics and visualizations
- Framework Support: Extensive integration with popular LLM frameworks
- Vector DB Integration: Support for major vector databases
- Flexible Configuration: Customizable tracing and monitoring options
## Quick Start

```bash
pip install langtrace-python-sdk
```

```python
from langtrace_python_sdk import langtrace

langtrace.init(api_key='<your_api_key>')  # Get your API key at langtrace.ai
```
## Supported Integrations

Langtrace automatically captures traces from the following vendors:
### LLM Providers

| Provider | TypeScript SDK | Python SDK |
| --- | --- | --- |
| OpenAI | ✅ | ✅ |
| Anthropic | ✅ | ✅ |
| Azure OpenAI | ✅ | ✅ |
| Cohere | ✅ | ✅ |
| Groq | ✅ | ✅ |
| Perplexity | ✅ | ✅ |
| Gemini | ✅ | ✅ |
| Mistral | ✅ | ✅ |
| AWS Bedrock | ✅ | ✅ |
| Ollama | ✅ | ✅ |
| Cerebras | ✅ | ✅ |
### Frameworks

| Framework | TypeScript SDK | Python SDK |
| --- | --- | --- |
| Langchain | ✅ | ✅ |
| LlamaIndex | ✅ | ✅ |
| Langgraph | ✅ | ✅ |
| LiteLLM | ❌ | ✅ |
| DSPy | ❌ | ✅ |
| CrewAI | ❌ | ✅ |
| VertexAI | ✅ | ✅ |
| EmbedChain | ❌ | ✅ |
| Autogen | ❌ | ✅ |
| HiveAgent | ❌ | ✅ |
| Inspect AI | ❌ | ✅ |
### Vector Databases

| Database | TypeScript SDK | Python SDK |
| --- | --- | --- |
| Pinecone | ✅ | ✅ |
| ChromaDB | ✅ | ✅ |
| QDrant | ✅ | ✅ |
| Weaviate | ✅ | ✅ |
| PGVector | ❌ | ✅ (SQLAlchemy) |
| MongoDB | ✅ | ✅ |
| Milvus | ✅ | ✅ |
## Getting Started

### Langtrace Cloud

1. Sign up at langtrace.ai.
2. Create a new project. Projects are containers for the traces and metrics generated by your application; if you have only one application, one project is enough.
3. Generate an API key from inside the project.
4. In your application, install the Langtrace SDK and initialize it with the API key you generated in step 3, as shown below.
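The setup is the same two lines as the Quick Start:

```bash
pip install langtrace-python-sdk
```

```python
from langtrace_python_sdk import langtrace

# Initialize once, before your application makes any LLM calls
langtrace.init(api_key='<your_api_key>')
```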
### Framework Quick Starts

#### FastAPI

```python
from fastapi import FastAPI
from langtrace_python_sdk import langtrace
from openai import OpenAI

langtrace.init()
app = FastAPI()
client = OpenAI()

@app.get("/")
def root():
    # The OpenAI call below is traced automatically
    client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test"}],
        stream=False,
    )
    return {"Hello": "World"}
```
#### Django

```python
# settings.py
from langtrace_python_sdk import langtrace

langtrace.init()
```

```python
# views.py
from django.http import JsonResponse
from openai import OpenAI

client = OpenAI()

def chat_view(request):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": request.GET.get('message', '')}]
    )
    return JsonResponse({"response": response.choices[0].message.content})
```
#### Flask

```python
from flask import Flask
from langtrace_python_sdk import langtrace
from openai import OpenAI

app = Flask(__name__)
langtrace.init()
client = OpenAI()

@app.route('/')
def chat():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    return {"response": response.choices[0].message.content}
```
#### LangChain

```python
from langtrace_python_sdk import langtrace
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

langtrace.init()

# LangChain operations are automatically traced
chat = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])
chain = prompt | chat
response = chain.invoke({"input": "Hello!"})
```
#### LlamaIndex

```python
from langtrace_python_sdk import langtrace
from llama_index import VectorStoreIndex, SimpleDirectoryReader

langtrace.init()

# Document loading and indexing are automatically traced
documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)

# Queries are traced with metadata
query_engine = index.as_query_engine()
response = query_engine.query("What's in the documents?")
```
#### DSPy

```python
from langtrace_python_sdk import langtrace
import dspy
from dspy.evaluate import answer_exact_match
from dspy.teleprompt import BootstrapFewShot

langtrace.init()

# DSPy operations are automatically traced
lm = dspy.OpenAI(model="gpt-4")
dspy.settings.configure(lm=lm)

class SimpleQA(dspy.Signature):
    """Answer questions with short responses."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="short answer")

# A tiny trainset for few-shot compilation
trainset = [dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question")]

compiler = BootstrapFewShot(metric=answer_exact_match)
program = compiler.compile(dspy.Predict(SimpleQA), trainset=trainset)
```
#### CrewAI

```python
from langtrace_python_sdk import langtrace
from crewai import Agent, Task, Crew

langtrace.init()

# Agents and tasks are automatically traced
researcher = Agent(
    role="Researcher",
    goal="Research and analyze data",
    backstory="Expert data researcher",
    allow_delegation=False
)

task = Task(
    description="Analyze market trends",
    agent=researcher
)

crew = Crew(
    agents=[researcher],
    tasks=[task]
)

result = crew.kickoff()
```
For more detailed examples and framework-specific features, visit our documentation.
## Configuration

### Initialize Options

The SDK can be initialized with various configuration options to customize its behavior:

```python
langtrace.init(
    api_key: Optional[str] = None,                    # API key for authentication
    batch: bool = True,                               # Enable/disable batch processing
    write_spans_to_console: bool = False,             # Console logging
    custom_remote_exporter: Optional[Any] = None,     # Custom exporter
    api_host: Optional[str] = None,                   # Custom API host
    disable_instrumentations: Optional[Dict] = None,  # Disable specific integrations
    service_name: Optional[str] = None,               # Custom service name
    disable_logging: bool = False,                    # Disable all logging
    headers: Dict[str, str] = {},                     # Custom headers
)
```
### Configuration Details

| Parameter | Type | Default Value | Description |
| --- | --- | --- | --- |
| `api_key` | `str` | `LANGTRACE_API_KEY` or `None` | The API key for authentication. Can be set via environment variable |
| `batch` | `bool` | `True` | Whether to batch spans before sending them to reduce API calls |
| `write_spans_to_console` | `bool` | `False` | Enable console logging for debugging purposes |
| `custom_remote_exporter` | `Optional[Exporter]` | `None` | Custom exporter for sending traces to your own backend |
| `api_host` | `Optional[str]` | `https://langtrace.ai/` | Custom API endpoint for self-hosted deployments |
| `disable_instrumentations` | `Optional[Dict]` | `None` | Disable specific vendor instrumentations (e.g., `{'only': ['openai']}`) |
| `service_name` | `Optional[str]` | `None` | Custom service name for trace identification |
| `disable_logging` | `bool` | `False` | Disable SDK logging completely |
| `headers` | `Dict[str, str]` | `{}` | Custom headers for API requests |
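For example, to restrict tracing to a single vendor, pass `disable_instrumentations` in the `{'only': [...]}` form shown in the table above. A minimal sketch:

```python
from langtrace_python_sdk import langtrace

# Trace only the OpenAI integration; every other vendor
# instrumentation is disabled.
langtrace.init(
    api_key='<your_api_key>',
    disable_instrumentations={'only': ['openai']},
)
```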
### Environment Variables

Configure Langtrace behavior using these environment variables:

| Variable | Description | Default | Impact |
| --- | --- | --- | --- |
| `LANGTRACE_API_KEY` | Primary authentication method | Required | Required if not passed to `init()` |
| `TRACE_PROMPT_COMPLETION_DATA` | Control prompt/completion tracing | `true` | Set to `'false'` to opt out of prompt/completion data collection |
| `TRACE_DSPY_CHECKPOINT` | Control DSPy checkpoint tracing | `true` | Set to `'false'` to disable checkpoint tracing |
| `LANGTRACE_ERROR_REPORTING` | Control error reporting | `true` | Set to `'false'` to disable Sentry error reporting |
| `LANGTRACE_API_HOST` | Custom API endpoint | `https://langtrace.ai/` | Override default API endpoint for self-hosted deployments |
**Performance Note:** Setting `TRACE_DSPY_CHECKPOINT=false` is recommended in production environments, as checkpoint tracing involves state serialization that can impact latency.

**Security Note:** When `TRACE_PROMPT_COMPLETION_DATA=false`, no prompt or completion data will be collected, ensuring sensitive information remains private.
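As a quick illustration, these variables can also be set from Python before the SDK is initialized (assuming, as the table above describes, that they are read when `init()` runs):

```python
import os

# Set these before initializing the SDK so they take effect
os.environ["TRACE_PROMPT_COMPLETION_DATA"] = "false"  # keep prompt/completion data private
os.environ["TRACE_DSPY_CHECKPOINT"] = "false"         # recommended in production
os.environ["LANGTRACE_API_KEY"] = "<your_api_key>"

from langtrace_python_sdk import langtrace

langtrace.init()  # picks up LANGTRACE_API_KEY from the environment
```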
## Advanced Features

### Root Span Decorator

Use the root span decorator to create custom trace hierarchies:

```python
from langtrace_python_sdk import langtrace

@langtrace.with_langtrace_root_span(name="custom_operation")
def my_function():
    # Your code here
    pass
```
### Additional Attributes

Inject custom attributes into your traces:

```python
# Using a decorator
@langtrace.with_additional_attributes({"custom_key": "custom_value"})
def my_function():
    pass

# Using a context manager
with langtrace.inject_additional_attributes({"custom_key": "custom_value"}):
    # Your code here
    pass
```
### Prompt Registry

Register and manage prompts for better traceability:

```python
from langtrace_python_sdk import langtrace
from openai import OpenAI

client = OpenAI()

# Register a prompt template
langtrace.register_prompt("greeting", "Hello, {name}!")

# Use the registered prompt
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": langtrace.get_prompt("greeting", name="Alice")}]
)
```
### User Feedback System

Collect and analyze user feedback:

```python
from langtrace_python_sdk import langtrace

# Record user feedback for a trace
langtrace.record_feedback(
    trace_id="your_trace_id",
    rating=5,
    feedback_text="Great response!",
    metadata={"user_id": "123"}
)
```
### DSPy Checkpointing

Manage DSPy checkpoints for workflow tracking:

```python
from langtrace_python_sdk import langtrace

# Enable checkpoint tracing (disabled by default in production)
langtrace.init(
    api_key="your_api_key",
    dspy_checkpoint_tracing=True
)
```
### Vector Database Operations

Track vector database operations:

```python
from langtrace_python_sdk import langtrace

# `vector_db` stands in for your vector database client;
# vector operations are automatically traced
with langtrace.inject_additional_attributes({"operation_type": "similarity_search"}):
    results = vector_db.similarity_search("query", k=5)
```
For more detailed examples and use cases, visit our documentation.
## Examples

Runnable examples ship with the repository; see the Contributing section below for how to run them.
## Langtrace Self Hosted

Get started with self-hosted Langtrace:

```python
from langtrace_python_sdk import langtrace

langtrace.init(write_spans_to_console=True)  # For console logging
# OR
langtrace.init(custom_remote_exporter=<your_exporter>, batch=<True or False>)  # For a custom exporter
```
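Because the SDK is built on OpenTelemetry, a natural choice for `<your_exporter>` is a standard OTEL span exporter. The sketch below is an assumption rather than a documented contract: it wires an OTLP HTTP exporter (from the `opentelemetry-exporter-otlp-proto-http` package) pointed at a hypothetical local collector endpoint.

```python
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from langtrace_python_sdk import langtrace

# Hypothetical self-hosted collector endpoint; adjust to your deployment
exporter = OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")
langtrace.init(custom_remote_exporter=exporter, batch=True)
```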
## Contributing

We welcome contributions! To get started:

- Fork this repository and start developing.
- Join our Discord workspace.
- Run the examples:

  ```bash
  # In run_example.py, set the ENABLED_EXAMPLES flag to True for the desired example
  python src/run_example.py
  ```

- Run the tests:

  ```bash
  pip install '.[test]' && pip install '.[dev]'
  pytest -v
  ```
## Security

To report security vulnerabilities, email us at security@scale3labs.com. You can read more on security here.
## Frequently Asked Questions
## License

Langtrace Python SDK is licensed under the Apache 2.0 License. You can read about this license here.