Python SDK for LangTrace
Langtrace
Open Source & OpenTelemetry (OTEL) Observability for LLM applications
Langtrace is open-source observability software that lets you capture, debug, and analyze traces and metrics from all your applications that leverage LLM APIs, Vector Databases, and LLM-based Frameworks.
Open Telemetry Support
The traces generated by Langtrace adhere to the OpenTelemetry (OTEL) standard. We are developing semantic conventions for the traces generated by this project; you can check out the current definitions in this repository. Note: this is an ongoing effort, and we encourage you to get involved and welcome your feedback.
Langtrace Cloud ☁️
To use the managed SaaS version of Langtrace, follow the steps below:
- Sign up by going to this link.
- Create a new Project after signing up. Projects are containers for storing the traces and metrics generated by your application. If you have only one application, creating one project will do.
- Generate an API key from within the project.
- In your application, install the Langtrace SDK and initialize it with the API key you generated in step 3.
- The code for installing and setting up the SDK is shown below.
Getting Started
Get started by simply adding three lines to your code!
pip install langtrace-python-sdk
from langtrace_python_sdk import langtrace # Must precede any llm module imports
langtrace.init(api_key=<your_api_key>)
OR
from langtrace_python_sdk import langtrace # Must precede any llm module imports
langtrace.init() # LANGTRACE_API_KEY as an ENVIRONMENT variable
FastAPI Quick Start
Initialize a FastAPI project and add the following to your main.py file:
from fastapi import FastAPI
from langtrace_python_sdk import langtrace
from openai import OpenAI
langtrace.init()
app = FastAPI()
client = OpenAI()
@app.get("/")
def root():
client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "Say this is a test three times"}],
stream=False,
)
return {"Hello": "World"}
Django Quick Start
Initialize a Django project and add the following to your __init__.py file:
from langtrace_python_sdk import langtrace
from openai import OpenAI
langtrace.init()
client = OpenAI()
client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test three times"}],
    stream=False,
)
Flask Quick Start
Initialize a Flask project and add the following to your app.py file:
from flask import Flask
from langtrace_python_sdk import langtrace
from openai import OpenAI
langtrace.init()
client = OpenAI()
app = Flask(__name__)
@app.route("/")
def main():
    client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test three times"}],
        stream=False,
    )
    return "Hello, World!"
Langtrace Self Hosted
Get started by simply adding two lines to your code and see traces logged to the console!
pip install langtrace-python-sdk
from langtrace_python_sdk import langtrace # Must precede any llm module imports
langtrace.init(write_spans_to_console=True)
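If you run your own Langtrace instance, you can also point the SDK at it with the api_host parameter (see the configuration table below). A minimal sketch; the endpoint placeholder is illustrative:
from langtrace_python_sdk import langtrace  # Must precede any llm module imports

# "<your_self_hosted_endpoint>" is a placeholder for your own Langtrace deployment's trace endpoint
langtrace.init(api_key="<your_api_key>", api_host="<your_self_hosted_endpoint>")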
Langtrace Self-Hosted Custom Exporter
Get started by simply adding three lines to your code and see traces exported to your remote location!
pip install langtrace-python-sdk
from langtrace_python_sdk import langtrace # Must precede any llm module imports
langtrace.init(custom_remote_exporter=<your_exporter>, batch=<True or False>)
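Any OpenTelemetry-compatible span exporter should work here. The sketch below is one illustrative wiring, assuming an OTLP collector listening on localhost:4318 and the opentelemetry-exporter-otlp-proto-http package installed:
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from langtrace_python_sdk import langtrace  # Must precede any llm module imports

# Assumption: an OTLP-compatible collector is reachable at this endpoint
exporter = OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")
langtrace.init(custom_remote_exporter=exporter, batch=True)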
Configure Langtrace
Parameter | Type | Default Value | Description |
---|---|---|---|
api_key | str | LANGTRACE_API_KEY or None | The API key for authentication. |
batch | bool | True | Whether to batch spans before sending them. |
write_spans_to_console | bool | False | Whether to write spans to the console. |
custom_remote_exporter | Optional[Exporter] | None | Custom remote exporter. If None, a default LangTraceExporter will be used. |
api_host | Optional[str] | https://langtrace.ai/ | The API host for the remote exporter. |
disable_instrumentations | Optional[DisableInstrumentations] | None | You can pass an object to disable instrumentation for specific vendors, e.g. {'only': ['openai']} or {'all_except': ['openai']}. |
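For example, to instrument only OpenAI calls, a minimal sketch based on the parameters above:
from langtrace_python_sdk import langtrace

# Trace only the OpenAI integration; all other vendors are left uninstrumented
langtrace.init(disable_instrumentations={"only": ["openai"]})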
Error Reporting to Langtrace
By default, all SDK errors are reported to Langtrace via Sentry. This can be disabled by setting the following environment variable to False, like so: LANGTRACE_ERROR_REPORTING=False
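If you prefer setting this in code, a minimal sketch (assuming the variable is read when the SDK is imported, so it must be set first):
import os

# Assumption: LANGTRACE_ERROR_REPORTING is read at import time, so set it before importing the SDK
os.environ["LANGTRACE_ERROR_REPORTING"] = "False"

from langtrace_python_sdk import langtrace
langtrace.init()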
Additional Customization
@with_langtrace_root_span
- This decorator is designed to organize and relate different spans in a hierarchical manner. When you're performing multiple operations that you want to monitor together as a unit, it helps by establishing a "parent" span (named LangtraceRootSpan, or whatever is passed to name). Any calls to the LLM APIs made within the decorated function (fn) are then considered "children" of this parent span. This setup is especially useful for tracking the performance or behavior of a group of operations collectively, rather than individually.
from langtrace_python_sdk import with_langtrace_root_span
@with_langtrace_root_span()
def example():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test three times"}],
        stream=False,
    )
    return response
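The root span name mentioned above can also be customized. A sketch, assuming the decorator accepts the name as its first argument:
from langtrace_python_sdk import with_langtrace_root_span

# Assumption: the decorator takes a custom root span name as its first argument
@with_langtrace_root_span("my_llm_workflow")
def example_named():
    ...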
inject_additional_attributes
- This function enhances traces by adding custom attributes to the current context. These attributes provide extra details about the operations being performed, making it easier to analyze and understand their behavior.
from langtrace_python_sdk import inject_additional_attributes
def do_llm_stuff(name=""):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test three times"}],
        stream=False,
    )
    return response


def main():
    response = inject_additional_attributes(lambda: do_llm_stuff(name="llm"), {'user.id': 'userId'})

    # if the function does not take arguments, this syntax will also work
    response = inject_additional_attributes(do_llm_stuff, {'user.id': 'userId'})
with_additional_attributes
- Behaves the same as inject_additional_attributes, but as a decorator. This will be deprecated soon.
from langtrace_python_sdk import with_langtrace_root_span, with_additional_attributes
@with_additional_attributes({"user.id": "1234"})
def api_call1():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test three times"}],
        stream=False,
    )
    return response


@with_additional_attributes({"user.id": "5678"})
def api_call2():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test three times"}],
        stream=False,
    )
    return response


@with_langtrace_root_span()
def chat_completion():
    api_call1()
    api_call2()
get_prompt_from_registry
- This function fetches the desired prompt from the Prompt Registry. You can pass two options for filtering: prompt_version & variables.
from langtrace_python_sdk import get_prompt_from_registry
prompt = get_prompt_from_registry(<Registry ID>, options={"prompt_version": 1, "variables": {"foo": "bar"} })
Opt out of tracing prompt and completion data
By default, prompt and completion data are captured. If you would like to opt out of it, set the following environment variable:
TRACE_PROMPT_COMPLETION_DATA=false
Enable/Disable checkpoint tracing for DSPy
By default, checkpoints are traced for DSPy pipelines. If you would like to disable this, set the following environment variable:
TRACE_DSPY_CHECKPOINT=false
Note: Checkpoint tracing will increase the latency of executions as the state is serialized. Please disable it in production.
Supported integrations
Langtrace automatically captures traces from the following vendors:
Vendor | Type | Typescript SDK | Python SDK |
---|---|---|---|
OpenAI | LLM | :white_check_mark: | :white_check_mark: |
Anthropic | LLM | :white_check_mark: | :white_check_mark: |
Azure OpenAI | LLM | :white_check_mark: | :white_check_mark: |
Cohere | LLM | :white_check_mark: | :white_check_mark: |
Groq | LLM | :x: | :white_check_mark: |
Perplexity | LLM | :white_check_mark: | :white_check_mark: |
Gemini | LLM | :x: | :white_check_mark: |
Mistral | LLM | :x: | :white_check_mark: |
Langchain | Framework | :x: | :white_check_mark: |
Langgraph | Framework | :x: | :white_check_mark: |
LlamaIndex | Framework | :white_check_mark: | :white_check_mark: |
AWS Bedrock | Framework | :white_check_mark: | :white_check_mark: |
LiteLLM | Framework | :x: | :white_check_mark: |
DSPy | Framework | :x: | :white_check_mark: |
CrewAI | Framework | :x: | :white_check_mark: |
Ollama | Framework | :x: | :white_check_mark: |
VertexAI | Framework | :x: | :white_check_mark: |
Vercel AI SDK | Framework | :white_check_mark: | :x: |
EmbedChain | Framework | :x: | :white_check_mark: |
Autogen | Framework | :x: | :white_check_mark: |
Pinecone | Vector Database | :white_check_mark: | :white_check_mark: |
ChromaDB | Vector Database | :white_check_mark: | :white_check_mark: |
QDrant | Vector Database | :white_check_mark: | :white_check_mark: |
Weaviate | Vector Database | :white_check_mark: | :white_check_mark: |
PGVector | Vector Database | :white_check_mark: | :white_check_mark: (SQLAlchemy) |
Feature Requests and Issues
- To request features, head over here to start a discussion.
- To raise an issue, head over here and create an issue.
Contributions
We welcome contributions to this project. To get started, fork this repository and start developing. To get involved, join our Discord workspace.
- If you want to run any of the examples, go to the run_example.py file, where you will find ENABLED_EXAMPLES. Choose the example you want to run, toggle its flag to True, and run the file using python src/run_example.py.
- If you want to run tests, install the dev and test dependencies with pip install '.[test]' && pip install '.[dev]', then run pytest using pytest -v.
Security
To report security vulnerabilities, email us at security@scale3labs.com. You can read more on security here.
License