OpenTelemetry instrumentation for LiteLLM
LiteLLM OpenTelemetry Integration
Overview
This integration provides support for using OpenTelemetry with the LiteLLM framework. It enables tracing and monitoring of applications built with LiteLLM.
Installation
- Install traceAI LiteLLM
pip install traceAI-litellm
Set Environment Variables
Set up your environment variables to authenticate with FutureAGI.
import os

os.environ["FI_API_KEY"] = "your-api-key"        # replace with your FutureAGI API key
os.environ["FI_SECRET_KEY"] = "your-secret-key"  # replace with your FutureAGI secret key
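A missing credential typically surfaces later as an opaque authentication error, so a small guard that fails fast when a variable is unset can save debugging time. `require_env` below is a hypothetical helper for illustration, not part of traceAI:

```python
import os


def require_env(name: str) -> str:
    """Return the named environment variable, raising immediately if it is unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value


# Replace the placeholders with your FutureAGI credentials.
os.environ["FI_API_KEY"] = "your-api-key"
os.environ["FI_SECRET_KEY"] = "your-secret-key"

api_key = require_env("FI_API_KEY")
secret_key = require_env("FI_SECRET_KEY")
```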
Quickstart
Register Tracer Provider
Set up the trace provider to establish the observability pipeline. The trace provider routes the spans emitted by your application to the FutureAGI project configured below.
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
trace_provider = register(
project_type=ProjectType.OBSERVE,
project_name="litellm_app"
)
Configure LiteLLM Instrumentation
Instrument the LiteLLM client to enable telemetry collection. This step ensures that all interactions with the LiteLLM SDK are tracked and monitored.
from traceai_litellm import LiteLLMInstrumentor
LiteLLMInstrumentor().instrument(tracer_provider=trace_provider)
Create LiteLLM Components
Set up your LiteLLM client with built-in observability.
import asyncio

import litellm


async def run_examples():
    # Simple single-message completion call
    litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"content": "What's the capital of China?", "role": "user"}],
    )

    # Multi-message conversation completion call with an added parameter
    litellm.completion(
        model="gpt-3.5-turbo",
        messages=[
            {"content": "Hello, I want to bake a cake", "role": "user"},
            {
                "content": "Hello, I can pull up some recipes for cakes.",
                "role": "assistant",
            },
            {"content": "No actually I want to make a pie", "role": "user"},
        ],
        temperature=0.7,
    )

    # Multi-message conversation acompletion call with added parameters
    await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[
            {"content": "Hello, I want to bake a cake", "role": "user"},
            {
                "content": "Hello, I can pull up some recipes for cakes.",
                "role": "assistant",
            },
            {"content": "No actually I want to make a pie", "role": "user"},
        ],
        temperature=0.7,
        max_tokens=20,
    )

    # Completion with retries
    litellm.completion_with_retries(
        model="gpt-3.5-turbo",
        messages=[{"content": "What's the highest grossing film ever", "role": "user"}],
    )

    # Embedding call
    litellm.embedding(
        model="text-embedding-ada-002", input=["good morning from litellm"]
    )

    # Asynchronous embedding call
    await litellm.aembedding(
        model="text-embedding-ada-002", input=["good morning from litellm"]
    )

    # Image generation call
    litellm.image_generation(model="dall-e-2", prompt="cute baby otter")

    # Asynchronous image generation call
    await litellm.aimage_generation(model="dall-e-2", prompt="cute baby otter")


asyncio.run(run_examples())
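The async calls in `run_examples` are awaited one after another; when the requests are independent, standard `asyncio.gather` fan-out lets them run concurrently. A minimal sketch, with a placeholder coroutine standing in for the real `litellm.acompletion` / `litellm.aembedding` calls above:

```python
import asyncio


# Placeholder coroutine standing in for litellm.acompletion / litellm.aembedding;
# substitute the real calls from the example above.
async def fake_acompletion(prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"response to: {prompt}"


async def run_concurrently() -> list:
    # gather schedules both coroutines concurrently and returns their
    # results in the order the coroutines were passed in.
    return await asyncio.gather(
        fake_acompletion("bake a cake"),
        fake_acompletion("make a pie"),
    )


results = asyncio.run(run_concurrently())
```

Each awaited call is still traced individually by the instrumentor, so concurrency does not change what appears in your spans.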