
OpenInference LiteLLM Instrumentation

LiteLLM allows developers to call all LLM APIs using the OpenAI format. LiteLLM Proxy is a proxy server for calling 100+ LLMs through that same OpenAI-style interface. Both are supported by this auto-instrumentation.
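For example, the same call shape works across providers; only the model string changes. A minimal sketch (the model names are illustrative, and each provider requires its own API key):

import litellm

messages = [{"role": "user", "content": "Hello!"}]

# The OpenAI-format interface is identical regardless of the backing provider.
litellm.completion(model="gpt-3.5-turbo", messages=messages)
litellm.completion(model="claude-3-haiku-20240307", messages=messages)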

This package implements OpenInference tracing for the following LiteLLM functions:

  • completion()
  • acompletion()
  • completion_with_retries()
  • embedding()
  • aembedding()
  • image_generation()
  • aimage_generation()

The resulting traces are fully OpenTelemetry-compatible and can be sent to an OpenTelemetry collector, such as Arize Phoenix, for viewing.

Installation

pip install openinference-instrumentation-litellm

Quickstart

In a notebook environment (Jupyter, Colab, etc.), install openinference-instrumentation-litellm (if you haven't already), along with arize-phoenix and litellm.

pip install openinference-instrumentation-litellm arize-phoenix litellm

First, import the dependencies required to auto-instrument LiteLLM and set up Phoenix as a collector for OpenInference traces.

import litellm
import phoenix as px

from openinference.instrumentation.litellm import LiteLLMInstrumentor

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

Next, we'll start a Phoenix server and set it as the collector.

session = px.launch_app()  # starts a local Phoenix server in the background
endpoint = "http://127.0.0.1:6006/v1/traces"  # Phoenix's OTLP trace endpoint
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
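Optionally, you can also register this provider as the global one, so that instrumentations pick it up without being passed a tracer_provider explicitly (not required here, since we pass the provider directly below):

from opentelemetry import trace

trace.set_tracer_provider(tracer_provider)  # optional: make this the global provider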

Set up any API keys needed in your API calls. For example:

import os
os.environ["OPENAI_API_KEY"] = "PASTE_YOUR_API_KEY_HERE"

Instrumenting LiteLLM is simple:

LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)

Now, all calls to LiteLLM functions are instrumented and can be viewed in the Phoenix UI.
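If you need the address of the UI, the session object returned by px.launch_app() exposes it (the url attribute here assumes the current Phoenix API):

print(session.url)  # address of the local Phoenix UI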

completion_response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"content": "What's the capital of China?", "role": "user"}],
)
print(completion_response)

# Top-level await works in notebook environments like Jupyter and Colab.
acompletion_response = await litellm.acompletion(
    model="gpt-3.5-turbo",
    messages=[
        {"content": "Hello, I want to bake a cake", "role": "user"},
        {"content": "Hello, I can pull up some recipes for cakes.", "role": "assistant"},
        {"content": "No actually I want to make a pie", "role": "user"},
    ],
    temperature=0.7,
    max_tokens=20,
)
print(acompletion_response)

embedding_response = litellm.embedding(model="text-embedding-ada-002", input=["good morning!"])
print(embedding_response)

image_gen_response = litellm.image_generation(model="dall-e-2", prompt="cute baby otter")
print(image_gen_response)
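The remaining instrumented functions are traced the same way. A quick sketch covering the async variants and the retry wrapper (again relying on the notebook's event loop for top-level await):

aembedding_response = await litellm.aembedding(model="text-embedding-ada-002", input=["good afternoon!"])
print(aembedding_response)

aimage_gen_response = await litellm.aimage_generation(model="dall-e-2", prompt="cute baby otter")
print(aimage_gen_response)

retry_response = litellm.completion_with_retries(
    model="gpt-3.5-turbo",
    messages=[{"content": "What's the capital of France?", "role": "user"}],
)
print(retry_response)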

You can also uninstrument the functions as follows:

LiteLLMInstrumentor().uninstrument(tracer_provider=tracer_provider)

Now any LiteLLM function calls you make will not send traces to Phoenix until you instrument again.
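Re-instrumenting uses the same call as before:

LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)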



Download files

File details

Details for the file openinference_instrumentation_litellm-0.1.5.tar.gz (source distribution).

File hashes

Hashes for openinference_instrumentation_litellm-0.1.5.tar.gz

Algorithm    Hash digest
SHA256       e83ced233aa736ab8becb4c0ebfe31b698523e09ed45d5ee0f99a00d53ea32fc
MD5          241c6b3a650fa4c0a21a4cade5164fdc
BLAKE2b-256  ec9a2d32a99a77e6a283b4169c7a1cfece81210f5cc2111737a6ea68a32aecac


File details

Details for the file openinference_instrumentation_litellm-0.1.5-py3-none-any.whl (built distribution).

File hashes

Hashes for openinference_instrumentation_litellm-0.1.5-py3-none-any.whl

Algorithm    Hash digest
SHA256       0fb44fbad5b3e6f989e4cf7140aa60111a7a66295b545b733ab4a0b00b41188d
MD5          a04d44f283a1646fe38f354e06af1266
BLAKE2b-256  e16c36d183493e48230870bb5dc5287ddbaa881df6b0bebb8ccd1a318a6438ab

