
OpenTelemetry-native auto-instrumentation library for monitoring LLM applications and GPUs, making it easy to add observability to your GenAI-driven projects

Project description

OpenLIT Logo

OpenTelemetry-native

AI Observability, Evaluation and Guardrails Framework

Documentation | Quickstart | Roadmap | Feature Request | Report a Bug


OpenLIT Connections Banner

The OpenLIT SDK is a monitoring framework built on top of OpenTelemetry that gives you complete observability for your AI stack, from LLMs to vector databases and GPUs, with just one line of code for tracing and metrics. It can also send the generated traces and metrics to your existing monitoring tools such as Grafana, New Relic, and more.

This project proudly follows and helps maintain the Semantic Conventions with the OpenTelemetry community, consistently updating to align with the latest standards in observability.

⚡ Features

  • 🔎 Auto Instrumentation: Works with 30+ LLM providers, vector databases, and GPUs with just one line of code.
  • 🔭 OpenTelemetry-Native Observability SDKs: Vendor-neutral SDKs that can send traces and metrics to your existing observability tools like Prometheus and Jaeger.
  • 💲 Cost Tracking for Custom and Fine-Tuned Models: Pass custom pricing files for accurate budgeting of custom and fine-tuned models (see the sketch after this list).
  • 🚀 Support for OpenLIT Features: Includes support for the prompt management and secrets management features available in OpenLIT.
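
For example, cost tracking for a custom or fine-tuned model can be enabled by passing your own pricing file. A minimal sketch, where the file path is a placeholder:

import openlit

# Load token pricing for custom/fine-tuned models from your own file
# (a URL also works); the path below is a placeholder.
openlit.init(pricing_json="path/to/custom_pricing.json")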

Auto Instrumentation Capabilities

All of the following are supported (✅):

  • LLMs: OpenAI, Ollama, Anthropic, GPT4All, Cohere, Mistral, Azure OpenAI, Azure AI Inference, GitHub AI Models, HuggingFace Transformers, Amazon Bedrock, Vertex AI, Groq, ElevenLabs, vLLM, OLA Krutrim, Google AI Studio, NVIDIA NIM, Titan ML, Reka AI, xAI, Prem AI
  • Vector DBs: ChromaDB, Pinecone, Qdrant, Milvus
  • Frameworks: Langchain, LiteLLM, LlamaIndex, Haystack, EmbedChain, Guardrails, CrewAI, DSPy, AG2, Dynamiq, Phidata, mem0, MultiOn
  • GPUs: NVIDIA, AMD
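
To illustrate how auto instrumentation works, here is a minimal sketch: once openlit.init() has run, a standard OpenAI call is traced automatically. It assumes the openai package is installed, OPENAI_API_KEY is set, and the model name is only an example:

import openlit
from openai import OpenAI

openlit.init()  # instruments the supported libraries found in your environment

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)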

Supported Destinations

💿 Installation

pip install openlit

🚀 Getting Started with LLM Observability

Step 1: Install OpenLIT SDK

Open your command line or terminal and run:

pip install openlit

Step 2: Initialize OpenLIT in your Application

Integrate OpenLIT into your AI applications by adding the following lines to your code.

import openlit

openlit.init()

Configure the telemetry data destination as follows:

Purpose | Parameter / Environment Variable | For Sending to OpenLIT
Send data to an HTTP OTLP endpoint | otlp_endpoint or OTEL_EXPORTER_OTLP_ENDPOINT | "http://127.0.0.1:4318"
Authenticate telemetry backends | otlp_headers or OTEL_EXPORTER_OTLP_HEADERS | Not required by default

💡 Info: If otlp_endpoint or OTEL_EXPORTER_OTLP_ENDPOINT is not provided, the OpenLIT SDK outputs traces directly to your console, which is recommended during the development phase.

Example


Initialize using Function Arguments

Add the following two lines to your application code:

import openlit

openlit.init(
  otlp_endpoint="YOUR_OTEL_ENDPOINT", 
  otlp_headers ="YOUR_OTEL_ENDPOINT_AUTH"
)

Initialize using Environment Variables

Add the following two lines to your application code:

import openlit

openlit.init()

Then, configure your OTLP endpoint using environment variables:

export OTEL_EXPORTER_OTLP_ENDPOINT="YOUR_OTEL_ENDPOINT"
export OTEL_EXPORTER_OTLP_HEADERS="YOUR_OTEL_ENDPOINT_AUTH"

Step 3: Visualize and Optimize!

Now that your LLM observability data is being collected and sent to your configured OpenTelemetry destination, the next step is to visualize and analyze it. This will help you understand your LLM application's performance and behavior and identify where it can be improved.

If you want to use OpenLIT's Observability Dashboard to monitor LLM usage (cost, tokens, user interactions, and more), please check out our Quickstart Guide.

If you're sending metrics and traces to other observability tools, take a look at our Connections Guide to start using a pre-built dashboard we have created for these tools.

Configuration

Observability - openlit.init()

Below is a detailed overview of the configuration options available, allowing you to adjust OpenLIT's behavior and functionality to align with your specific observability needs:

Argument | Description | Default Value | Required
environment | The deployment environment of the application. | "default" | Yes
application_name | Identifies the name of your application. | "default" | Yes
tracer | An instance of OpenTelemetry Tracer for tracing operations. | None | No
meter | An OpenTelemetry Metrics instance for capturing metrics. | None | No
otlp_endpoint | Specifies the OTLP endpoint for transmitting telemetry data. | None | No
otlp_headers | Defines headers for the OTLP exporter, useful for backends requiring authentication. | None | No
disable_batch | A flag to disable batch span processing, favoring immediate dispatch. | False | No
trace_content | Enables tracing of content for deeper insights. | True | No
disabled_instrumentors | List of instrumentors to disable. | None | No
disable_metrics | If set, disables the collection of metrics. | False | No
pricing_json | URL or file path of the pricing JSON file. | https://github.com/openlit/openlit/blob/main/assets/pricing.json | No
collect_gpu_stats | Flag to enable or disable GPU metrics collection. | False | No
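
A sketch pulling several of these options together; the endpoint, environment, and application name are placeholders:

import openlit

openlit.init(
    environment="production",               # deployment environment label
    application_name="chat-service",        # placeholder application name
    otlp_endpoint="http://127.0.0.1:4318",  # OTLP endpoint, as in the table above
    disable_batch=True,                     # dispatch spans immediately
    collect_gpu_stats=True,                 # also collect GPU metrics
)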

OpenLIT Prompt Hub - openlit.get_prompt()

Below are the parameters for using the SDK with OpenLIT Prompt Hub for prompt management:

Parameter | Description
url | Sets the OpenLIT URL. Defaults to the OPENLIT_URL environment variable.
api_key | Sets the OpenLIT API Key. Can also be provided via the OPENLIT_API_KEY environment variable.
name | Sets the name to fetch a unique prompt. Use this or prompt_id.
prompt_id | Sets the ID to fetch a unique prompt. Use this or name. Optional
version | Sets the version to fetch a specific version of the prompt. Optional
shouldCompile | Boolean value that compiles the prompt using the provided variables. Optional
variables | Sets the variables for prompt compilation. Optional
meta_properties | Sets the meta-properties for storing in the prompt's access history metadata. Optional
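
A minimal sketch of fetching and compiling a prompt; the URL, API key, prompt name, and variables are placeholders:

import openlit

# url/api_key can be omitted if OPENLIT_URL / OPENLIT_API_KEY are set.
prompt = openlit.get_prompt(
    url="http://127.0.0.1:3000",     # placeholder OpenLIT URL
    api_key="YOUR_OPENLIT_API_KEY",  # placeholder API key
    name="greeting-prompt",          # placeholder prompt name
    shouldCompile=True,              # substitute the variables below
    variables={"name": "Alice"},     # placeholder variables
)
print(prompt)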

OpenLIT Vault - openlit.get_secrets()

Below are the parameters for using the SDK with OpenLIT Vault for secrets management:

Parameter | Description
url | Sets the OpenLIT URL. Defaults to the OPENLIT_URL environment variable.
api_key | Sets the OpenLIT API Key. Can also be provided via the OPENLIT_API_KEY environment variable.
key | Sets the key to fetch a specific secret. Optional
should_set_env | Boolean value that sets all the fetched secrets as environment variables for the application. Optional
tags | Sets the tags for fetching only the secrets that have the mentioned tags assigned. Optional
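
A minimal sketch of pulling secrets from OpenLIT Vault; the URL, API key, and tag are placeholders:

import openlit

# url/api_key can be omitted if OPENLIT_URL / OPENLIT_API_KEY are set.
secrets = openlit.get_secrets(
    url="http://127.0.0.1:3000",     # placeholder OpenLIT URL
    api_key="YOUR_OPENLIT_API_KEY",  # placeholder API key
    tags=["database"],               # placeholder tag filter
    should_set_env=True,             # also export each secret as an env var
)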

๐Ÿ›ฃ๏ธ Roadmap

We are dedicated to continuously improving OpenLIT SDKs. Here's a look at what's been accomplished and what's on the horizon:

Feature | Status
OpenTelemetry auto-instrumentation for LLM Providers like OpenAI, Anthropic | ✅ Completed
OpenTelemetry auto-instrumentation for Vector databases like Pinecone, Chroma | ✅ Completed
OpenTelemetry auto-instrumentation for LLM Frameworks like LangChain, LlamaIndex | ✅ Completed
OpenTelemetry-native auto-instrumentation for NVIDIA GPU Monitoring | ✅ Completed
Real-Time Guardrails Implementation | ✅ Completed
Programmatic Evaluation for LLM Response | ✅ Completed
OpenTelemetry auto-instrumentation for Agent Frameworks like CrewAI, DSPy | 🔜 Coming Soon

🌱 Contributing

Whether it's big or small, we love contributions 💚. Check out our Contribution guide to get started.

Unsure where to start? Here are a few ways to get involved:

  • Join our Slack or Discord community to discuss ideas, share feedback, and connect with both our team and the wider OpenLIT community.

Your input helps us grow and improve, and we're here to support you every step of the way.

💚 Community & Support

Connect with the OpenLIT community and maintainers for support, discussions, and updates:

  • 🌟 If you like it, leave a star on our GitHub
  • 🌍 Join our Slack or Discord community for live interactions and questions.
  • 🐞 Report bugs on our GitHub Issues to help us improve OpenLIT.
  • 𝕏 Follow us on X for the latest updates and news.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

openlit-1.32.0.tar.gz (111.2 kB)

Uploaded Source

Built Distribution

openlit-1.32.0-py3-none-any.whl (209.8 kB)

Uploaded Python 3

File details

Details for the file openlit-1.32.0.tar.gz.

File metadata

  • Download URL: openlit-1.32.0.tar.gz
  • Upload date:
  • Size: 111.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.7

File hashes

Hashes for openlit-1.32.0.tar.gz
Algorithm | Hash digest
SHA256 | 5f73fed25e58cb1eef3945f5904c9801436e30294e2f1e5ca162df084869fb4e
MD5 | 5e9758a19c2df319a17ba708019f3e6a
BLAKE2b-256 | ca8dd7f199acd0f8e2113ee4167074e9bd8186f485cfa8ed7449bd1c98976f21

See more details on using hashes here.
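
As a hedged example of verifying a downloaded file against the SHA256 digest above (the file is assumed to be in the current directory):

import hashlib

# Compute the SHA256 of the downloaded sdist and compare it
# to the digest published in the table above.
with open("openlit-1.32.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

expected = "5f73fed25e58cb1eef3945f5904c9801436e30294e2f1e5ca162df084869fb4e"
print("OK" if digest == expected else "Hash mismatch!")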

File details

Details for the file openlit-1.32.0-py3-none-any.whl.

File metadata

  • Download URL: openlit-1.32.0-py3-none-any.whl
  • Upload date:
  • Size: 209.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.7

File hashes

Hashes for openlit-1.32.0-py3-none-any.whl
Algorithm | Hash digest
SHA256 | a4bc2ff15a3d08fa9950cdb24bc808b10342a66a86693df17db94dca12826170
MD5 | 6314234b3ca23b3b719b05908f1cebaf
BLAKE2b-256 | 6317baffeaae3a21a538b66e7711fd59c4f9a184a304d77cd4d2c5919143bd94

See more details on using hashes here.
