
Library to easily interface with LLM API providers

Project description

🚅 LiteLLM

LiteLLM AI Gateway

Open Source AI Gateway for 100+ LLMs. Self-hosted. Enterprise-ready. Call any LLM in OpenAI format.

Deploy to Render | Deploy on Railway

LiteLLM Proxy Server (AI Gateway) | Hosted Proxy | Enterprise Tier | Website

PyPI Version GitHub Stars Y Combinator W23 Whatsapp Discord Slack CodSpeed


Use LiteLLM for

LLMs - Call 100+ LLMs (Python SDK + AI Gateway)

All Supported Endpoints - /chat/completions, /responses, /embeddings, /images, /audio, /batches, /rerank, /a2a, /messages and more.

Python SDK

pip install litellm
from litellm import completion
import os

os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"

# OpenAI
response = completion(model="openai/gpt-4o", messages=[{"role": "user", "content": "Hello!"}])

# Anthropic  
response = completion(model="anthropic/claude-sonnet-4-20250514", messages=[{"role": "user", "content": "Hello!"}])
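The same call also streams. A minimal sketch, assuming OpenAI-style chunks (`choices[0].delta.content`); the live call is gated behind an opt-in `LITELLM_LIVE_DEMO` environment variable (our name, not part of LiteLLM), so running the file without it makes no network request:

```python
import os
from types import SimpleNamespace

def join_deltas(chunks):
    """Rebuild the full reply from OpenAI-style streaming chunks,
    skipping chunks whose delta carries no content."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
    return "".join(parts)

if os.environ.get("LITELLM_LIVE_DEMO"):
    from litellm import completion
    stream = completion(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
        stream=True,  # yields incremental chunks instead of one response
    )
    print(join_deltas(stream))
```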

AI Gateway (Proxy Server)

Getting Started - E2E Tutorial: set up virtual keys and make your first request

pip install 'litellm[proxy]'
litellm --model gpt-4o
import openai

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
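
Beyond a single --model flag, the proxy is usually driven by a config file, e.g. litellm --config config.yaml. A minimal sketch (model names and key references here are illustrative):

```yaml
model_list:
  - model_name: gpt-4o                      # name clients request
    litellm_params:
      model: openai/gpt-4o                  # provider/model LiteLLM calls
      api_key: os.environ/OPENAI_API_KEY    # read the key from the environment
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514
      api_key: os.environ/ANTHROPIC_API_KEY
```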

Docs: LLM Providers

Agents - Invoke A2A Agents (Python SDK + AI Gateway)

Supported Providers - LangGraph, Vertex AI Agent Engine, Azure AI Foundry, Bedrock AgentCore, Pydantic AI

Python SDK - A2A Protocol

from litellm.a2a_protocol import A2AClient
from a2a.types import SendMessageRequest, MessageSendParams
from uuid import uuid4

client = A2AClient(base_url="http://localhost:10001")

request = SendMessageRequest(
    id=str(uuid4()),
    params=MessageSendParams(
        message={
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello!"}],
            "messageId": uuid4().hex,
        }
    )
)
response = await client.send_message(request)

AI Gateway (Proxy Server)

Step 1. Add your Agent to the AI Gateway

Step 2. Call Agent via A2A SDK

from a2a.client import A2ACardResolver, A2AClient
from a2a.types import MessageSendParams, SendMessageRequest
from uuid import uuid4
import httpx

base_url = "http://localhost:4000/a2a/my-agent"  # LiteLLM proxy + agent name
headers = {"Authorization": "Bearer sk-1234"}    # LiteLLM Virtual Key

async with httpx.AsyncClient(headers=headers) as httpx_client:
    resolver = A2ACardResolver(httpx_client=httpx_client, base_url=base_url)
    agent_card = await resolver.get_agent_card()
    client = A2AClient(httpx_client=httpx_client, agent_card=agent_card)

    request = SendMessageRequest(
        id=str(uuid4()),
        params=MessageSendParams(
            message={
                "role": "user",
                "parts": [{"kind": "text", "text": "Hello!"}],
                "messageId": uuid4().hex,
            }
        )
    )
    response = await client.send_message(request)

Docs: A2A Agent Gateway

MCP Tools - Connect MCP servers to any LLM (Python SDK + AI Gateway)

Python SDK - MCP Bridge

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from litellm import experimental_mcp_client
import litellm

server_params = StdioServerParameters(command="python", args=["mcp_server.py"])

async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        await session.initialize()

        # Load MCP tools in OpenAI format
        tools = await experimental_mcp_client.load_mcp_tools(session=session, format="openai")

        # Use with any LiteLLM model
        response = await litellm.acompletion(
            model="gpt-4o",
            messages=[{"role": "user", "content": "What's 3 + 5?"}],
            tools=tools
        )

AI Gateway - MCP Gateway

Step 1. Add your MCP Server to the AI Gateway

Step 2. Call MCP tools via /chat/completions

curl -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Summarize the latest open PR"}],
    "tools": [{
      "type": "mcp",
      "server_url": "litellm_proxy/mcp/github",
      "server_label": "github_mcp",
      "require_approval": "never"
    }]
  }'
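The same request from Python, using the OpenAI client pointed at the proxy. A sketch: the MCP tool entry is passed through verbatim, and the live call is gated behind an opt-in `LITELLM_LIVE_DEMO` environment variable (our name, not part of LiteLLM) so the snippet is safe to run as-is:

```python
import os

# Same MCP tool entry as the curl example above.
github_mcp_tool = {
    "type": "mcp",
    "server_url": "litellm_proxy/mcp/github",
    "server_label": "github_mcp",
    "require_approval": "never",
}

if os.environ.get("LITELLM_LIVE_DEMO"):
    import openai
    client = openai.OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Summarize the latest open PR"}],
        tools=[github_mcp_tool],
    )
    print(response.choices[0].message.content)
```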

Use with Cursor IDE

{
  "mcpServers": {
    "LiteLLM": {
      "url": "http://localhost:4000/mcp/",
      "headers": {
        "x-litellm-api-key": "Bearer sk-1234"
      }
    }
  }
}

Docs: MCP Gateway


How to use LiteLLM

You can use LiteLLM through either the Proxy Server or the Python SDK. Both give you a unified interface to 100+ LLMs. Choose the option that best fits your needs:

LiteLLM AI Gateway

  • Use Case: Central service (LLM Gateway) to access multiple LLMs
  • Who Uses It? Gen AI Enablement / ML Platform Teams
  • Key Features: Centralized API gateway with authentication and authorization; multi-tenant cost tracking and spend management per project/user; per-project customization (logging, guardrails, caching); virtual keys for secure access control; admin dashboard UI for monitoring and management

LiteLLM Python SDK

  • Use Case: Use LiteLLM directly in your Python code
  • Who Uses It? Developers building LLM projects
  • Key Features: Direct Python library integration in your codebase; Router with retry/fallback logic across multiple deployments (e.g. Azure/OpenAI); application-level load balancing and cost tracking; exception handling with OpenAI-compatible errors; observability callbacks (Lunary, MLflow, Langfuse, etc.)
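The SDK's Router can be sketched as follows: two deployments registered under one public model name, load-balanced with retries. Keys and endpoints are placeholders, and the live call is gated behind an opt-in `LITELLM_LIVE_DEMO` environment variable (our name, not part of LiteLLM):

```python
import os

# Two deployments share the public name "gpt-4o"; the Router
# picks one per request and retries/falls back on failure.
model_list = [
    {
        "model_name": "gpt-4o",
        "litellm_params": {
            "model": "azure/gpt-4o",
            "api_key": os.environ.get("AZURE_API_KEY", ""),
            "api_base": os.environ.get("AZURE_API_BASE", ""),
        },
    },
    {
        "model_name": "gpt-4o",
        "litellm_params": {
            "model": "openai/gpt-4o",
            "api_key": os.environ.get("OPENAI_API_KEY", ""),
        },
    },
]

if os.environ.get("LITELLM_LIVE_DEMO"):
    from litellm import Router
    router = Router(model_list=model_list, num_retries=2)
    response = router.completion(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
```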

LiteLLM Performance: 8ms P95 latency at 1k RPS (See benchmarks here)

Jump to LiteLLM Proxy (LLM Gateway) Docs
Jump to Supported LLM Providers

Stable Release: Use Docker images with the -stable tag. These have undergone 12-hour load tests before being published. More information about the release cycle here

Missing a provider or LLM platform? Raise a feature request.

OSS Adopters

Stripe image Google ADK Greptile OpenHands

Netflix

OpenAI Agents SDK

Supported Providers (Website Supported Models | Docs)

Provider /chat/completions /messages /responses /embeddings /image/generations /audio/transcriptions /audio/speech /moderations /batches /rerank
Abliteration (abliteration) ✅
AI/ML API (aiml) ✅ ✅ ✅ ✅ ✅
AI21 (ai21) ✅ ✅ ✅
AI21 Chat (ai21_chat) ✅ ✅ ✅
Aleph Alpha ✅ ✅ ✅
Amazon Nova ✅ ✅ ✅
Anthropic (anthropic) ✅ ✅ ✅ ✅
Anthropic Text (anthropic_text) ✅ ✅ ✅ ✅
Anyscale ✅ ✅ ✅
AssemblyAI (assemblyai) ✅ ✅ ✅ ✅
Auto Router (auto_router) ✅ ✅ ✅
AWS - Bedrock (bedrock) ✅ ✅ ✅ ✅ ✅
AWS - Sagemaker (sagemaker) ✅ ✅ ✅ ✅
Azure (azure) ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅
Azure AI (azure_ai) ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅
Azure Text (azure_text) ✅ ✅ ✅ ✅ ✅ ✅ ✅
Baseten (baseten) ✅ ✅ ✅
Bytez (bytez) ✅ ✅ ✅
Cerebras (cerebras) ✅ ✅ ✅
Clarifai (clarifai) ✅ ✅ ✅
Cloudflare AI Workers (cloudflare) ✅ ✅ ✅
Codestral (codestral) ✅ ✅ ✅
Cohere (cohere) ✅ ✅ ✅ ✅ ✅
Cohere Chat (cohere_chat) ✅ ✅ ✅
CometAPI (cometapi) ✅ ✅ ✅ ✅
CompactifAI (compactifai) ✅ ✅ ✅
Custom (custom) ✅ ✅ ✅
Custom OpenAI (custom_openai) ✅ ✅ ✅ ✅ ✅ ✅ ✅
Dashscope (dashscope) ✅ ✅ ✅
Databricks (databricks) ✅ ✅ ✅
DataRobot (datarobot) ✅ ✅ ✅
Deepgram (deepgram) ✅ ✅ ✅ ✅
DeepInfra (deepinfra) ✅ ✅ ✅
Deepseek (deepseek) ✅ ✅ ✅
ElevenLabs (elevenlabs) ✅ ✅ ✅ ✅ ✅
Empower (empower) ✅ ✅ ✅
Fal AI (fal_ai) ✅ ✅ ✅ ✅
Featherless AI (featherless_ai) ✅ ✅ ✅
Fireworks AI (fireworks_ai) ✅ ✅ ✅
FriendliAI (friendliai) ✅ ✅ ✅
Galadriel (galadriel) ✅ ✅ ✅
GitHub Copilot (github_copilot) ✅ ✅ ✅ ✅
GitHub Models (github) ✅ ✅ ✅
Google - PaLM ✅ ✅ ✅
Google - Vertex AI (vertex_ai) ✅ ✅ ✅ ✅ ✅
Google AI Studio - Gemini (gemini) ✅ ✅ ✅
GradientAI (gradient_ai) ✅ ✅ ✅
Groq AI (groq) ✅ ✅ ✅
Heroku (heroku) ✅ ✅ ✅
Hosted VLLM (hosted_vllm) ✅ ✅ ✅
Huggingface (huggingface) ✅ ✅ ✅ ✅ ✅
Hyperbolic (hyperbolic) ✅ ✅ ✅
IBM - Watsonx.ai (watsonx) ✅ ✅ ✅ ✅
Infinity (infinity) ✅
Jina AI (jina_ai) ✅
Lambda AI (lambda_ai) ✅ ✅ ✅
Lemonade (lemonade) ✅ ✅ ✅
LiteLLM Proxy (litellm_proxy) ✅ ✅ ✅ ✅ ✅
Llamafile (llamafile) ✅ ✅ ✅
LM Studio (lm_studio) ✅ ✅ ✅
Maritalk (maritalk) ✅ ✅ ✅
Meta - Llama API (meta_llama) ✅ ✅ ✅
Mistral AI API (mistral) ✅ ✅ ✅ ✅
Moonshot (moonshot) ✅ ✅ ✅
Morph (morph) ✅ ✅ ✅
Nebius AI Studio (nebius) ✅ ✅ ✅ ✅
NLP Cloud (nlp_cloud) ✅ ✅ ✅
Novita AI (novita) ✅ ✅ ✅
Nscale (nscale) ✅ ✅ ✅
Nvidia NIM (nvidia_nim) ✅ ✅ ✅
OCI (oci) ✅ ✅ ✅
Ollama (ollama) ✅ ✅ ✅ ✅
Ollama Chat (ollama_chat) ✅ ✅ ✅
Oobabooga (oobabooga) ✅ ✅ ✅ ✅ ✅ ✅ ✅
OpenAI (openai) ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅
OpenAI-like (openai_like) ✅
OpenRouter (openrouter) ✅ ✅ ✅
OVHCloud AI Endpoints (ovhcloud) ✅ ✅ ✅
Perplexity AI (perplexity) ✅ ✅ ✅
Petals (petals) ✅ ✅ ✅
Predibase (predibase) ✅ ✅ ✅
Recraft (recraft) ✅
Replicate (replicate) ✅ ✅ ✅
Sagemaker Chat (sagemaker_chat) ✅ ✅ ✅
Sambanova (sambanova) ✅ ✅ ✅
Snowflake (snowflake) ✅ ✅ ✅
Text Completion Codestral (text-completion-codestral) ✅ ✅ ✅
Text Completion OpenAI (text-completion-openai) ✅ ✅ ✅ ✅ ✅ ✅ ✅
Together AI (together_ai) ✅ ✅ ✅
Topaz (topaz) ✅ ✅ ✅
Triton (triton) ✅ ✅ ✅
V0 (v0) ✅ ✅ ✅
Vercel AI Gateway (vercel_ai_gateway) ✅ ✅ ✅
VLLM (vllm) ✅ ✅ ✅
Volcengine (volcengine) ✅ ✅ ✅
Voyage AI (voyage) ✅
WandB Inference (wandb) ✅ ✅ ✅
Watsonx Text (watsonx_text) ✅ ✅ ✅
xAI (xai) ✅ ✅ ✅
Xinference (xinference) ✅

Read the Docs

Run in Developer mode

Services

  1. Set up a .env file in the repo root
  2. Run the dependent services: docker-compose up db prometheus

Backend

  1. (In the repo root) create a virtual environment: python -m venv .venv
  2. Activate the virtual environment: source .venv/bin/activate
  3. Install dependencies: pip install -e ".[all]"
  4. Install the Prisma CLI: pip install prisma
  5. Generate the Prisma client: prisma generate
  6. Start the proxy backend: python litellm/proxy/proxy_cli.py

Frontend

  1. Navigate to ui/litellm-dashboard
  2. Install dependencies: npm install
  3. Start the dashboard: npm run dev

Verify Docker Image Signatures

All LiteLLM Docker images published to GHCR are signed with cosign. Every release is signed with the same key introduced in commit 0112e53.

Verify using the pinned commit hash (recommended):

A commit hash is cryptographically immutable, so this is the strongest way to ensure you are using the original signing key:

cosign verify \
  --key https://raw.githubusercontent.com/BerriAI/litellm/0112e53046018d726492c814b3644b7d376029d0/cosign.pub \
  ghcr.io/berriai/litellm:<release-tag>

Verify using a release tag (convenience):

Tags are protected in this repository and resolve to the same key. This option is easier to read but relies on tag protection rules:

cosign verify \
  --key https://raw.githubusercontent.com/BerriAI/litellm/<release-tag>/cosign.pub \
  ghcr.io/berriai/litellm:<release-tag>

Replace <release-tag> with the version you are deploying (e.g. v1.83.0-stable).

Enterprise

For companies that need better security, user management and professional support

Get an Enterprise License | Talk to founders

This covers:

  • ✅ Features under the LiteLLM Commercial License
  • ✅ Feature Prioritization
  • ✅ Custom Integrations
  • ✅ Professional Support - Dedicated Discord + Slack
  • ✅ Custom SLAs
  • ✅ Secure access with Single Sign-On

Contributing

We welcome contributions to LiteLLM! Whether you're fixing bugs, adding features, or improving documentation, we appreciate your help.

Quick Start for Contributors

This requires poetry to be installed.

git clone https://github.com/BerriAI/litellm.git
cd litellm
make install-dev    # Install development dependencies
make format         # Format your code
make lint           # Run all linting checks
make test-unit      # Run unit tests
make format-check   # Check formatting only

For detailed contributing guidelines, see CONTRIBUTING.md.

Code Quality / Linting

LiteLLM follows the Google Python Style Guide.

Our automated checks include:

  • Black for code formatting
  • Ruff for linting and code quality
  • MyPy for type checking
  • Circular import detection
  • Import safety checks

All these checks must pass before your PR can be merged.

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.

Contributors

Project details



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

litellm-1.83.6.tar.gz (17.8 MB)


Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

litellm-1.83.6-py3-none-any.whl (16.0 MB)


File details

Details for the file litellm-1.83.6.tar.gz.

File metadata

  • Download URL: litellm-1.83.6.tar.gz
  • Upload date:
  • Size: 17.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for litellm-1.83.6.tar.gz:

  • SHA256: 5e50efd1f38dceaf8ed0c1713f991fbbf534453bf14b10e580610e4c2197365e
  • MD5: 4e84c33f3274ca67b3b1ac5d11ed5b7d
  • BLAKE2b-256: 4a0dcac4923561c1847e6ae78899ec53840b0e018bee27ef70099a00517d2ab7

See more details on using hashes here.
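A downloaded file can be checked against the published SHA256 digest locally. A minimal sketch using Python's standard hashlib (the expected digest is the sdist value listed above):

```python
import hashlib

def sha256_hex(path):
    """Hash a file in chunks so large downloads aren't read into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 16), b""):
            digest.update(block)
    return digest.hexdigest()

expected = "5e50efd1f38dceaf8ed0c1713f991fbbf534453bf14b10e580610e4c2197365e"
# Compare sha256_hex("litellm-1.83.6.tar.gz") against `expected`
# before installing; a mismatch means the download is not the published file.
```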

Provenance

The following attestation bundles were made for litellm-1.83.6.tar.gz:

Publisher: publish-litellm.yml on BerriAI/project-releaser

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file litellm-1.83.6-py3-none-any.whl.

File metadata

  • Download URL: litellm-1.83.6-py3-none-any.whl
  • Upload date:
  • Size: 16.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for litellm-1.83.6-py3-none-any.whl:

  • SHA256: a3d3425136231b7c52d5c881f2f5a1f95df710000674c26f1a82f4ec9bfdebf2
  • MD5: 6a41261769291c733548bae880ea7f4e
  • BLAKE2b-256: 9c64e97def4d7a0434b0d921a6ef6120f417a40e8ca46a81526a85406c2ef297

See more details on using hashes here.

Provenance

The following attestation bundles were made for litellm-1.83.6-py3-none-any.whl:

Publisher: publish-litellm.yml on BerriAI/project-releaser

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
