Library to easily interface with LLM API providers

Project description

🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Anthropic, Huggingface, Cohere, TogetherAI, Azure, OpenAI, etc.]

LiteLLM manages

  • Translating inputs to each provider's completion and embedding endpoints
  • Guaranteeing consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Mapping exceptions: common errors across providers are mapped to the OpenAI exception types (a minimal sketch follows this list)
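
As an illustration of the exception mapping, here is a minimal sketch. It assumes the mapped exceptions subclass the pre-1.0 openai SDK's openai.error.OpenAIError, so one handler covers every provider; the invalid key is a placeholder.

import os
from openai.error import OpenAIError  # base class of the mapped exception types (pre-1.0 openai SDK)
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "bad-key"  # deliberately invalid

try:
    completion(model="claude-2", messages=[{"role": "user", "content": "Hello"}])
except OpenAIError as e:
    # the provider-specific auth error surfaces as an OpenAI exception type
    print(f"caught mapped exception: {e}")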

Usage

pip install litellm
from litellm import completion
import os

## set ENV variables 
os.environ["OPENAI_API_KEY"] = "your-openai-key" 
os.environ["COHERE_API_KEY"] = "your-cohere-key" 

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)
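# the reply text is always at the same path, regardless of provider
print(response['choices'][0]['message']['content'])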

Streaming (Docs)

LiteLLM supports streaming the model response back: pass stream=True to get an iterator of chunks in the response. Streaming is supported for OpenAI, Azure, Anthropic, and Huggingface models.

response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])

# claude 2
result = completion(model="claude-2", messages=messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])
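
A short sketch of assembling the streamed deltas back into the full reply, assuming each chunk's delta behaves like a dict (chunks may omit 'content', so fall back to an empty string):

full_reply = ""
for chunk in completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    delta = chunk['choices'][0]['delta']      # same index as above
    full_reply += delta.get('content') or ""  # chunks may omit 'content'
print(full_reply)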

Caching (Docs)

LiteLLM supports caching completion() and embedding() calls for all LLMs. A hosted cache is also available via the LiteLLM API.

import litellm
from litellm.caching import Cache
import os

litellm.cache = Cache()
os.environ["OPENAI_API_KEY"] = "your-openai-key"
# add to cache
response1 = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "why is LiteLLM amazing?"}], 
    caching=True
)
# returns cached response
response2 = litellm.completion(
    model="gpt-3.5-turbo", 
    messages=[{"role": "user", "content": "why is LiteLLM amazing?"}], 
    caching=True
)

print(f"response1: {response1}")
print(f"response2: {response2}")

OpenAI Proxy Server (Docs)

Spin up a local server that translates OpenAI API calls to any non-OpenAI model (e.g. Huggingface, TogetherAI, Ollama, etc.).

This works for async + streaming as well.

litellm --model <model_name>

Running your model locally or on a custom endpoint? Set the --api-base parameter (see the docs for how).
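
Once the proxy is running, a minimal sketch of calling it with the pre-1.0 openai SDK; the http://0.0.0.0:8000 address is an assumption about the default port, so check the proxy's startup output:

import openai  # pre-1.0 openai SDK

openai.api_base = "http://0.0.0.0:8000"  # assumed default proxy address
openai.api_key = "anything"              # provider keys are handled by the proxy

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # routed to whatever --model the proxy was started with
    messages=[{"role": "user", "content": "Hello from the proxy"}],
)
print(response['choices'][0]['message']['content'])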

Supported Providers (Docs)

Provider | Completion | Streaming | Async Completion | Async Streaming
openai | ✅ | ✅ | ✅ | ✅
cohere | ✅ | ✅ | ✅ | ✅
anthropic | ✅ | ✅ | ✅ | ✅
replicate | ✅ | ✅ | ✅ | ✅
huggingface | ✅ | ✅ | ✅ | ✅
together_ai | ✅ | ✅ | ✅ | ✅
openrouter | ✅ | ✅ | ✅ | ✅
vertex_ai | ✅ | ✅ | ✅ | ✅
palm | ✅ | ✅ | ✅ | ✅
ai21 | ✅ | ✅ | ✅ | ✅
baseten | ✅ | ✅ | ✅ | ✅
azure | ✅ | ✅ | ✅ | ✅
sagemaker | ✅ | ✅ | ✅ | ✅
bedrock | ✅ | ✅ | ✅ | ✅
vllm | ✅ | ✅ | ✅ | ✅
nlp_cloud | ✅ | ✅ | ✅ | ✅
aleph alpha | ✅ | ✅ | ✅ | ✅
petals | ✅ | ✅ | ✅ | ✅
ollama | ✅ | ✅ | ✅ | ✅
deepinfra | ✅ | ✅ | ✅ | ✅
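
The Async Completion column corresponds to litellm's acompletion coroutine; a minimal sketch, with the model name and messages mirroring the Usage example above:

import asyncio
from litellm import acompletion

async def main():
    response = await acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello, how are you?"}],
        # stream=True would return an async iterator (the Async Streaming column)
    )
    print(response['choices'][0]['message']['content'])

asyncio.run(main())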

Read the Docs

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally:

Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project and install dependencies:

cd litellm
poetry install

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
pytest .

Step 4: Submit a PR with your changes! ๐Ÿš€

  • Push your fork to your GitHub repo
  • Submit a PR from there

Learn more on how to make a PR

Support / talk with founders

Why did we build this

  • Need for simplicity: our code was getting extremely complicated managing and translating calls between Azure, OpenAI, and Cohere.

Contributors
