
Library to easily interface with LLM API providers

Project description

🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Anthropic, Huggingface, Cohere, TogetherAI, Azure, OpenAI, etc.]

Bug Report · Feature Request


Docs · 100+ Supported Models · Demo Video

LiteLLM manages

  • Translating inputs to the provider's completion and embedding endpoints
  • Guaranteeing consistent output: the text response is always available at ['choices'][0]['message']['content'] (see the sketch after this list)
  • Exception mapping: common exceptions across providers are mapped to the OpenAI exception types
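
For example, here is a minimal sketch that leans on those two guarantees: the shared response shape and OpenAI-style exceptions. It assumes the pre-1.0 openai package, where the mapped error types live in openai.error:

from litellm import completion
from openai.error import OpenAIError  # assumption: openai<1.0; mapped provider errors subclass OpenAIError

try:
    response = completion(model="command-nightly",
                          messages=[{"role": "user", "content": "Hello"}])
    # same access path regardless of which provider served the call
    print(response['choices'][0]['message']['content'])
except OpenAIError as err:
    # provider failures surface as familiar OpenAI exception types
    print(f"Provider error: {err}")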

🚨 Seeing errors? Chat on WhatsApp · Chat on Discord

05/10/2023: LiteLLM is adopting Semantic Versioning for all commits. Learn more

Usage

pip install litellm
from litellm import completion
import os

## set ENV variables 
os.environ["OPENAI_API_KEY"] = "your-openai-key" 
os.environ["COHERE_API_KEY"] = "your-cohere-key" 

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)

Streaming (Docs)

LiteLLM supports streaming the model response back: pass stream=True to get a streaming iterator in the response. Streaming is supported for OpenAI, Azure, Anthropic, and Huggingface models.

response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])

# claude 2
result = completion('claude-2', messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])
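
The async variants mirror the synchronous API. A minimal sketch using litellm's acompletion, which accepts the same arguments and returns the same response shape:

import asyncio
from litellm import acompletion

async def main():
    messages = [{"role": "user", "content": "Hello, how are you?"}]
    # same arguments and response shape as completion(), awaited
    response = await acompletion(model="gpt-3.5-turbo", messages=messages)
    print(response['choices'][0]['message']['content'])

asyncio.run(main())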

OpenAI Proxy Server (Docs)

Spin up a local server to translate OpenAI API calls to any non-OpenAI model (e.g. Huggingface, TogetherAI, Ollama, etc.)

This works for async + streaming as well.

litellm --model <model_name>

Running your model locally or on a custom endpoint? Set the --api-base parameter (see how).
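
Once the proxy is running, any OpenAI client can be pointed at it. A minimal sketch, assuming the proxy listens on its default local address (here http://0.0.0.0:8000) and the pre-1.0 openai package:

import openai

openai.api_base = "http://0.0.0.0:8000"  # assumption: the proxy's local address/port
openai.api_key = "anything"              # provider credentials are handled by the proxy

# an OpenAI-format call, translated by the proxy to whichever model it was launched with
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from the proxy"}],
)
print(response['choices'][0]['message']['content'])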

Supported Providers (Docs)

Provider      Completion   Streaming   Async Completion   Async Streaming
openai        ✅           ✅          ✅                 ✅
cohere        ✅           ✅          ✅                 ✅
anthropic     ✅           ✅          ✅                 ✅
replicate     ✅           ✅          ✅                 ✅
huggingface   ✅           ✅          ✅                 ✅
together_ai   ✅           ✅          ✅                 ✅
openrouter    ✅           ✅          ✅                 ✅
vertex_ai     ✅           ✅          ✅                 ✅
palm          ✅           ✅          ✅                 ✅
ai21          ✅           ✅          ✅                 ✅
baseten       ✅           ✅          ✅                 ✅
azure         ✅           ✅          ✅                 ✅
sagemaker     ✅           ✅          ✅                 ✅
bedrock       ✅           ✅          ✅                 ✅
vllm          ✅           ✅          ✅                 ✅
nlp_cloud     ✅           ✅          ✅                 ✅
aleph alpha   ✅           ✅          ✅                 ✅
petals        ✅           ✅          ✅                 ✅
ollama        ✅           ✅          ✅                 ✅
deepinfra     ✅           ✅          ✅                 ✅
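
Switching providers is only a change to the model string; several providers are addressed with a prefix. A short sketch with illustrative model names (check the docs for the exact identifiers):

from litellm import completion

messages = [{"role": "user", "content": "Hello, how are you?"}]

# anthropic
response = completion(model="claude-2", messages=messages)

# together_ai: provider prefix in the model string; the model name is illustrative
response = completion(model="together_ai/togethercomputer/llama-2-70b-chat", messages=messages)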

Read the Docs

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally:

Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project and install dependencies:

cd litellm
poetry install

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
pytest .
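
To iterate on one area, you can target a single test file; the file name below is illustrative, so check the tests directory for the real ones:

pytest test_completion.py -x  # run one file, stop on the first failure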

Step 4: Submit a PR with your changes! 🚀

  • push your fork to your GitHub repo
  • submit a PR from there

Learn more on how to make a PR

Support / talk with founders

Why did we build this

  • Need for simplicity: our code started to get extremely complicated managing and translating calls between Azure, OpenAI, and Cohere.

Contributors

Project details


Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

litellm-0.8.3.tar.gz (1.3 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

litellm-0.8.3-py3-none-any.whl (1.4 MB)

Uploaded Python 3

File details

Details for the file litellm-0.8.3.tar.gz.

File metadata

  • Download URL: litellm-0.8.3.tar.gz
  • Upload date:
  • Size: 1.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.18

File hashes

Hashes for litellm-0.8.3.tar.gz
Algorithm Hash digest
SHA256 24ebd9a2bb16e0b2563b16cef7a9623fadb66d4584928e30a001569d5602e43a
MD5 da5d27f64c24b089c22c1cc1b1d9dac9
BLAKE2b-256 fd2b6316ec8658970770f55ba8c2dd34ee7541182601e88145137568bf5e6ba5

See more details on using hashes here.
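
For example, pip's hash-checking mode can pin this exact artifact using the SHA256 digest above (note that in this mode every dependency must also carry a hash):

# requirements.txt
litellm==0.8.3 --hash=sha256:24ebd9a2bb16e0b2563b16cef7a9623fadb66d4584928e30a001569d5602e43a

pip install --require-hashes -r requirements.txt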

File details

Details for the file litellm-0.8.3-py3-none-any.whl.

File metadata

  • Download URL: litellm-0.8.3-py3-none-any.whl
  • Upload date:
  • Size: 1.4 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.18

File hashes

Hashes for litellm-0.8.3-py3-none-any.whl
Algorithm Hash digest
SHA256 fcc62ad3f82b3bf725334df2279d10d9ed87d1a28012073ae662f856652fb6b8
MD5 698d9b54b752174db810491dc7173ad2
BLAKE2b-256 9e3cd4b9da0bc0c4adb3736dcc43c13b99da8cdfc732ca7460e659b2bcc8fcca

See more details on using hashes here.
