
Library to easily interface with LLM API providers


🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Anthropic, Huggingface, Cohere, TogetherAI, Azure, OpenAI, etc.]

LiteLLM manages:

  • Translating inputs to the provider's completion and embedding endpoints
  • Guaranteeing [consistent output](https://docs.litellm.ai/docs/completion/output): text responses will always be available at `['choices'][0]['message']['content']`
  • Exception mapping: common exceptions across providers are mapped to the OpenAI exception types

Usage


By default, we provide a free $10 key to try all providers supported on LiteLLM. Try it now 👇

pip install litellm

from litellm import completion
import os

## We provide a free $10 key to try all providers supported on LiteLLM.
## set ENV variables 
os.environ["OPENAI_API_KEY"] = "sk-litellm-5b46387675a944d2" # [OPTIONAL] replace with your openai key
os.environ["COHERE_API_KEY"] = "sk-litellm-5b46387675a944d2" # [OPTIONAL] replace with your cohere key

messages = [{"role": "user", "content": "Hello, how are you?"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)

Streaming

LiteLLM supports streaming the model response back; pass stream=True to get a streaming iterator in the response. Streaming is supported for OpenAI, Azure, Anthropic, and Huggingface models.

response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])

# claude 2
result = completion(model="claude-2", messages=messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])

OpenAI Proxy Server

Spin up a local server to translate OpenAI API calls to any non-OpenAI model (e.g. Huggingface, TogetherAI, Ollama, etc.)

This works for async + streaming as well.

litellm --model <model_name>

Running your model locally or on a custom endpoint? Set the --api-base parameter (see how).

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally:

Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project, and install dependencies:

cd litellm
poetry install

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
pytest .

Step 4: Submit a PR with your changes! 🚀

  • Push your fork to your GitHub repo
  • Submit a PR from there

Learn more on how to make a PR

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing and translating calls between Azure, OpenAI, and Cohere.

Contributors
