LlamaIndex LLMs Integration: LiteLLM

Installation

  1. Install the required Python packages:

    %pip install llama-index-llms-litellm
    %pip install llama-index
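
Outside a notebook, drop the %/! prefixes and run the same installs from a shell:

    pip install llama-index-llms-litellm llama-index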
    

Usage

Import Required Libraries

import os
from llama_index.llms.litellm import LiteLLM
from llama_index.core.llms import ChatMessage

Set Up Environment Variables

Set your API keys as environment variables:

os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["COHERE_API_KEY"] = "your-api-key"

Example: OpenAI Call

To chat with an OpenAI model:

message = ChatMessage(role="user", content="Hey! how's it going?")
llm = LiteLLM("gpt-3.5-turbo")
chat_response = llm.chat([message])
print(chat_response)
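
Besides chat, the LLM object also exposes a completion-style endpoint. A minimal sketch, assuming complete mirrors the stream_complete method shown further below:

# complete takes a plain prompt string and returns a single response
llm = LiteLLM("gpt-3.5-turbo")
completion = llm.complete("Paul Graham is ")
print(completion)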

Example: Cohere Call

To chat with a Cohere model, reusing the message from above:

llm = LiteLLM("command-nightly")
chat_response = llm.chat([message])
print(chat_response)
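
LiteLLM routes the request based on the model name alone, so other providers follow the same pattern. For example, with an Anthropic model (this assumes ANTHROPIC_API_KEY is set and that the model name is one LiteLLM recognizes):

# Assumption: ANTHROPIC_API_KEY is exported and this model name is valid
llm = LiteLLM("claude-3-haiku-20240307")
chat_response = llm.chat([message])
print(chat_response)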

Example: Chat with System Message

To include a system message that sets the assistant's persona:

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]
resp = LiteLLM("gpt-3.5-turbo").chat(messages)
print(resp)
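
If you want just the reply text rather than the printed representation, the chat response wraps a ChatMessage (assuming the standard LlamaIndex ChatResponse shape):

# .message is the assistant's ChatMessage; .content is its text
print(resp.message.content)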

Streaming Responses

To use the streaming feature with stream_complete:

llm = LiteLLM("gpt-3.5-turbo")
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")

Streaming Chat Example

To stream a chat response, reusing the messages from above:

llm = LiteLLM("gpt-3.5-turbo")
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
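
Each streamed item is a full chat response: delta holds the new increment, while (assuming the standard LlamaIndex streaming shape) message accumulates the text seen so far:

# After the loop, the last chunk's message contains the complete reply
print()
print(r.message.content)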

Asynchronous Example

For asynchronous calls, use:

llm = LiteLLM("gpt-3.5-turbo")
resp = await llm.acomplete("Paul Graham is ")
print(resp)
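
Top-level await works in a notebook; in a plain script, wrap the call in asyncio.run. A minimal sketch that also uses astream_complete, the async counterpart of stream_complete:

import asyncio

async def main():
    llm = LiteLLM("gpt-3.5-turbo")
    # Single awaited completion
    resp = await llm.acomplete("Paul Graham is ")
    print(resp)
    # Async streaming: awaiting astream_complete yields an async generator
    stream = await llm.astream_complete("Paul Graham is ")
    async for r in stream:
        print(r.delta, end="")

asyncio.run(main())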

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/litellm/
