
Project description

LlamaIndex LLM Integration: OpenLLM

Installation

To install the required packages, run:

%pip install llama-index-llms-openllm
%pip install llama-index

Setup

Initialize OpenLLM

First, import the necessary libraries and set up your OpenLLM instance. Replace my-model, https://hostname.com/v1, and na with your model name, API base URL, and API key, respectively:

from llama_index.llms.openllm import OpenLLM
from llama_index.core.llms import ChatMessage

llm = OpenLLM(
    model="my-model", api_base="https://hostname.com/v1", api_key="na"
)
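
If you are serving a model locally with OpenLLM, point api_base at the server's OpenAI-compatible endpoint instead. A minimal sketch, assuming a local server on port 3000 (the host, port, and model name below are placeholders, not defaults confirmed by this package):

# A sketch for a local OpenLLM deployment; adjust host, port, and model
# name to match your server (all values here are assumptions).
local_llm = OpenLLM(
    model="my-model",
    api_base="http://localhost:3000/v1",
    api_key="na",  # local servers often do not check the key
)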

Generate Completions

To generate a completion, use the complete method:

completion_response = llm.complete("To infinity, and")
print(completion_response)
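
If you are in an async context, the asynchronous counterpart acomplete (part of the standard LlamaIndex LLM interface) works the same way; a minimal sketch for a plain script:

import asyncio


async def main() -> None:
    # Asynchronous counterpart of complete().
    completion_response = await llm.acomplete("To infinity, and")
    print(completion_response)


asyncio.run(main())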

Stream Completions

You can also stream completions using the stream_complete method:

# stream_complete yields incremental responses; delta holds the new text.
for it in llm.stream_complete(
    "The meaning of time is", max_new_tokens=128
):
    print(it.delta, end="", flush=True)
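
An asynchronous variant is available as astream_complete, which is a coroutine that returns an async generator; a sketch of the usual await-then-iterate pattern:

import asyncio


async def stream() -> None:
    # Await the coroutine to obtain the async generator, then iterate.
    gen = await llm.astream_complete("The meaning of time is")
    async for it in gen:
        print(it.delta, end="", flush=True)


asyncio.run(stream())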

Chat Functionality

OpenLLM supports chat APIs, allowing you to handle conversation-like interactions. Here’s how to use it:

Synchronous Chat

You can perform a synchronous chat by constructing a list of ChatMessage instances:

from llama_index.core.llms import ChatMessage

chat_messages = [
    ChatMessage(role="system", content="You are acting as Ernest Hemmingway."),
    ChatMessage(role="user", content="Hi there!"),
    ChatMessage(role="assistant", content="Yes?"),
    ChatMessage(role="user", content="What is the meaning of life?"),
]

chat_response = llm.chat(chat_messages)
print(chat_response.message.content)
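
To stream the reply incrementally instead, the same message list can be passed to stream_chat (also part of the standard LlamaIndex LLM interface); a minimal sketch:

# stream_chat yields ChatResponse chunks; each chunk's delta holds the
# newly generated text.
for it in llm.stream_chat(chat_messages):
    print(it.delta, end="", flush=True)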

Asynchronous Chat

To stream a chat response asynchronously, use the astream_chat method:

gen = await llm.astream_chat(chat_messages)
async for it in gen:
    print(it.delta, flush=True, end="")
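
The snippet above assumes a notebook, where top-level await is available. In a plain script, wrap the loop in an event loop, for example:

import asyncio


async def chat() -> None:
    gen = await llm.astream_chat(chat_messages)
    async for it in gen:
        print(it.delta, end="", flush=True)


asyncio.run(chat())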

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/openllm/

Download files

Download the file for your platform.

Source Distribution

llama_index_llms_openllm-0.4.0.tar.gz (2.5 kB)

Built Distribution

llama_index_llms_openllm-0.4.0-py3-none-any.whl (2.7 kB)

File details

Details for the file llama_index_llms_openllm-0.4.0.tar.gz.

File metadata

  • Download URL: llama_index_llms_openllm-0.4.0.tar.gz
  • Upload date:
  • Size: 2.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.11.10 Darwin/22.3.0

File hashes

Hashes for llama_index_llms_openllm-0.4.0.tar.gz:

  • SHA256: 133955f3ed7c80c34766c39b9b084b04b18616c5c70e6d75ba604d668eadfc20
  • MD5: 17bf156a2dba73aca693083786cb928b
  • BLAKE2b-256: 94a52cc1ce926918f0a1f05c842fe2fd2d712a1c09f1e4860f9267d716b6afd7

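To verify a downloaded file against the published digests, you can compare its SHA256 hash with the value above; a minimal sketch using the standard library (the local path is an assumption):

import hashlib

# Hypothetical local path to the downloaded sdist; adjust as needed.
path = "llama_index_llms_openllm-0.4.0.tar.gz"

with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

# Should print True if the download matches the published SHA256 digest.
print(digest == "133955f3ed7c80c34766c39b9b084b04b18616c5c70e6d75ba604d668eadfc20")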

File details

Details for the file llama_index_llms_openllm-0.4.0-py3-none-any.whl.


File hashes

Hashes for llama_index_llms_openllm-0.4.0-py3-none-any.whl:

  • SHA256: 7f4772670c7950218e6341141937078e2dddf4fa321a302e3598467d75f621a2
  • MD5: d58a2ae8acafee9f0d9450b9591f7b2e
  • BLAKE2b-256: 09192f5bc90a41b23bb377d9836cf793c0fd7233bf59eb054688e87790e786a4

