

Project description

LlamaIndex LLM Integration: OpenLLM

Installation

To install the required packages, run:

%pip install llama-index-llms-openllm
%pip install llama-index

Setup

Initialize OpenLLM

First, import the necessary libraries and set up your OpenLLM instance. Replace my-model, https://hostname.com/v1, and na with your model name, API base URL, and API key, respectively:

from llama_index.llms.openllm import OpenLLM
from llama_index.core.llms import ChatMessage

llm = OpenLLM(
    model="my-model", api_base="https://hostname.com/v1", api_key="na"
)
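Rather than hard-coding credentials, you may prefer to read them from the environment. A minimal sketch, assuming hypothetical variable names (OPENLLM_API_BASE and OPENLLM_API_KEY are illustrative, not defined by the library):

```python
import os

# Hypothetical environment variable names -- pick whatever fits your deployment.
# Falls back to the placeholder values from the example above.
api_base = os.environ.get("OPENLLM_API_BASE", "https://hostname.com/v1")
api_key = os.environ.get("OPENLLM_API_KEY", "na")

# The client would then be constructed as:
# llm = OpenLLM(model="my-model", api_base=api_base, api_key=api_key)
```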

Generate Completions

To generate a completion, use the complete method:

completion_response = llm.complete("To infinity, and")
print(completion_response)

Stream Completions

You can also stream completions using the stream_complete method. It returns a plain (synchronous) generator of incremental responses; each response exposes the newly generated text in its delta attribute:

for it in llm.stream_complete(
    "The meaning of time is", max_new_tokens=128
):
    print(it.delta, end="", flush=True)
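Each streamed response carries both the newly generated piece (delta) and the text accumulated so far. A stdlib-only sketch of that consumption pattern, with a stub generator standing in for the real client (StubCompletionChunk and stub_stream_complete are illustrative, not part of the library):

```python
from dataclasses import dataclass


@dataclass
class StubCompletionChunk:
    text: str   # full text accumulated so far
    delta: str  # the newly generated piece

def stub_stream_complete(prompt: str):
    """Stand-in for llm.stream_complete: yields incremental chunks."""
    pieces = [" beyond", "!"]
    so_far = prompt
    for piece in pieces:
        so_far += piece
        yield StubCompletionChunk(text=so_far, delta=piece)

final = ""
for chunk in stub_stream_complete("To infinity, and"):
    print(chunk.delta, end="", flush=True)  # print only the new piece
    final = chunk.text                      # keep the accumulated text
print()
```

Printing chunk.delta avoids re-printing the accumulated prefix on every iteration, which is why the examples above print the delta rather than the whole response.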

Chat Functionality

OpenLLM supports chat APIs, allowing you to handle conversation-like interactions. Here’s how to use it:

Synchronous Chat

You can perform a synchronous chat by constructing a list of ChatMessage instances:

from llama_index.core.llms import ChatMessage

chat_messages = [
    ChatMessage(role="system", content="You are acting as Ernest Hemingway."),
    ChatMessage(role="user", content="Hi there!"),
    ChatMessage(role="assistant", content="Yes?"),
    ChatMessage(role="user", content="What is the meaning of life?"),
]

response = llm.chat(chat_messages)
print(response.message.content)
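OpenLLM serves an OpenAI-compatible endpoint (hence the /v1 in the API base), so the ChatMessage list above corresponds to a plain role/content payload. A stdlib-only sketch of that mapping (field names follow the OpenAI chat schema; the model name is a placeholder):

```python
import json

# What the ChatMessage list above looks like on the wire for an
# OpenAI-compatible server such as the one OpenLLM exposes.
payload = {
    "model": "my-model",
    "messages": [
        {"role": "system", "content": "You are acting as Ernest Hemingway."},
        {"role": "user", "content": "Hi there!"},
        {"role": "assistant", "content": "Yes?"},
        {"role": "user", "content": "What is the meaning of life?"},
    ],
}
print(json.dumps(payload, indent=2))
```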

Asynchronous Chat

To stream the reply asynchronously, await the astream_chat method from within a coroutine and iterate over the async generator it returns:

response_gen = await llm.astream_chat(chat_messages)
async for it in response_gen:
    print(it.delta, end="", flush=True)
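In a plain script (outside a notebook, which already runs an event loop), the async iteration must be wrapped in a coroutine and driven with asyncio.run. A stdlib-only sketch, with a stub async generator (stub_astream_chat and StubChatChunk are illustrative) in place of the client:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class StubChatChunk:
    delta: str  # newly streamed text

async def stub_astream_chat(messages):
    """Stand-in for the client's async chat stream."""
    for piece in ["It ", "is ", "what ", "you ", "make ", "of ", "it."]:
        await asyncio.sleep(0)  # yield control, as a network client would
        yield StubChatChunk(delta=piece)

async def main() -> str:
    reply = ""
    async for chunk in stub_astream_chat([{"role": "user", "content": "?"}]):
        print(chunk.delta, end="", flush=True)
        reply += chunk.delta
    print()
    return reply

reply = asyncio.run(main())
```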

LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/openllm/



Download files

Download the file for your platform.

Source Distribution

llama_index_llms_openllm-0.4.2.tar.gz (4.0 kB)

Uploaded Source

Built Distribution


llama_index_llms_openllm-0.4.2-py3-none-any.whl (3.5 kB)

Uploaded Python 3

File details

Details for the file llama_index_llms_openllm-0.4.2.tar.gz.


File hashes

Hashes for llama_index_llms_openllm-0.4.2.tar.gz:

SHA256: 9197d21a8135e244b02cee99298fbcda96d52a7e89496f21a48e0bc32145bf75
MD5: 0d08614047eb077d344bd0341d1bd45a
BLAKE2b-256: db63e3c8b13d45487fde5f154786507e1cf1e040ed2b15b13f8e6afa9cc1606e


File details

Details for the file llama_index_llms_openllm-0.4.2-py3-none-any.whl.


File hashes

Hashes for llama_index_llms_openllm-0.4.2-py3-none-any.whl:

SHA256: 9406f6c77bd2460ba5cd4d54c08b226b3486a52406da65d5eb8bde61495638a2
MD5: 6b1f02358578d6af7655ece3c38d6a5f
BLAKE2b-256: bb607145d7ce0e0556b0de6d5ecc4722af0110751b606fb530c3229a4bff9d9b

