
LlamaIndex Llms Integration: OPEA LLM

OPEA (Open Platform for Enterprise AI) is a platform for building, deploying, and scaling AI applications. As part of this platform, many core generative AI components are available for deployment as microservices, including LLMs.

Visit https://opea.dev for more information, and their GitHub for the source code of the OPEA components.

Installation

  1. Install the required Python package:
%pip install llama-index-llms-opea

Usage

from llama_index.core.llms import ChatMessage
from llama_index.llms.opea import OPEA

llm = OPEA(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    api_base="http://localhost:8080/v1",
    temperature=0.7,
    max_tokens=256,
    additional_kwargs={"top_p": 0.95},
)

# Complete a prompt
response = llm.complete("What is the capital of France?")
print(response)

# Stream a chat response
response = llm.stream_chat(
    [ChatMessage(role="user", content="What is the capital of France?")]
)
for chunk in response:
    print(chunk.delta, end="", flush=True)

The available methods are:

  • complete()
  • stream_complete()
  • chat()
  • stream_chat()

as well as async versions of each method, prefixed with a: acomplete(), astream_complete(), achat(), and astream_chat().

Download files

Download the file for your platform.

Source Distribution

llama_index_llms_opea-0.2.0.tar.gz (3.8 kB)

Built Distribution

llama_index_llms_opea-0.2.0-py3-none-any.whl (3.3 kB)

File details: llama_index_llms_opea-0.2.0.tar.gz

File hashes:

  • SHA256: 145c1e6d27316b007198fe20a7270bcbad1549a1201ce397229fbf5421b29c72
  • MD5: ed16ae76792c43575b5bcf786dfc497c
  • BLAKE2b-256: f728f765efb0a4b57ca7314d9e92bfd9000b0837e85aa0b886131741216e6308

File details: llama_index_llms_opea-0.2.0-py3-none-any.whl

File hashes:

  • SHA256: f2907817e0f204259cebcb257373e96f7fef3644eb3bb7341e0b7b8119ada7d0
  • MD5: 9eeebad1c704fdf2fb5d1440d392a88b
  • BLAKE2b-256: e47fa217637988d33d87b17cfeeda39e4c2dcd9d48455e2581a58c9b5e843fb8
