
llama-index-llms-helicone (OpenAI-compatible) integration

Project description

LlamaIndex LLMs Integration: Helicone

Installation

To install the required packages, run:

pip install llama-index-llms-helicone
pip install llama-index

Setup

Initialize Helicone

Set your Helicone API key via HELICONE_API_KEY (or pass directly). No provider API keys are needed when using the Helicone AI Gateway.

from llama_index.llms.helicone import Helicone
from llama_index.core.llms import ChatMessage

llm = Helicone(
    api_key="<helicone-api-key>",  # or set HELICONE_API_KEY env var
    model="gpt-4o-mini",  # works across providers via gateway
)
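If you rely on the environment variable instead of passing the key directly, it can be read with the standard library before constructing the client. A minimal sketch (the placeholder fallback is only so the snippet runs without a real key; `HELICONE_API_KEY` is the variable named above):

```python
import os

# Fall back to a placeholder so the example runs without a real key set.
api_key = os.environ.get("HELICONE_API_KEY", "<helicone-api-key>")
```

Pass `api_key` to `Helicone(...)` as in the snippet above, or omit `api_key` entirely and let the integration read the environment variable itself.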

Generate Chat Responses

You can generate a chat response by sending a list of ChatMessage instances:

message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)

Streaming Responses

To stream responses, use the stream_chat method:

message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")
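Each streamed chunk carries only the incremental text in its `delta` attribute, so the full response is the concatenation of all deltas. A minimal sketch of that accumulation, using stand-in chunk objects in place of a live `stream_chat` call:

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    """Stand-in for a streamed chat chunk; real chunks also expose .delta."""
    delta: str


def accumulate(chunks):
    """Join the incremental deltas into the complete response text."""
    return "".join(chunk.delta for chunk in chunks)


# Simulated stream of three chunks.
stream = [Chunk("Once "), Chunk("upon "), Chunk("a time")]
print(accumulate(stream))  # → Once upon a time
```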

Complete with Prompt

You can also generate completions with a prompt using the complete method:

resp = llm.complete("Tell me a joke")
print(resp)

Streaming Completion

To stream completions, use the stream_complete method:

resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
    print(r.delta, end="")

Model Configuration

To use a specific model, pass it during initialization. Because the gateway routes requests based on the model string, the same interface works across providers:

from llama_index.llms.helicone import Helicone

llm = Helicone(model="gpt-4o-mini")
resp = llm.complete("Write a story about a dragon who can code in Rust")
print(resp)

Notes

  • Default Helicone base URL is https://ai-gateway.helicone.ai/v1. Override with api_base or HELICONE_API_BASE if needed.
  • Only HELICONE_API_KEY is required. The gateway routes to the correct provider based on the model string.
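Because the gateway is OpenAI-compatible, the underlying HTTP request is a standard chat-completions call. A hedged sketch of what the integration sends on your behalf (the payload and headers are built but not sent here; `/chat/completions` is the usual OpenAI-style route):

```python
import json

api_key = "<helicone-api-key>"
base_url = "https://ai-gateway.helicone.ai/v1"  # default base URL from the notes above

# Standard OpenAI-style chat-completions payload; the gateway routes by model.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Tell me a joke"}],
}
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
url = f"{base_url}/chat/completions"
body = json.dumps(payload)
```

In normal use you never construct this request yourself; the `Helicone` class handles it, and overriding `api_base` simply changes `base_url` above.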

