LlamaIndex LLMs Integration: Helicone
An OpenAI-compatible LLM integration routed through the Helicone AI Gateway.
Installation
To install the required packages, run:
```shell
pip install llama-index-llms-helicone
pip install llama-index
```
Setup
Initialize Helicone
Set your Helicone API key via the HELICONE_API_KEY environment variable (or pass it directly to the constructor). No provider API keys are needed when using the Helicone AI Gateway.
```python
from llama_index.llms.helicone import Helicone
from llama_index.core.llms import ChatMessage

llm = Helicone(
    api_key="<helicone-api-key>",  # or set the HELICONE_API_KEY env var
    model="gpt-4o-mini",  # works across providers via the gateway
)
```
Generate Chat Responses
You can generate a chat response by sending a list of ChatMessage instances:
```python
message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)
```
Streaming Responses
To stream responses, use the stream_chat method:
```python
message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")
```
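Each streamed chunk exposes its incremental text as `r.delta`, so concatenating the deltas reproduces the full message. The sketch below illustrates this pattern without any network call: `FakeChunk` and `fake_stream` are stand-ins for the chunks `stream_chat` yields, not part of the package.

```python
from dataclasses import dataclass


@dataclass
class FakeChunk:
    """Stand-in for a LlamaIndex streaming chunk (exposes .delta only)."""
    delta: str


def fake_stream():
    # Simulates llm.stream_chat(...) yielding incremental text pieces.
    for piece in ["Once ", "upon ", "a ", "time."]:
        yield FakeChunk(delta=piece)


full = ""
for r in fake_stream():
    print(r.delta, end="")  # print as it arrives
    full += r.delta         # and accumulate the complete message

print()
# full == "Once upon a time."
```

The same accumulate-as-you-print loop works unchanged against the real `stream_chat` iterator.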
Complete with Prompt
You can also generate completions with a prompt using the complete method:
```python
resp = llm.complete("Tell me a joke")
print(resp)
```
Streaming Completion
To stream completions, use the stream_complete method:
```python
resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
    print(r.delta, end="")
```
Model Configuration
To use a specific model, specify it during initialization; the gateway resolves the model string to the appropriate provider. For example:
```python
from llama_index.llms.helicone import Helicone

llm = Helicone(model="gpt-4o-mini")
resp = llm.complete("Write a story about a dragon who can code in Rust")
print(resp)
```
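Conceptually, the gateway inspects the model string and forwards the request to the matching provider. The toy sketch below illustrates that idea only; it is not Helicone's actual routing logic, and the prefix table is an assumption for illustration.

```python
# Toy illustration of model-string routing (NOT Helicone's implementation).
# The prefix-to-provider mapping here is assumed for demonstration only.
PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "mistral": "mistral",
    "gemini-": "google",
}


def route_provider(model: str) -> str:
    """Return the provider a model string would be routed to."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    return "unknown"


print(route_provider("gpt-4o-mini"))        # -> openai
print(route_provider("claude-3-5-sonnet"))  # -> anthropic
```

From the caller's perspective, switching providers is just a matter of changing the `model` argument; no other code changes are needed.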
Notes
- The default Helicone base URL is https://ai-gateway.helicone.ai/v1. Override it with api_base or HELICONE_API_BASE if needed.
- Only HELICONE_API_KEY is required. The gateway routes to the correct provider based on the model string.
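The base-URL precedence described in the notes (explicit argument, then environment variable, then default) can be sketched as follows. `resolve_api_base` is a hypothetical helper written here for illustration, not a function exported by the package.

```python
import os

DEFAULT_HELICONE_BASE = "https://ai-gateway.helicone.ai/v1"


def resolve_api_base(api_base=None):
    """Hypothetical helper: explicit api_base wins, then the
    HELICONE_API_BASE environment variable, then the default gateway URL."""
    return api_base or os.environ.get("HELICONE_API_BASE") or DEFAULT_HELICONE_BASE


print(resolve_api_base())  # default URL unless HELICONE_API_BASE is set
```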