
LlamaIndex LLMs Integration: OpenRouter

Installation

To install the required packages, run:

pip install llama-index-llms-openrouter
pip install llama-index

Setup

Initialize OpenRouter

You need to either set the OPENROUTER_API_KEY environment variable or pass your API key directly to the class constructor. Replace <your-api-key> with your actual API key:

from llama_index.llms.openrouter import OpenRouter
from llama_index.core.llms import ChatMessage

llm = OpenRouter(
    api_key="<your-api-key>",
    max_tokens=256,
    context_window=4096,
    model="gryphe/mythomax-l2-13b",
)

Generate Chat Responses

You can generate a chat response by sending a list of ChatMessage instances:

message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)

Streaming Responses

To stream responses, use the stream_chat method:

message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")

Complete with Prompt

You can also generate completions with a prompt using the complete method:

resp = llm.complete("Tell me a joke")
print(resp)

Streaming Completion

To stream completions, use the stream_complete method:

resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
    print(r.delta, end="")

Model Configuration

To use a specific model, specify it during initialization. For example, to use Mistral's Mixtral model:

llm = OpenRouter(model="mistralai/mixtral-8x7b-instruct")
resp = llm.complete("Write a story about a dragon who can code in Rust")
print(resp)

Provider Routing (OpenRouter)

OpenRouter supports selecting which upstream providers to prioritize. You can pass these preferences via OpenRouter(..., order=[...], allow_fallbacks=...): order lists providers in priority order, and allow_fallbacks controls whether other providers may be used if none of the listed ones can serve the request.

from llama_index.llms.openrouter import OpenRouter

llm = OpenRouter(
    api_key="<your-api-key>",
    model="mistralai/mixtral-8x7b-instruct",
    order=["openai", "together"],
    allow_fallbacks=False,
)

resp = llm.complete("Hello")
print(resp)

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/openrouter/


