
LlamaIndex LLMs Integration: OpenRouter

Installation

To install the required packages, run:

%pip install llama-index-llms-openrouter
!pip install llama-index

Setup

Initialize OpenRouter

You need to set either the environment variable OPENROUTER_API_KEY or pass your API key directly in the class constructor. Replace <your-api-key> with your actual API key:

from llama_index.llms.openrouter import OpenRouter
from llama_index.core.llms import ChatMessage

llm = OpenRouter(
    api_key="<your-api-key>",
    max_tokens=256,
    context_window=4096,
    model="gryphe/mythomax-l2-13b",
)

Generate Chat Responses

You can generate a chat response by sending a list of ChatMessage instances:

message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)

Streaming Responses

To stream responses, use the stream_chat method:

message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")

Complete with Prompt

You can also generate completions with a prompt using the complete method:

resp = llm.complete("Tell me a joke")
print(resp)

Streaming Completion

To stream completions, use the stream_complete method:

resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
    print(r.delta, end="")

Model Configuration

To use a specific model, specify it during initialization. For example, to use Mistral's Mixtral model:

llm = OpenRouter(model="mistralai/mixtral-8x7b-instruct")
resp = llm.complete("Write a story about a dragon who can code in Rust")
print(resp)

Provider Routing (OpenRouter)

OpenRouter supports choosing which upstream providers to prioritize for a given request. You can pass these routing preferences via OpenRouter(..., order=[...], allow_fallbacks=...); with allow_fallbacks=False, requests are restricted to the providers you list.

from llama_index.llms.openrouter import OpenRouter

llm = OpenRouter(
    api_key="<your-api-key>",
    model="mistralai/mixtral-8x7b-instruct",
    order=["openai", "together"],
    allow_fallbacks=False,
)

resp = llm.complete("Hello")
print(resp)

LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/openrouter/
