
LlamaIndex Llms Integration: Neutrino

Installation

To install the required packages, run:

%pip install llama-index-llms-neutrino
%pip install llama-index

Setup

Create Neutrino API Key

You can create an API key by visiting platform.neutrinoapp.com. Once you have the API key, set it as an environment variable:

import os

os.environ["NEUTRINO_API_KEY"] = "<your-neutrino-api-key>"
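If you prefer to fail fast when the key is missing, rather than hitting an opaque error deep inside an API call, a small standard-library guard works. This is a sketch; the helper name `require_env` is illustrative, not part of the library:

```python
import os


def require_env(name):
    # Fail fast with a clear message if the variable is unset or empty.
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set the {name} environment variable first.")
    return value


os.environ["NEUTRINO_API_KEY"] = "<your-neutrino-api-key>"  # placeholder value
api_key = require_env("NEUTRINO_API_KEY")
```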

Using Your Router

A router is a collection of LLMs that you can route queries to. You can create a router in the Neutrino dashboard or use the default router, which includes all supported models. You can treat a router as a single LLM.

Initialize Neutrino

Create an instance of the Neutrino model:

from llama_index.llms.neutrino import Neutrino

llm = Neutrino(
    # api_key="<your-neutrino-api-key>",
    # router="<your-router-id>"  # Use 'default' for the default router
)

Generate Completions

To generate a text completion for a prompt, use the complete method:

response = llm.complete("In short, a Neutrino is")
print(f"Optimal model: {response.raw['model']}")
print(response)

Chat Responses

To send a chat message and receive a response, create a ChatMessage and use the chat method:

from llama_index.core.llms import ChatMessage

message = ChatMessage(
    role="user",
    content="Explain the difference between statically typed and dynamically typed languages.",
)

resp = llm.chat([message])
print(f"Optimal model: {resp.raw['model']}")
print(resp)
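Because `chat` takes a list of messages, a multi-turn conversation is handled by resending the accumulated history on every call. A minimal sketch of that bookkeeping with plain dicts (in real code each turn would become a `ChatMessage`, and the final two lines are the hypothetical call site):

```python
def add_turn(history, role, content):
    # Append one turn; the whole list is sent with every chat() call,
    # so the model sees the full conversation each time.
    history.append({"role": role, "content": content})
    return history


history = []
add_turn(history, "user", "What is a neutrino?")
add_turn(history, "assistant", "A nearly massless, electrically neutral particle.")
add_turn(history, "user", "Who first postulated it?")

# messages = [ChatMessage(**turn) for turn in history]
# resp = llm.chat(messages)
```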

Streaming Responses

To stream responses for a chat message, use the stream_chat method:

message = ChatMessage(
    role="user", content="What is the approximate population of Mexico?"
)

resp = llm.stream_chat([message])
for i, r in enumerate(resp):
    if i == 0:
        print(f"Optimal model: {r.raw['model']}")
    print(r.delta, end="")
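Each streamed chunk carries only the newly generated text in its `delta` attribute, so concatenating the deltas in order reconstructs the full response. The pattern can be sketched on its own, with a stub list standing in for the deltas produced by `stream_chat`:

```python
def accumulate(deltas):
    # Join incremental text chunks into the complete message.
    return "".join(deltas)


# Stub deltas standing in for the .delta attribute of streamed chunks.
stub_stream = ["Mexico has ", "roughly ", "130 million ", "inhabitants."]
print(accumulate(stub_stream))
# → Mexico has roughly 130 million inhabitants.
```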

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/neutrino/

