
LlamaIndex Llms Integration: Neutrino

Installation

To install the required packages, run:

%pip install llama-index-llms-neutrino
%pip install llama-index

Setup

Create Neutrino API Key

You can create an API key by visiting platform.neutrinoapp.com. Once you have the API key, set it as an environment variable:

import os

os.environ["NEUTRINO_API_KEY"] = "<your-neutrino-api-key>"

Using Your Router

A router is a collection of LLMs that you can route queries to. You can create a router in the Neutrino dashboard or use the default router, which includes all supported models. You can treat a router as a single LLM.
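Conceptually, a router inspects each query and dispatches it to one of the underlying models, so the collection behaves like a single LLM. The toy sketch below illustrates that idea only; it is not Neutrino's actual routing logic, and the model names and length-based rule are made up for illustration:

```python
# Toy illustration of the routing idea (NOT Neutrino's implementation).
# A "router" picks one model from a collection based on the query, so
# callers can treat the whole collection as a single LLM.

def route(query: str, models: dict) -> str:
    """Pick a model name for a query; here, a naive length-based rule."""
    return "large-model" if len(query) > 50 else "small-model"

# Hypothetical model registry; real routers hold actual LLM clients.
models = {"small-model": None, "large-model": None}

print(route("Hi", models))  # -> small-model
print(route("Explain statically vs dynamically typed languages in depth.", models))  # -> large-model
```

In practice Neutrino makes this choice server-side, which is why the examples below can read the selected model back from `response.raw["model"]`.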

Initialize Neutrino

Create an instance of the Neutrino model:

from llama_index.llms.neutrino import Neutrino

llm = Neutrino(
    # api_key="<your-neutrino-api-key>",
    # router="<your-router-id>"  # Use 'default' for the default router
)

Generate Completions

To generate a text completion for a prompt, use the complete method:

response = llm.complete("In short, a Neutrino is")
print(f"Optimal model: {response.raw['model']}")
print(response)

Chat Responses

To send a chat message and receive a response, create a ChatMessage and use the chat method:

from llama_index.core.llms import ChatMessage

message = ChatMessage(
    role="user",
    content="Explain the difference between statically typed and dynamically typed languages.",
)

resp = llm.chat([message])
print(f"Optimal model: {resp.raw['model']}")
print(resp)

Streaming Responses

To stream responses for a chat message, use the stream_chat method:

message = ChatMessage(
    role="user", content="What is the approximate population of Mexico?"
)

resp = llm.stream_chat([message])
for i, r in enumerate(resp):
    if i == 0:
        print(f"Optimal model: {r.raw['model']}")
    print(r.delta, end="")
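The loop above relies on each streamed chunk carrying a delta (the newly generated text), with the full response being the concatenation of all deltas. A self-contained sketch of that consumption pattern, using a hypothetical stand-in generator instead of a real API call:

```python
# Hypothetical sketch of the stream-consumption pattern: each chunk
# carries a `delta` (newly generated text); the complete response is
# the concatenation of all deltas, printed as they arrive.
from dataclasses import dataclass


@dataclass
class Chunk:
    delta: str


def fake_stream():
    """Stand-in for llm.stream_chat(...): yields chunks of text."""
    for piece in ["The population ", "of Mexico is ", "about 130 million."]:
        yield Chunk(delta=piece)


full = ""
for chunk in fake_stream():
    print(chunk.delta, end="")  # display incrementally
    full += chunk.delta         # accumulate the complete response
```

The same accumulate-as-you-print shape works for the real `stream_chat` output, since its chunks also expose a `delta` attribute.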

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/neutrino/

Download files

Source Distribution

llama_index_llms_neutrino-0.4.0.tar.gz (4.6 kB)

Built Distribution

llama_index_llms_neutrino-0.4.0-py3-none-any.whl (4.4 kB)

File details

Hashes for llama_index_llms_neutrino-0.4.0.tar.gz:

SHA256: 3a4e00fe1ffdbe5ffbb7935cde5a3cc46d8a2271d03088b587c720684cff01d0
MD5: 2389efa30c1eed960a2a853e0546202c
BLAKE2b-256: c77b3077a2569c268710f166a8b69901e581a7ed56d03b0373581e1c9e4d6492

Hashes for llama_index_llms_neutrino-0.4.0-py3-none-any.whl:

SHA256: 758f23c4a00aa77b2a31c5bb99330b3f573a970d447fddc5074e36a70b933e33
MD5: 4837bc5383c766a8116631febea72135
BLAKE2b-256: 33f1c30abeeb98e125f55468b24c84b0d1244ecda467f82ce24291e79e83a23a
