
LlamaIndex Llms Integration: Neutrino

Installation

To install the required packages, run:

%pip install llama-index-llms-neutrino
%pip install llama-index

Setup

Create Neutrino API Key

You can create an API key by visiting platform.neutrinoapp.com. Once you have the API key, set it as an environment variable:

import os

os.environ["NEUTRINO_API_KEY"] = "<your-neutrino-api-key>"
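If the variable is missing, the failure otherwise only surfaces at the first API call. A minimal sketch of a fail-fast check (the helper name `get_neutrino_api_key` is invented here for illustration and is not part of the package):

```python
import os


# Hypothetical helper (not part of llama-index-llms-neutrino): read the
# Neutrino key from the environment and raise immediately if it is unset.
def get_neutrino_api_key() -> str:
    key = os.environ.get("NEUTRINO_API_KEY")
    if not key:
        raise RuntimeError("NEUTRINO_API_KEY is not set")
    return key
```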

Using Your Router

A router is a collection of LLMs that you can route queries to. You can create a router in the Neutrino dashboard or use the default router, which includes all supported models. You can treat a router as a single LLM.

Initialize Neutrino

Create an instance of the Neutrino model:

from llama_index.llms.neutrino import Neutrino

llm = Neutrino(
    # api_key="<your-neutrino-api-key>",
    # router="<your-router-id>"  # Use 'default' for the default router
)

Generate Completions

To generate a text completion for a prompt, use the complete method:

response = llm.complete("In short, a Neutrino is")
print(f"Optimal model: {response.raw['model']}")
print(response)

Chat Responses

To send a chat message and receive a response, create a ChatMessage and use the chat method:

from llama_index.core.llms import ChatMessage

message = ChatMessage(
    role="user",
    content="Explain the difference between statically typed and dynamically typed languages.",
)

resp = llm.chat([message])
print(f"Optimal model: {resp.raw['model']}")
print(resp)
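Since `chat` accepts a list of messages, a multi-turn conversation is just a growing list that you re-send on each turn. A stdlib-only sketch of that bookkeeping, with plain dicts standing in for `ChatMessage` objects:

```python
# Stdlib-only sketch: plain dicts stand in for ChatMessage objects to show
# the role/content structure that accumulates across a conversation.
def add_turn(history, role, content):
    history.append({"role": role, "content": content})
    return history


history = []
add_turn(history, "user", "What is a neutrino?")
add_turn(history, "assistant", "A nearly massless subatomic particle.")
add_turn(history, "user", "Who first proposed it?")
# Each call to llm.chat(...) would receive the full `history` list.
```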

Streaming Responses

To stream responses for a chat message, use the stream_chat method:

message = ChatMessage(
    role="user", content="What is the approximate population of Mexico?"
)

resp = llm.stream_chat([message])
for i, r in enumerate(resp):
    if i == 0:
        print(f"Optimal model: {r.raw['model']}")
    print(r.delta, end="")
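Besides printing deltas as they arrive, you may want the full response text at the end. A minimal sketch of accumulating deltas, using a namedtuple as a stand-in for the chunk objects yielded by `stream_chat` (each of which carries a `delta` attribute):

```python
from collections import namedtuple

# Stand-in for the chunk objects yielded by stream_chat; only the `delta`
# attribute matters for accumulation.
Chunk = namedtuple("Chunk", ["delta"])


def collect_stream(chunks):
    # Join the incremental deltas into the complete response text.
    return "".join(chunk.delta for chunk in chunks)


fake_stream = [Chunk("Mexico has "), Chunk("around "), Chunk("130 million people.")]
full_text = collect_stream(fake_stream)
```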

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/neutrino/

Download files

Source distribution: llama_index_llms_neutrino-0.3.2.tar.gz (4.6 kB)

SHA256: f4c628add2f78b01b0c31d460d9236bbdd85ab323a7248d3edb14fe51539e1b7
MD5: 1728ad2f2a365595db17e2012074e1cd
BLAKE2b-256: 314c25e17a6de03c8add741634e1008a70a00286c87cd8a828a1523acd116c88

Built distribution: llama_index_llms_neutrino-0.3.2-py3-none-any.whl (4.4 kB)

SHA256: d2dc4bbedbec3a52a9a85c6457379bd833a645600c5a1925e5643ddd633c9de1
MD5: 2e352229752fb012c923fcfc8cf3cc65
BLAKE2b-256: add135002f61d6438e71048f334146e79193e74d441c2a2e77d34615dea149d4
