
LlamaIndex Llms Integration: Neutrino

Installation

To install the required packages, run:

%pip install llama-index-llms-neutrino
%pip install llama-index

Setup

Create Neutrino API Key

Create an API key at platform.neutrinoapp.com, then set it as an environment variable:

import os

os.environ["NEUTRINO_API_KEY"] = "<your-neutrino-api-key>"

Using Your Router

A router is a collection of LLMs that Neutrino routes your queries between. You can create a router in the Neutrino dashboard or use the default router, which includes all supported models. From LlamaIndex's perspective, a router behaves like a single LLM.

Initialize Neutrino

Create an instance of the Neutrino model:

from llama_index.llms.neutrino import Neutrino

llm = Neutrino(
    # api_key="<your-neutrino-api-key>",  # defaults to the NEUTRINO_API_KEY env var
    # router="<your-router-id>",  # use "default" for the default router
)

Generate Completions

To generate a text completion for a prompt, use the complete method:

response = llm.complete("In short, a Neutrino is")
print(f"Optimal model: {response.raw['model']}")
print(response)

Chat Responses

To send a chat message and receive a response, create a ChatMessage and use the chat method:

from llama_index.core.llms import ChatMessage

message = ChatMessage(
    role="user",
    content="Explain the difference between statically typed and dynamically typed languages.",
)

resp = llm.chat([message])
print(f"Optimal model: {resp.raw['model']}")
print(resp)

Streaming Responses

To stream responses for a chat message, use the stream_chat method:

message = ChatMessage(
    role="user", content="What is the approximate population of Mexico?"
)

resp = llm.stream_chat([message])
for i, r in enumerate(resp):
    if i == 0:
        print(f"Optimal model: {r.raw['model']}")
    print(r.delta, end="")

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/neutrino/
