llama-index llms nvidia api catalog integration
Project description
NVIDIA NIMs
The llama-index-llms-nvidia
package contains LlamaIndex integrations for building applications with models served by
NVIDIA NIM inference microservices. NIM supports models across domains such as chat, embedding, and re-ranking,
from the community as well as from NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA-accelerated
infrastructure and are packaged as NIMs: easy-to-use, prebuilt containers that deploy anywhere with a single
command on NVIDIA-accelerated infrastructure.
NVIDIA-hosted deployments of NIMs are available to test on the NVIDIA API catalog. After testing, NIMs can be exported from NVIDIA’s API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, giving enterprises ownership and full control of their IP and AI applications.
NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog. At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.
NVIDIA's LLM connector
This example goes over how to use LlamaIndex to interact with and develop LLM-powered systems using the publicly accessible AI Foundation endpoints.
With this connector, you'll be able to connect to and generate from compatible models available as hosted NVIDIA NIMs, such as:
- Google's gemma-7b
- Mistral AI's mistral-7b-instruct-v0.2
- And more!
Installation
pip install llama-index-llms-nvidia
Setup
To get started:
- Create a free account with NVIDIA, which hosts NVIDIA AI Foundation models.
- Click on your model of choice.
- Under Input select the Python tab, and click Get API Key. Then click Generate Key.
- Copy and save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints.
import getpass
import os
if os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    print("Valid NVIDIA_API_KEY already in environment. Delete to reset")
else:
    nvapi_key = getpass.getpass("NVAPI Key (starts with nvapi-): ")
    assert nvapi_key.startswith(
        "nvapi-"
    ), f"{nvapi_key[:5]}... is not a valid key"
    os.environ["NVIDIA_API_KEY"] = nvapi_key
Working with the NVIDIA API Catalog
from llama_index.llms.nvidia import NVIDIA
from llama_index.core.llms import ChatMessage, MessageRole
llm = NVIDIA()
messages = [
    ChatMessage(
        role=MessageRole.SYSTEM, content="You are a helpful assistant."
    ),
    ChatMessage(
        role=MessageRole.USER,
        content="What are the most popular house pets in North America?",
    ),
]
llm.chat(messages)
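Beyond basic chat, the connector exposes the standard LlamaIndex completion and streaming interfaces, and a specific catalog model can be selected by name. A minimal sketch; the model identifier string below is an assumption, so check the API catalog for the exact id:

from llama_index.llms.nvidia import NVIDIA

# Model id is an assumption; look up the exact identifier in the API catalog
llm = NVIDIA(model="mistralai/mistral-7b-instruct-v0.2")

# Standard LlamaIndex completion interface
print(llm.complete("Briefly explain what a NIM is."))

# Streaming completion; each chunk carries the newly generated text in .delta
for chunk in llm.stream_complete("Name three popular house pets."):
    print(chunk.delta, end="")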
Working with NVIDIA NIMs
When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.
from llama_index.llms.nvidia import NVIDIA
# connect to a chat NIM running at localhost:8080
llm = NVIDIA(base_url="http://localhost:8080/v1")
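Once the client points at a self-hosted endpoint, the same chat and completion calls work unchanged. A minimal sketch, assuming a chat NIM is already serving an OpenAI-compatible API at the localhost:8080 address used above:

from llama_index.llms.nvidia import NVIDIA
from llama_index.core.llms import ChatMessage, MessageRole

# Assumes a chat NIM is serving at this address; adjust host/port to your deployment.
# You may also pass model= to match the model your NIM is serving.
llm = NVIDIA(base_url="http://localhost:8080/v1")

response = llm.chat(
    [
        ChatMessage(
            role=MessageRole.USER,
            content="Summarize what a NIM is in one sentence.",
        )
    ]
)
print(response)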
File details
Details for the file llama_index_llms_nvidia-0.2.5.tar.gz.
File metadata
- Download URL: llama_index_llms_nvidia-0.2.5.tar.gz
- Upload date:
- Size: 6.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.10.12 Linux/6.8.0-1014-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | 6ba13d6cf32b01100063ef2903fc1e66022dd2e2e6ab896c85769a305e79c76c
MD5 | f3f1de578510be666fb68d2a09e1792b
BLAKE2b-256 | 8e7fec96071e5aa47d5452b0ae80453446e4ea340f324c5438898554c8e68910
File details
Details for the file llama_index_llms_nvidia-0.2.5-py3-none-any.whl.
File metadata
- Download URL: llama_index_llms_nvidia-0.2.5-py3-none-any.whl
- Upload date:
- Size: 7.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.10.12 Linux/6.8.0-1014-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | 110c716f627e3ab2b872aeff0f6efe004e76d7d22da1cfde56b39ee0087e1c82
MD5 | 36e8a97b5027c2c7cd718a93c3952623
BLAKE2b-256 | 67ba5412510a266f57279d09487f8ba562ba7bee039974be39d079c8df4a7456