LlamaIndex Multi_Modal Integration: Nvidia

This project integrates NVIDIA vision-language models (VLMs) into the LlamaIndex framework, enabling advanced multimodal capabilities for a range of AI applications.

Installation

pip install llama-index-multi-modal-llms-nvidia

Make sure to set your NVIDIA API key as an environment variable:

export NVIDIA_API_KEY=your_api_key_here
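Since a missing key only surfaces later as an authentication error, it can help to fail fast before constructing the model. A minimal sketch; the `require_api_key` helper is illustrative and not part of the package:

```python
import os


def require_api_key(var: str = "NVIDIA_API_KEY") -> str:
    """Return the NVIDIA API key from the environment, failing fast if unset."""
    key = os.environ.get(var)
    if not key:
        raise EnvironmentError(
            f"{var} is not set; export it before using NVIDIAMultiModal."
        )
    return key
```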

Usage

Here's a basic example of how to use the NVIDIA VLM integration:

from llama_index.multi_modal_llms.nvidia import NVIDIAMultiModal
from llama_index.core.schema import ImageDocument

# Initialize the model
model = NVIDIAMultiModal()

# Prepare your image and prompt
image_document = ImageDocument(image_path="path/to/your/image.jpg")
prompt = "Describe this image in detail."

# Generate a response
response = model.complete(prompt, image_documents=[image_document])

print(response.text)
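Besides `image_path`, the `ImageDocument` schema in LlamaIndex core also accepts the image content directly as base64 (the `image` field, with `image_mimetype`). A hedged sketch of preparing that payload, assuming those fields behave as in LlamaIndex core:

```python
import base64
from pathlib import Path


def image_to_base64(path: str) -> str:
    """Read an image file and return its base64-encoded contents as a string."""
    return base64.b64encode(Path(path).read_bytes()).decode("utf-8")


# Hypothetical usage with the integration (requires llama-index installed):
# from llama_index.core.schema import ImageDocument
# doc = ImageDocument(
#     image=image_to_base64("path/to/your/image.jpg"),
#     image_mimetype="image/jpeg",
# )
# response = model.complete("Describe this image.", image_documents=[doc])
```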

Streaming

from llama_index.multi_modal_llms.nvidia import NVIDIAMultiModal
from llama_index.core.schema import ImageDocument

# Initialize the model
model = NVIDIAMultiModal()

# Prepare your image and prompt
image_document = ImageDocument(image_path="downloaded_image.jpg")
prompt = "Describe this image in detail."

# stream_complete returns a generator of partial responses
response = model.stream_complete(prompt, image_documents=[image_document])

for r in response:
    print(r.text, end="")

Passing an image as an NVCF asset

If your image is large, or you will pass it multiple times in a chat conversation, you may upload it once and reference it by asset ID in subsequent requests.

See https://docs.nvidia.com/cloud-functions/user-guide/latest/cloud-function/assets.html for details on how to upload the image.

import os

import requests

from llama_index.multi_modal_llms.nvidia import NVIDIAMultiModal
from llama_index.core.schema import ImageDocument

model = NVIDIAMultiModal()

content_type = "image/png"
description = "example-image-from-lc-nv-ai-e-notebook"

# Read the image bytes to upload
with open("path/to/your/image.png", "rb") as f:
    image_bytes = f.read()

# Step 1: create the asset and obtain a pre-signed upload URL
create_response = requests.post(
    "https://api.nvcf.nvidia.com/v2/nvcf/assets",
    headers={
        "Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}",
        "accept": "application/json",
        "Content-Type": "application/json",
    },
    json={"contentType": content_type, "description": description},
)
create_response.raise_for_status()

# Step 2: upload the image bytes to the pre-signed URL
upload_response = requests.put(
    create_response.json()["uploadUrl"],
    headers={
        "Content-Type": content_type,
        "x-amz-meta-nvcf-asset-description": description,
    },
    data=image_bytes,
)
upload_response.raise_for_status()

asset_id = create_response.json()["assetId"]

# Reference the uploaded asset by ID instead of sending the image inline
response = model.complete(
    prompt="Describe the image",
    image_documents=[
        ImageDocument(metadata={"asset_id": asset_id}, mimetype="png")
    ],
)
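Once an uploaded asset is no longer needed, it can be removed via the NVCF assets endpoint (`DELETE /v2/nvcf/assets/{assetId}`, documented at the NVCF link above). A small sketch; the helper functions here are illustrative, not part of this package:

```python
import os

import requests

NVCF_ASSETS_URL = "https://api.nvcf.nvidia.com/v2/nvcf/assets"


def asset_delete_request(asset_id: str, api_key: str) -> tuple[str, dict]:
    """Build the URL and headers for deleting an NVCF asset."""
    url = f"{NVCF_ASSETS_URL}/{asset_id}"
    headers = {"Authorization": f"Bearer {api_key}"}
    return url, headers


def delete_nvcf_asset(asset_id: str) -> None:
    """Delete an uploaded NVCF asset once it is no longer referenced."""
    url, headers = asset_delete_request(asset_id, os.environ["NVIDIA_API_KEY"])
    requests.delete(url, headers=headers).raise_for_status()
```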

