LlamaIndex integration for DigitalOcean Gradient AI

llama-index-llms-digitalocean-gradientai

LlamaIndex integration for DigitalOcean Gradient AI LLM.

Installation

pip install llama-index-llms-digitalocean-gradientai

This package uses the official Gradient SDK (PyPI package: gradient) under the hood; it is installed automatically as a dependency.

Or install from source:

git clone https://github.com/yourusername/llama-index-llms-digitalocean-gradientai
cd llama-index-llms-digitalocean-gradientai
pip install -e .

Usage

Basic Usage

from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

llm = DigitalOceanGradientAILLM(
    model="meta-llama-3-70b-instruct",
    api_key="your-api-key",
    workspace_id="your-workspace-id"
)

response = llm.complete("What is DigitalOcean Gradient?")
print(response)

Using Environment Variables

import os
from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

os.environ["GRADIENT_API_KEY"] = "your-api-key"
os.environ["GRADIENT_WORKSPACE_ID"] = "your-workspace-id"

llm = DigitalOceanGradientAILLM(model="meta-llama-3-70b-instruct")

You can also use GRADIENT_MODEL_ACCESS_KEY (recommended) in place of GRADIENT_API_KEY.
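A minimal sketch of the model-access-key variant (placeholder values; the LLM construction is shown commented out because it requires live credentials):

```python
import os

# GRADIENT_MODEL_ACCESS_KEY takes the place of GRADIENT_API_KEY;
# both are read from the environment at construction time.
os.environ["GRADIENT_MODEL_ACCESS_KEY"] = "your-model-access-key"
os.environ["GRADIENT_WORKSPACE_ID"] = "your-workspace-id"

# With the variables set, no explicit credential arguments are needed:
# llm = DigitalOceanGradientAILLM(model="meta-llama-3-70b-instruct")
```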

Chat Interface

from llama_index.core.llms import ChatMessage
from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

llm = DigitalOceanGradientAILLM(
    model="meta-llama-3-70b-instruct",
    api_key="your-api-key",
    workspace_id="your-workspace-id"
)

messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="What is Gradient?")
]

response = llm.chat(messages)
print(response.message.content)

Streaming

from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

llm = DigitalOceanGradientAILLM(
    model="meta-llama-3-70b-instruct",
    api_key="your-api-key",
    workspace_id="your-workspace-id"
)

response_gen = llm.stream_complete("Tell me a story about AI:")
for delta in response_gen:
    print(delta.delta, end="", flush=True)

Async Usage

import asyncio
from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

async def main():
    llm = DigitalOceanGradientAILLM(
        model="meta-llama-3-70b-instruct",
        api_key="your-api-key",
        workspace_id="your-workspace-id"
    )
    response = await llm.acomplete("What is Gradient?")
    print(response)

asyncio.run(main())

With RAG Pipeline

from llama_index.core import VectorStoreIndex, Document
from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

llm = DigitalOceanGradientAILLM(
    model="meta-llama-3-70b-instruct",
    api_key="your-api-key",
    workspace_id="your-workspace-id"
)

documents = [Document(text="DigitalOcean Gradient is a managed LLM API service...")]
# Note: building the index also requires an embedding model; LlamaIndex
# falls back to its default embeddings unless one is configured.
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(llm=llm)
response = query_engine.query("What is Gradient?")
print(response)

Package Structure

llama-index-llms-digitalocean-gradientai/
├── llama_index/
│   └── llms/
│       └── digitalocean/
│           └── gradientai/
│               ├── __init__.py
│               └── base.py
├── setup.py
├── pyproject.toml
├── README.md
└── requirements.txt

License

MIT


