# llama-index-llms-digitalocean-gradientai

LlamaIndex integration for DigitalOcean Gradient AI with full support for function/tool calling.
## Installation

```bash
pip install llama-index-llms-digitalocean-gradientai
```

This package uses the official Gradient SDK (PyPI package: `gradient`) under the hood; it is installed automatically as a dependency.
## Usage

### Basic Usage

```python
from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

llm = DigitalOceanGradientAILLM(
    model="openai-gpt-oss-120b",
    model_access_key="your-api-key",
    workspace_id="your-workspace-id",  # optional
)

response = llm.complete("What is DigitalOcean Gradient?")
print(response.text)
```
### Chat Interface

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

llm = DigitalOceanGradientAILLM(
    model="openai-gpt-oss-120b",
    model_access_key="your-api-key",
)

messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="What is the capital of France?"),
]

response = llm.chat(messages)
print(response.message.content)
```
### Streaming

```python
from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

llm = DigitalOceanGradientAILLM(
    model="meta-llama-3-70b-instruct",
    model_access_key="your-api-key",
)

# Streaming completion: each chunk.delta carries only the newly generated text
for chunk in llm.stream_complete("Tell me a story about AI:"):
    print(chunk.delta, end="", flush=True)
```
### Async Usage

```python
import asyncio

from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

async def main():
    llm = DigitalOceanGradientAILLM(
        model="meta-llama-3-70b-instruct",
        model_access_key="your-api-key",
    )
    response = await llm.acomplete("What is Gradient?")
    print(response.text)

asyncio.run(main())
```
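Because `acomplete` is a coroutine, several prompts can be fanned out concurrently with `asyncio.gather`. A minimal sketch; the `_EchoLLM` stub below is a hypothetical stand-in for a configured `DigitalOceanGradientAILLM`, so the pattern can be shown without an API key:

```python
import asyncio

async def complete_all(llm, prompts):
    """Send every prompt concurrently; responses come back in prompt order."""
    return await asyncio.gather(*(llm.acomplete(p) for p in prompts))

class _EchoLLM:
    """Stand-in for DigitalOceanGradientAILLM, used here only to demo the pattern."""
    async def acomplete(self, prompt):
        await asyncio.sleep(0)  # simulate a network round trip
        return f"echo: {prompt}"

results = asyncio.run(complete_all(_EchoLLM(), ["first", "second"]))
print(results)  # ['echo: first', 'echo: second']
```

With a real LLM instance, the same helper overlaps the network latency of all requests instead of paying it once per prompt.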
## Function/Tool Calling

This integration supports OpenAI-compatible function calling, enabling the LLM to invoke tools based on user queries.
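"OpenAI-compatible" means each tool is advertised to the model as a function name, description, and JSON Schema for its arguments. A sketch of the shape such a payload takes for an `add(a, b)` tool (field names follow the OpenAI tools format; the exact payload this integration sends over the wire is an assumption):

```python
# OpenAI-style tool specification: a function name, a description the model
# reads, and a JSON Schema describing the accepted arguments.
add_tool_spec = {
    "type": "function",
    "function": {
        "name": "add",
        "description": "Add two numbers together.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "integer"},
                "b": {"type": "integer"},
            },
            "required": ["a", "b"],
        },
    },
}
print(add_tool_spec["function"]["name"])  # add
```

`FunctionTool.from_defaults` derives this kind of schema from the function's type hints and docstring, so you rarely write it by hand.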
### Using chat_with_tools

```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

# Define tools
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiply two numbers together."""
    return a * b

# Create tool instances
add_tool = FunctionTool.from_defaults(fn=add)
multiply_tool = FunctionTool.from_defaults(fn=multiply)
tools = [add_tool, multiply_tool]

# Initialize LLM
llm = DigitalOceanGradientAILLM(
    model="openai-gpt-oss-120b",
    model_access_key="your-api-key",
)

# Chat with tools
response = llm.chat_with_tools(
    tools=tools,
    user_msg="What is 5 multiplied by 8?",
)
print(response.message)

# Extract tool calls from the response
tool_calls = llm.get_tool_calls_from_response(
    response,
    error_on_no_tool_call=False,
)
for tool_call in tool_calls:
    print(f"Tool: {tool_call.tool_name}, Args: {tool_call.tool_kwargs}")
```
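Each extracted tool call carries a `tool_name` and `tool_kwargs`, so executing them yourself is a name-based dispatch. A minimal sketch, using a simulated tool call (`SimpleNamespace`) in place of a real model response so it runs without an API key:

```python
from types import SimpleNamespace

def run_tool_calls(tool_calls, functions):
    """Dispatch each extracted tool call to the matching Python function."""
    results = []
    for call in tool_calls:
        fn = functions[call.tool_name]          # look up the tool by name
        results.append(fn(**call.tool_kwargs))  # invoke with the model's arguments
    return results

def multiply(a: int, b: int) -> int:
    """Multiply two numbers together."""
    return a * b

# Simulated tool call, shaped like the objects get_tool_calls_from_response returns
calls = [SimpleNamespace(tool_name="multiply", tool_kwargs={"a": 5, "b": 8})]
print(run_tool_calls(calls, {"multiply": multiply}))  # [40]
```

In a real loop you would feed each result back to the model as a tool message; `predict_and_call` (below in the original flow) automates exactly this.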
### Using predict_and_call

For automatic tool execution and result handling:

```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

add_tool = FunctionTool.from_defaults(fn=add)

llm = DigitalOceanGradientAILLM(
    model="openai-gpt-oss-120b",
    model_access_key="your-api-key",
)

# Automatically calls the tool and returns the result
response = llm.predict_and_call(
    tools=[add_tool],
    user_msg="What is 10 plus 15?",
)
print(response)  # Output: 25
```
### Async Function Calling

```python
import asyncio

from llama_index.core.tools import FunctionTool
from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

def multiply(a: int, b: int) -> int:
    """Multiply two numbers together."""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

async def main():
    llm = DigitalOceanGradientAILLM(
        model="openai-gpt-oss-120b",
        model_access_key="your-api-key",
    )
    response = await llm.achat_with_tools(
        tools=[multiply_tool],
        user_msg="What is 7 times 9?",
    )
    print(response.message)

asyncio.run(main())
```
## With RAG Pipeline

```python
from llama_index.core import Document, VectorStoreIndex
from llama_index.llms.digitalocean.gradientai import DigitalOceanGradientAILLM

llm = DigitalOceanGradientAILLM(
    model="meta-llama-3-70b-instruct",
    model_access_key="your-api-key",
)

documents = [Document(text="DigitalOcean Gradient is a managed LLM API service...")]
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine(llm=llm)
response = query_engine.query("What is Gradient?")
print(response)
```
## Package Structure

```
llama-index-llms-digitalocean-gradientai/
├── llama_index/
│   └── llms/
│       └── digitalocean/
│           └── gradientai/
│               ├── __init__.py
│               └── base.py
├── tests/
│   └── test_gradient_llm.py
├── pyproject.toml
├── README.md
└── LICENSE
```
## License

MIT