DigitalOcean Gradient™ Agent Development Kit (ADK)
The DigitalOcean Gradient™ Agent Development Kit (ADK) is a Python toolkit designed to help you build, deploy, and operate production-grade AI agents with zero infrastructure overhead.
Building AI agents is challenging enough without worrying about observability, evaluations, and deployment infrastructure. We built the Gradient™ ADK with one simple aim: bring your agent code, and we handle the rest—bringing the simplicity you love about DigitalOcean to AI agents.
Why Use DigitalOcean Gradient™ ADK?
- Framework Agnostic: Bring your existing agent code—whether built with LangGraph, LangChain, CrewAI, PydanticAI, or any Python framework. No rewrites, no lock-in.
- Pay Per Use: Only pay for what you use with serverless agent hosting. Currently provided at no compute cost during Public Preview!
- Any LLM Provider: Use OpenAI, Anthropic, Google, or DigitalOcean's own Gradient™ AI serverless inference—your choice, your keys.
- Built-in Observability: Get automatic traces, evaluations, and insights out of the box. No OpenTelemetry setup, no third-party integrations required.
- Production Ready from Day One: Deploy with a single command to DigitalOcean's managed infrastructure. Focus on building your agent, not managing servers.
- Seamless DigitalOcean Integration: Connect effortlessly to the DigitalOcean ecosystem—Knowledge Bases for RAG, Serverless Inference for LLMs, built-in Evaluations, and more.
Features
🛠️ CLI (Command Line Interface)
- Local Development: Run and test your agents locally with hot-reload support
- Seamless Deployment: Deploy agents to DigitalOcean with a single command
- Evaluation Framework: Run comprehensive evaluations with custom metrics and datasets
- Observability: View traces and runtime logs directly from the CLI
🚀 Runtime Environment
- Framework Agnostic: Works with any Python framework for building AI agents
- Automatic LangGraph Integration: Built-in trace capture for LangGraph nodes and state transitions
- Custom Decorators: Capture traces from any framework using the `@trace_llm`, `@trace_tool`, and `@trace_retriever` decorators
- Streaming Support: Full support for streaming responses with trace capture
- Production Ready: Designed for seamless deployment to DigitalOcean infrastructure
Installation
pip install gradient-adk
Quick Start
🎥 Watch the Getting Started Video for a complete walkthrough
1. Initialize a New Agent Project
gradient agent init
This creates a new agent project with:
- `main.py` - Agent entrypoint with example code
- `agents/` - Directory for agent implementations
- `tools/` - Directory for custom tools
- `config.yaml` - Agent configuration
- `requirements.txt` - Python dependencies
2. Run Locally
gradient agent run
Your agent will be available at http://localhost:8080 with automatic trace capture enabled.
3. Deploy to DigitalOcean
export DIGITALOCEAN_API_TOKEN=your_token_here
gradient agent deploy
4. Evaluate Your Agent
gradient agent evaluate \
--test-case-name "my-evaluation" \
--dataset-file evaluation_dataset.csv \
--categories correctness,context_quality
Usage Examples
Using LangGraph (Automatic Trace Capture)
LangGraph agents automatically capture traces for all nodes and state transitions:
from gradient_adk import entrypoint, RequestContext
from langgraph.graph import StateGraph
from typing import TypedDict

class State(TypedDict):
    input: str
    output: str

async def llm_call(state: State) -> State:
    # This node execution is automatically traced
    response = await llm.ainvoke(state["input"])
    state["output"] = response
    return state

@entrypoint
async def main(input: dict, context: RequestContext):
    graph = StateGraph(State)
    graph.add_node("llm_call", llm_call)
    graph.set_entry_point("llm_call")
    graph = graph.compile()
    result = await graph.ainvoke({"input": input.get("query")})
    return result["output"]
Using Custom Decorators (Any Framework)
For frameworks beyond LangGraph, use trace decorators to capture custom spans:
from gradient_adk import entrypoint, trace_llm, trace_tool, trace_retriever, RequestContext

@trace_retriever("vector_search")
async def search_knowledge_base(query: str):
    # Retriever spans capture search/lookup operations
    results = await vector_db.search(query)
    return results

@trace_llm("generate_response")
async def generate_response(prompt: str):
    # LLM spans capture model calls with token usage
    response = await llm.generate(prompt)
    return response

@trace_tool("calculate")
async def calculate(x: int, y: int):
    # Tool spans capture function execution
    return x + y

@entrypoint
async def main(input: dict, context: RequestContext):
    docs = await search_knowledge_base(input["query"])
    result = await calculate(5, 10)
    response = await generate_response(f"Context: {docs}")
    return response
Streaming Responses
The runtime supports streaming responses with automatic trace capture:
from gradient_adk import entrypoint, RequestContext

@entrypoint
async def main(input: dict, context: RequestContext):
    # Stream text chunks by returning an async generator
    async def generate_chunks():
        async for chunk in llm.stream(input["query"]):
            yield chunk

    return generate_chunks()
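Consuming a streamed response on the client side is ordinary async iteration. A minimal, framework-free sketch of the pattern — the `fake_stream` generator below stands in for an LLM client's `stream` call and is not part of the ADK:

```python
import asyncio

async def fake_stream(query: str):
    # Stand-in for llm.stream(...): yields a few text chunks.
    for word in ["Hello", ", ", query, "!"]:
        yield word

async def generate_chunks(query: str):
    # Same shape as the generator an entrypoint would return.
    async for chunk in fake_stream(query):
        yield chunk

async def consume() -> str:
    parts = []
    async for chunk in generate_chunks("world"):
        parts.append(chunk)
    return "".join(parts)

result = asyncio.run(consume())
print(result)  # Hello, world!
```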
CLI Commands
Agent Management
# Initialize new project
gradient agent init
# Configure existing project
gradient agent configure
# Run locally with hot-reload
gradient agent run --dev
# Deploy to DigitalOcean
gradient agent deploy
# View runtime logs
gradient agent logs
# Open traces UI
gradient agent traces
Evaluation
You can evaluate your deployed agent with a number of useful evaluation metrics. See the DigitalOcean docs for details on what belongs in a dataset.
# Run evaluation (interactive)
gradient agent evaluate
# Run evaluation (non-interactive)
gradient agent evaluate \
--test-case-name "my-test" \
--dataset-file data.csv \
--categories correctness,safety_and_security \
--star-metric-name "Correctness (general hallucinations)" \
--success-threshold 80.0
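The exact dataset schema is defined in the DigitalOcean docs linked above; as a hedged illustration only, an evaluation dataset is typically a CSV pairing inputs with reference outputs. The column names below (`input`, `expected_output`) are assumptions for the sketch, not the documented schema:

```python
import csv

# Hypothetical column names -- check the DigitalOcean docs for the real schema.
rows = [
    {"input": "What is a Droplet?",
     "expected_output": "A DigitalOcean virtual machine."},
    {"input": "What does the ADK do?",
     "expected_output": "It helps build, deploy, and observe AI agents."},
]

with open("evaluation_dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["input", "expected_output"])
    writer.writeheader()
    writer.writerows(rows)

# Read it back to confirm the file round-trips cleanly.
with open("evaluation_dataset.csv") as f:
    loaded = list(csv.DictReader(f))
print(len(loaded))  # 2
```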
Tracing
The ADK provides comprehensive tracing capabilities to capture and analyze your agent's execution. You can use decorators to wrap functions, or call the programmatic span functions for manual span creation.
What Gets Traced Automatically
- LangGraph Nodes: All node executions, state transitions, and edges (including LLM calls, tool calls, and DigitalOcean Knowledge Base calls)
- HTTP Requests: Request/response payloads for LLM API calls
- Errors: Full exception details and stack traces
- Streaming Responses: Individual chunks and aggregated outputs
Tracing Decorators
Use decorators to automatically trace function executions:
from gradient_adk import entrypoint, trace_llm, trace_tool, trace_retriever, RequestContext

@trace_llm("model_call")
async def call_model(prompt: str):
    """LLM spans capture model calls with token usage."""
    response = await llm.generate(prompt)
    return response

@trace_tool("calculator")
async def calculate(x: int, y: int):
    """Tool spans capture function/tool execution."""
    return x + y

@trace_retriever("vector_search")
async def search_docs(query: str):
    """Retriever spans capture search/lookup operations."""
    results = await vector_db.search(query)
    return results

@entrypoint
async def main(input: dict, context: RequestContext):
    docs = await search_docs(input["query"])
    result = await calculate(5, 10)
    response = await call_model(f"Context: {docs}")
    return response
Programmatic Span Functions
For more control over span creation, use the programmatic functions. These are useful when you can't use decorators or need to add spans for code you don't control:
from gradient_adk import entrypoint, add_llm_span, add_tool_span, add_agent_span, RequestContext

@entrypoint
async def main(input: dict, context: RequestContext):
    # Add an LLM span with detailed metadata
    response = await external_llm_call(input["query"])
    add_llm_span(
        name="external_llm_call",
        input={"messages": [{"role": "user", "content": input["query"]}]},
        output={"response": response},
        model="gpt-4",
        num_input_tokens=100,
        num_output_tokens=50,
        temperature=0.7,
    )

    # Add a tool span
    tool_result = await run_tool(input["data"])
    add_tool_span(
        name="data_processor",
        input={"data": input["data"]},
        output={"result": tool_result},
        tool_call_id="call_abc123",
        metadata={"tool_version": "1.0"},
    )

    # Add an agent span for sub-agent calls
    agent_result = await call_sub_agent(input["task"])
    add_agent_span(
        name="research_agent",
        input={"task": input["task"]},
        output={"result": agent_result},
        metadata={"agent_type": "research"},
        tags=["sub-agent", "research"],
    )

    return {"response": response, "tool_result": tool_result, "agent_result": agent_result}
Available Span Functions
| Function | Description | Key Optional Fields |
|---|---|---|
| `add_llm_span()` | Record LLM/model calls | `model`, `temperature`, `num_input_tokens`, `num_output_tokens`, `total_tokens`, `tools`, `time_to_first_token_ns` |
| `add_tool_span()` | Record tool/function executions | `tool_call_id` |
| `add_agent_span()` | Record agent/sub-agent executions | — |
Common optional fields for all span functions: duration_ns, metadata, tags, status_code
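The `duration_ns` and `time_to_first_token_ns` fields expect nanosecond counts. A minimal sketch of measuring one with the standard library before forwarding it to a span function — the actual `add_llm_span(...)` call is elided so the sketch stays self-contained:

```python
import time

def timed_call(fn, *args):
    """Return (result, duration_ns) for a synchronous call."""
    start = time.perf_counter_ns()
    result = fn(*args)
    duration_ns = time.perf_counter_ns() - start
    return result, duration_ns

result, duration_ns = timed_call(sum, [[1, 2, 3]][0:1][0] if False else [1, 2, 3])
# duration_ns could then be forwarded, e.g. add_llm_span(..., duration_ns=duration_ns)
print(result)  # 6
```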
Viewing Traces
Traces are:
- Automatically sent to DigitalOcean's Gradient Platform
- Available in real-time through the web console
- Accessible via the `gradient agent traces` command
Environment Variables
# Required for deployment and evaluations
export DIGITALOCEAN_API_TOKEN=your_do_api_token
# Required for Gradient serverless inference (if using)
export GRADIENT_MODEL_ACCESS_KEY=your_gradient_key
# Optional: Enable verbose trace logging
export GRADIENT_VERBOSE=1
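Since a missing token typically only surfaces at deploy or evaluation time, a small fail-fast check at startup can save a round trip. A sketch using only the standard library — the helper name and the idea of checking at startup are this sketch's own, not part of the ADK:

```python
import os

REQUIRED = ["DIGITALOCEAN_API_TOKEN"]

def check_env(required=REQUIRED, env=os.environ):
    """Raise early if any required environment variable is unset or empty."""
    missing = [name for name in required if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")

# Example: passing an explicit mapping instead of the real environment
check_env(env={"DIGITALOCEAN_API_TOKEN": "dop_v1_example"})
print("ok")
```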
Project Structure
my-agent/
├── main.py # Agent entrypoint with @entrypoint decorator
├── .gradient/agent.yml # Agent configuration (auto-generated)
├── requirements.txt # Python dependencies
├── .env # Environment variables (not committed)
├── agents/ # Agent implementations
│ └── my_agent.py
└── tools/ # Custom tools
└── my_tool.py
Framework Compatibility
The Gradient ADK is designed to work with any Python-based AI agent framework:
- ✅ LangGraph - Automatic trace capture (zero configuration)
- ✅ LangChain - Use trace decorators (
@trace_llm,@trace_tool,@trace_retriever) for custom spans - ✅ CrewAI - Use trace decorators for agent and task execution
- ✅ Custom Frameworks - Use trace decorators for any function
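For custom frameworks, the trace decorators simply wrap plain async functions. As an illustration of the wrapping pattern only — this is not the ADK's implementation, and `captured` stands in for its trace sink — a generic decorator that records a span name and duration might look like:

```python
import asyncio
import functools
import time

captured = []  # Stand-in for the ADK's trace sink.

def trace_span(name):
    """Illustrative only: wrap an async function and record a span record."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            start = time.perf_counter_ns()
            result = await fn(*args, **kwargs)
            captured.append({
                "name": name,
                "duration_ns": time.perf_counter_ns() - start,
            })
            return result
        return wrapper
    return decorator

@trace_span("calculate")
async def calculate(x, y):
    return x + y

result = asyncio.run(calculate(2, 3))
print(result, captured[0]["name"])  # 5 calculate
```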
Support
- Templates/Examples: https://github.com/digitalocean/gradient-adk-templates
- Gradient™ AI Platform: https://www.digitalocean.com/products/gradient/platform
- Documentation: https://docs.digitalocean.com/products/gradient-ai-platform/how-to/build-agents-using-adk/
- API Reference: https://docs.digitalocean.com/reference/api
- Community: DigitalOcean Community Forums
License
Licensed under the Apache License 2.0. See LICENSE