NVIDIA Dynamo
| Guides | Architecture and Features | APIs | SDK |
NVIDIA Dynamo is a high-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. Dynamo is inference-engine agnostic (supporting TRT-LLM, vLLM, SGLang, and others) and captures LLM-specific capabilities such as:
- Disaggregated prefill & decode inference – Maximizes GPU throughput and lets you trade off throughput against latency.
- Dynamic GPU scheduling – Optimizes performance based on fluctuating demand.
- LLM-aware request routing – Eliminates unnecessary KV cache re-computation.
- Accelerated data transfer – Reduces inference response time using NIXL.
- KV cache offloading – Leverages multiple memory hierarchies for higher system throughput.
Built in Rust for performance and in Python for extensibility, Dynamo is fully open source and driven by a transparent, OSS-first development approach.
Installation
The following examples require a few system-level packages.
```bash
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -yq python3-dev libucx0
pip install ai-dynamo[all]
```
> [!NOTE]
> TensorRT-LLM support is currently available on a branch.
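To confirm the package installed correctly, you can ask pip for its metadata:

```bash
# Verify the installation; prints version, location, and dependencies.
pip show ai-dynamo
```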
Running and Interacting with an LLM Locally
To run a model and interact with it locally, you can call dynamo run with a Hugging Face model. dynamo run supports several backends, including mistralrs, sglang, vllm, and tensorrtllm.
Example Command
```bash
dynamo run out=vllm deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```

```
? User › Hello, how are you?
✔ User · Hello, how are you?
Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ...
```
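Because the runner is engine agnostic, switching backends amounts to changing the out= argument. As a sketch, assuming the out=&lt;backend&gt; syntax shown above applies uniformly to the backends listed, the same model could be served with sglang:

```bash
# Hypothetical variant of the command above: same model, sglang backend.
# Assumes sglang is installed and accepted as an out= target.
dynamo run out=sglang deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```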
LLM Serving
Dynamo provides a simple way to spin up a local set of inference components including:
- OpenAI-Compatible Frontend – High-performance, OpenAI-compatible HTTP API server written in Rust.
- Basic and KV-Aware Router – Routes and load balances traffic across a set of workers.
- Workers – A set of pre-configured LLM serving engines.
To run a minimal configuration, you can use a pre-configured example.
Start Dynamo Distributed Runtime Services
First start the Dynamo Distributed Runtime services:
```bash
docker compose -f deploy/docker-compose.yml up -d
```
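Before moving on, you can confirm the services are up:

```bash
# List the runtime services and their current status.
docker compose -f deploy/docker-compose.yml ps
```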
Start Dynamo LLM Serving Components
Next, serve a minimal configuration with an HTTP server, a basic round-robin router, and a single worker.
```bash
cd examples/llm
dynamo serve graphs.agg:Frontend -f configs/agg.yaml
```
Send a Request
```bash
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how are you?"
      }
    ],
    "stream": false,
    "max_tokens": 300
  }' | jq
```
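For interactive clients, the same request can return tokens incrementally. A minimal sketch, assuming the frontend's streaming behavior follows the OpenAI chat completions API it mirrors: set stream to true and disable curl's output buffering with -N.

```bash
# Same request, streamed as server-sent events instead of one JSON body.
curl -N localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "stream": true,
    "max_tokens": 300
  }'
```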
File details
Details for the file ai_dynamo-0.1.0-py3-none-any.whl.
File metadata
- Download URL: ai_dynamo-0.1.0-py3-none-any.whl
- Upload date:
- Size: 41.1 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.10.16
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5b867fcfcae16f34e270adda96c3a12382c3255bc95621b96bec3420621f6c5e |
| MD5 | 7eec30ef403805ef0fbb85da83156553 |
| BLAKE2b-256 | 0f7a2343aec2e5bc033a7d313c67a72e4053b819c9c7aa60da77efb839516c6b |