# vLLM Cluster Manager
Admin dashboard + satellite clients for multi-model vLLM deployments.
Use this UI to deploy `vllm serve` endpoints across a cluster, standing up multiple LLM servers (same or different models) with a few clicks. It is ideal for research labs or small-business environments that need repeatable, multi-endpoint deployments without building a full MLOps stack.

Deployment is as simple as running the CLI on the host and on each client, with automatic client discovery. You can run in the foreground or pass `--service` to install persistent systemd services.
## Tested hardware/software
- GPUs: NVIDIA H100, NVIDIA A100, NVIDIA L40, NVIDIA DGX Spark (GB10), NVIDIA RTX 4090.
- OS: Ubuntu 22.04 and Ubuntu 24.04.
## What it can do
- Register and manage GPU nodes that run vLLM workloads.
- Create model configurations and launch models on selected nodes.
- Monitor node health and model status.
- Stream logs from running processes for quick troubleshooting.
## Real-time logs
Stream logs from running nodes and model processes directly in the dashboard.
## Model configuration
Define and manage model settings (weights, runtime settings, resource usage) from the UI.
## Architecture
- Host: Admin services for infrastructure, API, and UI.
- Infra: Postgres + Consul (service discovery) via Docker Compose.
- Backend: FastAPI service for orchestration and persistence.
- Frontend: React + Vite admin dashboard.
- Client: Python agent running on GPU nodes; registers with the host and runs vLLM workloads.
## Repo layout

- `host/`: Admin services (infra, backend, frontend)
- `client/`: Satellite node agent
- `img/`: Screenshots used in documentation
## Prerequisites

Host:

- Docker + Docker Compose plugin (configure the `docker` group so `sudo` is not required).
- Node.js + npm.
- Python 3.12.
- `uv` (Python package manager).

Client:

- NVIDIA GPU with CUDA.
- `nvcc` or `nvidia-smi` on PATH (used to detect the CUDA version).
- Python 3.12 + `python3.12-dev` and `build-essential` (Debian/Ubuntu).
On Debian/Ubuntu:

```bash
sudo apt update
sudo apt install -y python3.12-dev build-essential
```
Install `uv` if you don't already have it:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
## Install (pip)

Create and activate a Python 3.12 virtual environment, then install the package:

```bash
uv venv --python=3.12
source .venv/bin/activate
uv pip install vllm-cluster-manager
```
## Start the host

Foreground (no sudo):

```bash
vllm-cluster-manager host up --host-ip 127.0.0.1 --host-frontend-port 5173 --host-discover-port 47528
```
`host up` builds a static frontend bundle and serves it with the Vite preview server. The UI assumes it is served at `/` by default; if you serve it under a subpath (for example `/vllm/`), pass `--base-path /vllm/` so asset URLs and API/WebSocket paths are generated correctly.
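For illustration, a reverse-proxy fragment serving the UI under `/vllm/` might look like the sketch below. This is an untested example for nginx, not part of the project; the upstream address assumes the default `--host-ip` and `--host-frontend-port` values:

```nginx
# Sketch only: forward /vllm/ to the Vite preview server, keeping the
# /vllm/ prefix (proxy_pass without a URI passes the original path through),
# and allow the WebSocket upgrade used for log streaming.
location /vllm/ {
    proxy_pass http://127.0.0.1:5173;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```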
Persistent service (systemd):

```bash
vllm-cluster-manager host up --service --host-ip 127.0.0.1 --host-frontend-port 5173 --host-discover-port 47528
```
`--host-discover-port` sets the discovery port used by clients. Use `--host-backend-port` to override the backend API port (default 8000).
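Before wiring clients in, it can help to confirm the host-side ports are reachable. The snippet below is a generic connectivity check (not part of the CLI) using bash's `/dev/tcp`; adjust the ports to your flags:

```shell
# Return 0 if a TCP connection to $1:$2 succeeds within 2 seconds.
check_port() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Defaults from the flags above: UI, backend API, and discovery ports.
for port in 5173 8000 47528; do
  if check_port 127.0.0.1 "$port"; then
    echo "port $port: reachable"
  else
    echo "port $port: unreachable"
  fi
done
```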
Stop host services (foreground or systemd):

```bash
vllm-cluster-manager host down
```
## Start a client

Foreground (no sudo):

```bash
vllm-cluster-manager client up --host-ip 127.0.0.1 --host-discover-port 47528
```
Persistent service (systemd):

```bash
vllm-cluster-manager client up --service --host-ip 127.0.0.1 --host-discover-port 47528
```
Stop client services (foreground or systemd):

```bash
vllm-cluster-manager client down
```
## CLI flags

### Host (`host up`)
| Flag | Default | Description |
|---|---|---|
| `--service` | `false` | Run as a persistent systemd service. |
| `--host-ip` | `127.0.0.1` | Bind host for the backend API and UI backend target. |
| `--host-frontend-port` | `5173` | UI port. |
| `--host-discover-port` | `47528` | Discovery port used by clients. |
| `--host-backend-port` | `8000` | Backend API port. |
| `--base-path` | `/` | Base path for the UI (reverse proxy subpath). |
| `--postgres-host` | `127.0.0.1` | Postgres host. |
| `--postgres-port` | `5757` | Postgres port. |
| `--postgres-db` | `vllm_admin` | Postgres database name. |
| `--postgres-user` | `vllm` | Postgres user. |
| `--postgres-password` | `change-me` | Postgres password. |
### Client (`client up`)
| Flag | Default | Description |
|---|---|---|
| `--service` | `false` | Run as a persistent systemd service. |
| `--host-ip` | `127.0.0.1` | Host IP for discovery. |
| `--host-discover-port` | `47528` | Host discovery port. |
| `--client-host` | `0.0.0.0` | Client bind host. |
| `--client-port` | `9000` | Client bind port. |
| `--node-name` | `<hostname>` | Node name used for registration. |
## Down commands

`host down` and `client down` stop foreground processes and stop/remove systemd services if present.
## Configuration files

The CLI writes service-specific env files under `~/.local/share/vllm_cluster_manager`:

- `host/.env` (Docker Compose: Postgres + discovery service)
- `host/backend/.env` (API service)
- `host/frontend/.env` (UI)
- `client/.env` (client agent)
If you edit any env file, restart the affected service.
## Gated models (Hugging Face)

Some models (for example Llama variants) require a Hugging Face access token. Provide the token via an environment variable, either `HF_TOKEN` or `HUGGING_FACE_HUB_TOKEN`, when creating the deployment.

Set the value to your Hugging Face access token (read access) and include the quotation marks, for example:

```bash
HUGGING_FACE_HUB_TOKEN="hf_..."
```
You can add this in the UI under env vars or by setting it in the client environment before starting a deployment.
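For example, to set the token in the client environment before starting a deployment (the token value below is a placeholder; substitute your own read-access token):

```shell
# Placeholder token: replace with your own Hugging Face read-access token.
export HUGGING_FACE_HUB_TOKEN="hf_your_token_here"

# Then start the client as usual, for example:
#   vllm-cluster-manager client up --host-ip 127.0.0.1 --host-discover-port 47528
```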
## Firewall rules

Allow these network paths (adjust ports to your flags):

- User → Host UI: TCP `host-frontend-port` (default 5173).
- UI/Browser → Host API: TCP `host-backend-port` (default 8000).
- Clients → Host discovery: TCP `host-discover-port` (default 47528).
- Host → Client agents: TCP `client-port` (default 9000).
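As one way to implement these rules, assuming `ufw` on Ubuntu and the default ports (a sketch, not an official setup script; adjust the ports to your flags):

```shell
# On the host: allow UI, backend API, and client discovery.
sudo ufw allow 5173/tcp   # host-frontend-port (UI)
sudo ufw allow 8000/tcp   # host-backend-port (API)
sudo ufw allow 47528/tcp  # host-discover-port (discovery)

# On each client: allow the agent port so the host can reach it.
sudo ufw allow 9000/tcp   # client-port (agent)
```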
## Data persistence

By default, shutting down the host (`host down` or stopping the systemd infra unit) runs `docker compose down -v`, which wipes the Postgres volume. Remove `-v` in the code if you want to keep the data.
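If you want to preserve the data instead, one option (a sketch using the default Postgres flag values from the table above) is to dump the database before shutting down:

```shell
# Dump the vllm_admin database before running `host down`.
# Host, port, user, database, and password are the CLI defaults;
# adjust them if you overrode the corresponding flags.
PGPASSWORD=change-me pg_dump -h 127.0.0.1 -p 5757 -U vllm vllm_admin > vllm_admin_backup.sql
```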
## Quick start (dev)
- Start infrastructure:

  ```bash
  cd host
  cp .env.example .env
  # edit .env for passwords
  docker compose up -d
  ```
- Backend (venv recommended):

  ```bash
  cd host/backend
  python -m venv .venv
  . .venv/bin/activate
  pip install -r requirements.txt
  uvicorn app.main:app --reload
  ```
- Frontend:

  ```bash
  cd host/frontend
  npm install
  npm run dev
  ```
The UI is served at http://localhost:5173 by default (see `host/frontend/.env`).
## Notes

- The service registry is Consul (used for client discovery).
- WebSocket log streaming is handled in `host/frontend/src/services/ws.ts`.