
gpumod

GPU Service Manager for ML workloads on Linux/NVIDIA systems.

gpumod manages vLLM, llama.cpp, FastAPI, and Docker-based inference services on NVIDIA GPUs. It tracks VRAM allocation, supports mode-based service switching, provides VRAM simulation before deployment, and exposes an MCP server for AI assistant integration.

Features

  • Service Management -- Register, start, stop, and monitor GPU services with support for vLLM, llama.cpp, FastAPI, and Docker drivers
  • Mode Switching -- Define named modes (e.g., "chat", "coding") that bundle services together and switch between them
  • VRAM Simulation -- Simulate VRAM for any configuration before deployment, with alternative suggestions when capacity is exceeded
  • Model Registry -- Track ML models with metadata from HuggingFace Hub or GGUF files, with automatic VRAM estimation
  • MCP Server -- Expose GPU management as an MCP server for Claude Code, Cursor, Claude Desktop, and other MCP-compatible AI assistants
  • Template Engine -- Generate and install systemd unit files from Jinja2 templates, customized per driver type
  • AI Planning -- LLM-assisted VRAM allocation suggestions (advisory only)
  • Interactive TUI -- Terminal dashboard with live GPU status
  • Rich CLI -- Beautiful output with tables, VRAM bar charts, and JSON mode
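The VRAM estimation mentioned above can be approximated from parameter count and precision. This is a minimal sketch of the idea only; the function name, overhead constant, and bytes-per-parameter figures are illustrative assumptions, not gpumod's actual estimator:

```python
def estimate_vram_gb(params_b: float, bytes_per_param: float = 2.0,
                     overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: model weights plus a fixed runtime overhead.

    params_b: model size in billions of parameters.
    bytes_per_param: 2.0 for fp16/bf16; roughly 0.55 for 4-bit quants.
    overhead_gb: KV cache, CUDA context, activations (very rough).
    """
    weights_gb = params_b * bytes_per_param  # 1e9 params cancel 1e9 bytes/GB
    return round(weights_gb + overhead_gb, 1)

print(estimate_vram_gb(7))        # fp16 7B model -> 15.5
print(estimate_vram_gb(7, 0.55))  # 4-bit quantized 7B model, ~5 GB
```

Real estimators also account for context length and KV cache dtype, which this sketch folds into the overhead term.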

Installation

Requires uv, Python >= 3.12, Linux with NVIDIA GPU, and nvidia-smi in PATH.

git clone https://github.com/jaigouk/gpumod.git
cd gpumod
uv sync

# Install globally so `gpumod` is always on your PATH
uv tool install -e .

Quick Start

# Initialize database and load presets
gpumod init

# Check GPU status
gpumod status

# List services
gpumod service list

Deploying a Service

gpumod auto-generates systemd unit files from presets — no manual unit files needed.

# Enable user-level systemd lingering (one-time setup)
sudo loginctl enable-linger $USER

# Preview the generated unit file
gpumod template generate vllm-chat

# Install it to ~/.config/systemd/user/
gpumod template install vllm-chat --yes

# Start the service (uses systemctl --user, no sudo needed)
gpumod service start vllm-chat
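What gets installed is an ordinary systemd user unit. As a rough illustration only (the service description, ExecStart command, model name, and flags here are hypothetical, not gpumod's actual template output):

```ini
# Illustrative sketch — the real unit is rendered from gpumod's Jinja2 templates.
[Unit]
Description=vllm-chat (managed by gpumod)
After=network-online.target

[Service]
ExecStart=/usr/bin/vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
Restart=on-failure

[Install]
WantedBy=default.target
```

Because the unit lives under `~/.config/systemd/user/`, `systemctl --user` can manage it without root.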

See the Getting Started guide for full setup instructions.

Mode Switching

Modes bundle services together and fit them within your VRAM budget.

# Simulate VRAM usage before switching
gpumod simulate mode coding-mode

# Switch modes (starts/stops services automatically)
gpumod mode switch coding-mode

# Launch interactive TUI
gpumod tui
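The simulation step boils down to checking whether a mode's combined VRAM demand fits the GPU's budget. A minimal sketch of that check, where the service names and VRAM figures are illustrative and not gpumod's real data model:

```python
def simulate_mode(services: dict[str, float], budget_gb: float) -> tuple[bool, float]:
    """Return (fits, total_gb) for a bundle of services against a VRAM budget."""
    total = sum(services.values())
    return total <= budget_gb, total

# Hypothetical "coding" mode on a 24 GB card.
coding_mode = {"vllm-coder": 18.0, "embeddings": 2.5}
fits, total = simulate_mode(coding_mode, budget_gb=24.0)
print(fits, total)  # True 20.5
```

gpumod's simulator goes further and suggests alternative configurations when the total exceeds capacity; the pass/fail check above is the core invariant.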

MCP Integration

gpumod exposes 16 tools and 8 resources via the Model Context Protocol. Add it to your IDE to let AI assistants query GPU status, simulate VRAM, switch modes, discover models on HuggingFace, and consult an RLM-based reasoning engine for complex questions like "Can I run Qwen3-235B on 24GB?".

{
  "mcpServers": {
    "gpumod": {
      "command": "uv",
      "args": ["--directory", "/path/to/gpumod", "run", "python", "-m", "gpumod.mcp_main"]
    }
  }
}

See MCP Integration for setup instructions for Claude Code, Cursor, Claude Desktop, and Antigravity.

Configuration

All settings are configurable via environment variables with the GPUMOD_ prefix. A .env.example file is included in the repository root — copy it to .env and uncomment the variables you want to override.

Key settings include preflight thresholds (RAM/VRAM), LLM backend configuration, database path, and MCP rate limits. See Configuration for the full list.
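As a shape-of-the-file illustration, a `.env` might look like the following; the specific variable names beyond the `GPUMOD_` prefix are assumptions here, so consult `.env.example` for the authoritative list:

```shell
# Hypothetical entries — check .env.example for the real variable names.
GPUMOD_DB_PATH=~/.local/share/gpumod/gpumod.db
GPUMOD_VRAM_THRESHOLD=0.9
```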

Security

Input validation at every boundary, error sanitization, rate limiting, parameterized queries, sandboxed templates, and no shell=True. See Security for the full threat model.
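Two of the controls listed above — parameterized queries and avoiding `shell=True` — follow standard Python patterns. A self-contained sketch of each (gpumod's actual code may differ):

```python
import sqlite3
import subprocess

# Parameterized query: user input is bound as a parameter,
# never string-interpolated into the SQL text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE services (name TEXT, vram_mb INTEGER)")
conn.execute("INSERT INTO services VALUES (?, ?)", ("vllm-chat", 16384))
row = conn.execute(
    "SELECT vram_mb FROM services WHERE name = ?", ("vllm-chat",)
).fetchone()
print(row[0])  # 16384

# No shell=True: arguments are passed as a list, so shell metacharacters
# in an argument are treated as literal text, not executed.
out = subprocess.run(["echo", "nvidia-smi; rm -rf /"],
                     capture_output=True, text=True)
print(out.stdout.strip())  # printed literally; the ";" is never interpreted
```

The same list-of-arguments discipline applies when launching inference servers: the service command and its flags are passed as discrete argv entries, leaving nothing for a shell to interpret.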

Documentation

  • CLI Reference -- All commands: status, service, mode, simulate, model, template, plan, tui
  • MCP Integration -- MCP server setup for Claude Code, Cursor, Claude Desktop, Antigravity
  • Configuration -- Environment variables, LLM backends, settings
  • AI Planning -- LLM-assisted VRAM allocation planning
  • Architecture -- System design and component overview
  • Security -- Threat model, input validation, security controls
  • Benchmarks -- LLM benchmark framework and results
  • Contributing -- Development setup, tests, code quality, PR process

License

Apache License 2.0. See LICENSE for details.

Copyright 2026 Jaigouk Kim
