Keep GPU

A simple CLI app that keeps your GPUs running.

Keep GPU keeps shared GPUs from being reclaimed while you prep data, debug, or coordinate multi-stage pipelines. It allocates just enough VRAM and issues lightweight CUDA work so schedulers observe an “active” device—without running a full training job.

Why it exists

On many clusters, idle GPUs are reaped or silently shared after a short grace period. The cost of losing your reservation (or discovering another job has taken your card) can dwarf the cost of a tiny keep-alive loop. KeepGPU is a minimal, auditable guardrail:

  • Predictable – Single-purpose controller with explicit resource knobs (VRAM size, interval, utilization backoff).
  • Polite – Uses NVML to read utilization and backs off when the GPU is busy (a minimal sketch of this check follows the list).
  • Portable – Typer/Rich CLI for humans; Python API for orchestrators and notebooks.
  • Observable – Structured logging and optional file logs for auditing what kept the GPU alive.
  • Power-aware – Runs periodic elementwise ops instead of heavy matmul floods to present “busy” utilization while keeping power and thermals lower (see CudaGPUController._run_relu_batch for the loop).
  • NVML-backed – GPU telemetry comes from nvidia-ml-py (the pynvml module), with optional rocm-smi support when you install the rocm extra.
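
The busy check described in the “Polite” bullet can be sketched with nvidia-ml-py directly. This is a minimal illustration, not KeepGPU’s internal code; the threshold value and loop shape are assumptions:

import time

import pynvml

def gpu_is_busy(index: int, busy_threshold: int = 25) -> bool:
    """Return True when NVML reports GPU utilization above the threshold."""
    handle = pynvml.nvmlDeviceGetHandleByIndex(index)
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    return util.gpu > busy_threshold

pynvml.nvmlInit()
try:
    for _ in range(3):    # a few keep-alive rounds, for illustration
        if not gpu_is_busy(0):
            pass          # a real controller would issue a lightweight CUDA burst here
        time.sleep(60)    # analogous to --interval
finally:
    pynvml.nvmlShutdown()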

Quick start (CLI)

pip install keep-gpu

# Hold GPU 0 with 1 GiB VRAM and throttle if utilization exceeds 25%
keep-gpu --gpu-ids 0 --vram 1GiB --busy-threshold 25 --interval 60

# Non-blocking mode for agent workflows (auto-starts local service)
keep-gpu start --gpu-ids 0 --vram 1GiB --busy-threshold 25 --interval 60
keep-gpu status
keep-gpu stop --all
keep-gpu service-stop

Open the dashboard while service mode is running:

http://127.0.0.1:8765/

Platform installs at a glance

  • CUDA (example: cu121)
    pip install --index-url https://download.pytorch.org/whl/cu121 torch
    pip install keep-gpu
    
  • ROCm (example: rocm6.1)
    pip install --index-url https://download.pytorch.org/whl/rocm6.1 torch
    pip install "keep-gpu[rocm]"
    
  • CPU-only
    pip install torch
    pip install keep-gpu
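
Whichever route you take, a quick sanity check confirms that PyTorch sees the expected backend. This is generic PyTorch, not KeepGPU-specific; note that ROCm builds of torch also report through the torch.cuda API:

import torch

print(torch.__version__)          # build tag indicates cu121 / rocm6.1 / cpu
print(torch.cuda.is_available())  # True on CUDA and ROCm builds with a visible GPU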
    

Flags that matter:

  • Blocking mode knobs:
    • --vram (1GiB, 750MB, or bytes): how much memory to pin (a parsing sketch follows this list).
    • --interval (seconds): sleep between keep-alive bursts.
    • --busy-threshold: skip work when NVML reports higher utilization.
    • --gpu-ids: target a subset; otherwise all visible GPUs are guarded.
  • Service mode commands:
    • keep-gpu serve: run local service (HTTP + dashboard).
    • keep-gpu start: create keep session and return immediately.
    • keep-gpu status: inspect active sessions.
    • keep-gpu stop --job-id <id> or keep-gpu stop --all: release sessions.
    • keep-gpu service-stop: stop auto-started local daemon.
    • keep-gpu list-gpus: fetch telemetry from local service.
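
For reference, size strings in the --vram formats listed above could be parsed along these lines. A hypothetical sketch, not KeepGPU’s actual parser; in particular, treating MB as 10^6 bytes and MiB as 2^20 is an assumption:

import re

_UNITS = {"B": 1, "KB": 10**3, "MB": 10**6, "GB": 10**9,
          "KIB": 2**10, "MIB": 2**20, "GIB": 2**30}

def parse_size(value: str) -> int:
    """Parse strings like '1GiB', '750MB', or a bare byte count into bytes."""
    match = re.fullmatch(r"\s*(\d+(?:\.\d+)?)\s*([A-Za-z]*)\s*", value)
    if not match:
        raise ValueError(f"unparseable size: {value!r}")
    number, unit = match.groups()
    factor = _UNITS.get(unit.upper() or "B")
    if factor is None:
        raise ValueError(f"unknown unit: {unit!r}")
    return int(float(number) * factor)

print(parse_size("1GiB"))     # 1073741824
print(parse_size("750MB"))    # 750000000
print(parse_size("1048576"))  # raw bytes pass through unchanged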

Embed in Python

from keep_gpu.single_gpu_controller.cuda_gpu_controller import CudaGPUController

with CudaGPUController(rank=0, interval=0.5, vram_to_keep="1GiB", busy_threshold=20):
    preprocess_dataset()   # GPU is marked busy while you run CPU-heavy code

train_model()              # GPU freed after exiting the context

Need multiple devices?

from keep_gpu.global_gpu_controller.global_gpu_controller import GlobalGPUController

with GlobalGPUController(gpu_ids=[0, 1], vram_to_keep="750MB", interval=90, busy_threshold=30):
    run_pipeline_stage()

What you get

  • Battle-tested keep-alive loop built on PyTorch.
  • NVML-based utilization monitoring (via nvidia-ml-py) to avoid hogging busy GPUs; optional ROCm SMI support through pip install keep-gpu[rocm].
  • CLI + API parity: same controllers power both code paths.
  • Continuous docs + CI: mkdocs + mkdocstrings build in CI to keep guidance up to date.

For developers

  • Install dev extras: pip install -e ".[dev]" (add .[rocm] if you need ROCm SMI).
  • Fast CUDA checks: pytest tests/cuda_controller tests/global_controller tests/utilities/test_platform_manager.py tests/test_cli_thresholds.py
  • ROCm-only tests carry @pytest.mark.rocm; run with pytest --run-rocm tests/rocm_controller.
  • Markers: rocm (needs ROCm stack) and large_memory (opt-in locally); a marker usage sketch follows.
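
A hypothetical test showing the rocm marker in use; the test body is illustrative:

import pytest

@pytest.mark.rocm
def test_controller_sees_device():
    # Collected always, but only run when the suite is invoked with --run-rocm.
    import torch
    assert torch.cuda.is_available()  # ROCm torch builds surface through torch.cuda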

MCP and service API

  • Start a simple JSON-RPC server on stdin/stdout (default):
    keep-gpu-mcp-server
    
  • Or expose it over HTTP (JSON-RPC + REST + dashboard):
    keep-gpu-mcp-server --mode http --host 0.0.0.0 --port 8765
    
  • JSON-RPC request example (a Python client sketch follows this list):
    {"id": 1, "method": "start_keep", "params": {"gpu_ids": [0], "vram": "512MB", "interval": 60, "busy_threshold": 20}}
    
  • REST examples:
    curl http://127.0.0.1:8765/health
    curl http://127.0.0.1:8765/api/sessions
    
  • Methods: start_keep, stop_keep (optional job_id, default stops all), status (optional job_id), list_gpus (basic info).
  • Dashboard: http://127.0.0.1:8765/
  • Minimal client config (stdio MCP):
    servers:
      keepgpu:
        command: ["keep-gpu-mcp-server"]
        adapter: stdio
    
  • Minimal client config (HTTP MCP):
    servers:
      keepgpu:
        url: http://127.0.0.1:8765/
        adapter: http
    
  • Remote/SSH tunnel example (HTTP):
    keep-gpu-mcp-server --mode http --host 0.0.0.0 --port 8765
    
    Client config (replace hostname/tunnel as needed):
    servers:
      keepgpu:
        url: http://gpu-box.example.com:8765/
        adapter: http
    
    For untrusted networks, put the server behind your own auth/reverse-proxy or tunnel by way of SSH (for example, ssh -L 8765:localhost:8765 gpu-box).
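
Putting the pieces together, the JSON-RPC request shown above can be sent to the HTTP server from Python using only the standard library. Posting to the server root is an assumption; adjust the URL to wherever your deployment mounts JSON-RPC:

import json
import urllib.request

payload = {
    "id": 1,
    "method": "start_keep",
    "params": {"gpu_ids": [0], "vram": "512MB", "interval": 60, "busy_threshold": 20},
}
request = urllib.request.Request(
    "http://127.0.0.1:8765/",                  # assumed JSON-RPC endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.load(response))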

Contributing

Contributions are welcome—especially around ROCm support, platform fallbacks, and scheduler-specific recipes. Open an issue or PR if you hit edge cases on your cluster. See docs/contributing.md for dev setup, test commands, and PR tips.

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

📖 Citation

If you find KeepGPU useful in your research or work, please cite it as:

@software{Wangmerlyn_KeepGPU_2025,
  author       = {Wang, Siyuan and Shi, Yaorui and Liu, Yida and Yin, Yuqi},
  title        = {KeepGPU: a simple CLI app that keeps your GPUs running},
  year         = {2025},
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.17129114},
  url          = {https://github.com/Wangmerlyn/KeepGPU},
  note         = {GitHub repository},
  keywords     = {ai, hpc, gpu, cluster, cuda, torch, debug}
}
