Project description

Keep GPU

Keep GPU is a simple CLI app that keeps your GPUs running.


Contributions Welcome!

If you have ideas for new features or improvements, feel free to open an issue or submit a pull request.

This project does not yet fully support ROCm GPUs, so any contributions, suggestions, or testing help in that area are especially welcome!


Features

  • Simple command-line interface
  • Uses PyTorch and nvidia-smi to monitor and load GPUs
  • Easy to extend for your own keep-alive logic
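
The keep()/release() pattern used by the package's controllers can be reproduced with a small background thread, which is a reasonable starting point for your own keep-alive logic. The sketch below is hypothetical and not part of the keep-gpu API (the `BusyLoopController` name and its fields are invented); it does placeholder CPU work where real keep-alive logic would launch a small CUDA kernel.

```python
import threading
import time


class BusyLoopController:
    """Minimal keep-alive controller: runs periodic background work until released."""

    def __init__(self, interval=0.5):
        self.interval = interval       # seconds between work iterations
        self.ticks = 0                 # how many iterations have run
        self._stop = threading.Event()
        self._thread = None

    def _worker(self):
        while not self._stop.is_set():
            # Real keep-alive logic would launch a tiny GPU kernel here
            # (e.g. a matmul on a cached CUDA tensor) instead of counting.
            self.ticks += 1
            self._stop.wait(self.interval)  # sleep, but wake early on release()

    def keep(self):
        """Start occupying in the background; returns immediately (non-blocking)."""
        self._stop.clear()
        self._thread = threading.Thread(target=self._worker, daemon=True)
        self._thread.start()

    def release(self):
        """Stop the background work and wait for the worker to exit."""
        self._stop.set()
        if self._thread is not None:
            self._thread.join()


ctrl = BusyLoopController(interval=0.05)
ctrl.keep()
time.sleep(0.2)   # stand-in for CPU-only work such as dataset preprocessing
ctrl.release()
```

Swapping the counting in `_worker` for a small PyTorch matmul on a cached CUDA tensor gives a custom controller with the same keep()/release() shape as the `CudaGPUController` shown in the Usage section below.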

Installation

pip install keep-gpu

Usage

Use keep-gpu as a CLI tool

keep-gpu

Specify the interval in seconds between GPU usage checks (default is 300 seconds):

keep-gpu --interval 100

Specify GPU IDs to run on (default is all available GPUs):

keep-gpu --gpu-ids 0,1,2

Use the keep-gpu API in your code

Non-blocking GPU keep-alive logic with CudaGPUController:

from keep_gpu.single_gpu_controller.cuda_gpu_controller import CudaGPUController

ctrl = CudaGPUController(rank=0, interval=0.5)
ctrl.keep()           # non-blocking: occupies the GPU in the background
dataset.process()     # CPU-only work runs while the GPU is held
ctrl.release()        # give GPU memory back
model.train_start()   # now run real GPU training

Use CudaGPUController as a context manager:

from keep_gpu.single_gpu_controller.cuda_gpu_controller import CudaGPUController
with CudaGPUController(rank=0, interval=0.5):
    dataset.process()  # GPU occupied inside this block
model.train_start()    # GPU free after exiting block

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

Contributors

📖 Citation

If you find KeepGPU useful in your research or work, please cite it as:

@software{Wangmerlyn_KeepGPU_2025,
  author       = {Wang, Siyuan and Shi, Yaorui and Liu, Yida and Yin, Yuqi},
  title        = {KeepGPU: a simple CLI app that keeps your GPUs running},
  year         = {2025},
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.17129114},
  url          = {https://github.com/Wangmerlyn/KeepGPU},
  note         = {GitHub repository},
  keywords     = {ai, hpc, gpu, cluster, cuda, torch, debug}
}

Download files

Download the file for your platform.

Source Distribution

keep_gpu-0.3.2.tar.gz (16.5 kB)

Built Distribution

keep_gpu-0.3.2-py3-none-any.whl (15.6 kB)

File details

Details for the file keep_gpu-0.3.2.tar.gz.

File metadata

  • Download URL: keep_gpu-0.3.2.tar.gz
  • Upload date:
  • Size: 16.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for keep_gpu-0.3.2.tar.gz:

  • SHA256: f4e00b71f60f295b056f1fb2954990cbc210d5df1d227cce5986990d926fe1ea
  • MD5: 2a22f93720982ad42d56f9ed27a25243
  • BLAKE2b-256: 13c0673786b66cd237e565fb307325f0aeecd1584a8997f73b7c7ef1cb8badd6

File details

Details for the file keep_gpu-0.3.2-py3-none-any.whl.

File metadata

  • Download URL: keep_gpu-0.3.2-py3-none-any.whl
  • Upload date:
  • Size: 15.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for keep_gpu-0.3.2-py3-none-any.whl:

  • SHA256: 01ec3bb8fc8e1664781b81e270db6b2c124647d14c1bc3144cf76ceffdfa9b2c
  • MD5: 2da00bed97f7e68b9e514fff24fbd1ff
  • BLAKE2b-256: 95a36ac533b42bb83ca7656cad7bd9e421695c7d362ca9154a844e7131511fb5
