Keep GPU
Keep GPU is a simple CLI app that keeps your GPUs running.
- 🧾 License: MIT
- 📚 Documentation: https://keepgpu.readthedocs.io
Contributions Welcome!
If you have ideas for new features or improvements, feel free to open an issue or submit a pull request.
This project does not yet fully support ROCm GPUs, so any contributions, suggestions, or testing help in that area are especially welcome!
Features
- Simple command-line interface
- Uses PyTorch and nvidia-smi to monitor and load GPUs
- Easy to extend for your own keep-alive logic
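The core idea behind a GPU keep-alive tool can be illustrated with a minimal sketch. This is a pure-Python stand-in for the pattern, not KeepGPU's actual implementation (the real tool runs small CUDA workloads via PyTorch); the `KeepAlive` class and the dummy workload here are illustrative only:

```python
import threading
import time


class KeepAlive:
    """Run a workload periodically on a background thread until released."""

    def __init__(self, workload, interval=1.0):
        self.workload = workload      # callable that would touch the GPU
        self.interval = interval      # seconds between invocations
        self._stop = threading.Event()
        self._thread = None

    def keep(self):
        """Start the background loop; returns immediately (non-blocking)."""
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while not self._stop.is_set():
            self.workload()
            self._stop.wait(self.interval)  # sleep, but wake early on release()

    def release(self):
        """Stop the loop and wait for the background thread to exit."""
        self._stop.set()
        self._thread.join()


# Demo with a CPU stand-in workload: count how often it fires.
calls = []
ka = KeepAlive(lambda: calls.append(1), interval=0.05)
ka.keep()
time.sleep(0.2)   # simulate CPU-only work happening in the foreground
ka.release()
print(len(calls) >= 2)  # the workload ran repeatedly in the background
```

Swapping the dummy workload for a small CUDA kernel (e.g. a matrix multiply on a resident tensor) turns this into a real keep-alive loop.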
Installation
pip install keep-gpu
Usage
Use keep-gpu as a CLI tool
keep-gpu
Specify the interval in seconds between GPU usage checks (default: 300 seconds):
keep-gpu --interval 100
Specify GPU IDs to run on (default is all available GPUs):
keep-gpu --gpu-ids 0,1,2
Use the keep-gpu API in your code
Non-blocking GPU keep-alive logic with CudaGPUController:
from keep_gpu.single_gpu_controller.cuda_gpu_controller import CudaGPUController
ctrl = CudaGPUController(rank=0, interval=0.5)
# Occupy the GPU while you do CPU-only work; keep() is non-blocking
ctrl.keep()
dataset.process()
ctrl.release() # give GPU memory back
model.train_start() # now run real GPU training
Use CudaGPUController as a context manager:
from keep_gpu.single_gpu_controller.cuda_gpu_controller import CudaGPUController
with CudaGPUController(rank=0, interval=0.5):
dataset.process() # GPU occupied inside this block
model.train_start() # GPU free after exiting block
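The `with` form above is equivalent to calling `keep()` on entry and `release()` on exit, which guarantees the GPU is freed even if the block raises. A minimal sketch of that equivalence, using a `DummyController` stand-in (illustrative only, not part of KeepGPU) so the example runs without a GPU:

```python
class DummyController:
    """Stand-in with the same keep()/release() surface as CudaGPUController."""

    def __init__(self):
        self.active = False

    def keep(self):
        self.active = True    # real class: start occupying the GPU

    def release(self):
        self.active = False   # real class: give GPU memory back

    # Context-manager protocol maps directly onto keep()/release().
    def __enter__(self):
        self.keep()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.release()        # runs even if the block raised
        return False          # don't swallow exceptions


ctrl = DummyController()
with ctrl:
    print(ctrl.active)        # True: "GPU occupied" inside the block
print(ctrl.active)            # False: released on exit
```

Prefer the context-manager form when the work between `keep()` and `release()` can fail; the manual pair needs a `try`/`finally` to give the same guarantee.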
Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
Contributors
📖 Citation
If you find KeepGPU useful in your research or work, please cite it as:
@software{Wangmerlyn_KeepGPU_2025,
  author    = {Wang, Siyuan and Shi, Yaorui and Liu, Yida and Yin, Yuqi},
  title     = {KeepGPU: a simple CLI app that keeps your GPUs running},
  year      = {2025},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.17129114},
  url       = {https://github.com/Wangmerlyn/KeepGPU},
  note      = {GitHub repository},
  keywords  = {ai, hpc, gpu, cluster, cuda, torch, debug}
}
Project details
Release history
Download files
Source Distribution
Built Distribution
File details
Details for the file keep_gpu-0.3.2.tar.gz.
File metadata
- Download URL: keep_gpu-0.3.2.tar.gz
- Upload date:
- Size: 16.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f4e00b71f60f295b056f1fb2954990cbc210d5df1d227cce5986990d926fe1ea |
| MD5 | 2a22f93720982ad42d56f9ed27a25243 |
| BLAKE2b-256 | 13c0673786b66cd237e565fb307325f0aeecd1584a8997f73b7c7ef1cb8badd6 |
File details
Details for the file keep_gpu-0.3.2-py3-none-any.whl.
File metadata
- Download URL: keep_gpu-0.3.2-py3-none-any.whl
- Upload date:
- Size: 15.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 01ec3bb8fc8e1664781b81e270db6b2c124647d14c1bc3144cf76ceffdfa9b2c |
| MD5 | 2da00bed97f7e68b9e514fff24fbd1ff |
| BLAKE2b-256 | 95a36ac533b42bb83ca7656cad7bd9e421695c7d362ca9154a844e7131511fb5 |