A lightweight scheduler that reads nvidia-smi and updates CUDA_VISIBLE_DEVICES so PyTorch runs on the recommended GPU.

Project description

pytorch_run_on_recommended_gpu

A lightweight script that interactively updates CUDA_VISIBLE_DEVICES for pytorch
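How it works, in a nutshell: the script asks nvidia-smi how busy each GPU is and exposes only the recommended device(s) to PyTorch through CUDA_VISIBLE_DEVICES. The snippet below is only a simplified sketch of that idea, not the package's actual selection logic; the nvidia-smi query flags and the "least used memory" rule are assumptions.

import os
import subprocess

def pick_least_used_gpu():
    # Used memory per GPU in MiB, one value per line.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
        text=True,
    )
    used = [int(line) for line in out.strip().splitlines()]
    return min(range(len(used)), key=used.__getitem__)

# Must happen before anything initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = str(pick_least_used_gpu())
import torch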

Install

pip install pytorch_run_on_recommended_gpu

Usage from CLI

Perform a dry run

pytorch_run_on_recommended_cuda

Run a script and select a GPU manually

pytorch_run_on_recommended_cuda <path_to_script>

Run a script on the next available GPU

pytorch_run_on_recommended_cuda --select '*' <path_to_script>

Run a script on the next two available GPUs

pytorch_run_on_recommended_cuda --select '**' <path_to_script>

Run a script on GPU ids 6 and 7

pytorch_run_on_recommended_cuda --select 6 7 <path_to_script>
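Whichever selector you use, the chosen GPU ids presumably end up in CUDA_VISIBLE_DEVICES, and CUDA renumbers the visible devices from zero inside the launched process. The short check below assumes the "--select 6 7" example above and that PyTorch is installed.

import torch

# With CUDA_VISIBLE_DEVICES="6,7" only those two GPUs are visible,
# renumbered from zero: id 6 becomes cuda:0 and id 7 becomes cuda:1.
print(torch.cuda.device_count())     # expected: 2
x = torch.zeros(8, device="cuda:0")  # allocated on GPU id 6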

Usage from .py file

import os
from pytorch_run_on_recommended_gpu.run_on_recommended_gpu import get_cuda_environ_vars as get_vars

cuda_vars = get_vars('*')   # same selector syntax as --select on the CLI
print(cuda_vars)
os.environ.update(cuda_vars)

import torch  # Import torch only after you have updated the vars.
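The import order matters because the CUDA runtime reads CUDA_VISIBLE_DEVICES when it initializes; once torch has touched CUDA, later changes to the variable have no effect. Continuing the example above, a quick sanity check after the import (assuming at least one GPU was selected):

print(torch.cuda.is_available())      # True if the selected GPU is usable
print(torch.cuda.get_device_name(0))  # name of the recommended GPU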

