
NVIDIA GPU tools

Project description

The nvgpu package provides information about NVIDIA GPUs and their availability for computation.

Often we want to train an ML model on one of the GPUs installed in a multi-GPU machine. Since TensorFlow by default allocates all of a GPU's memory, only one such process can use a GPU at a time. Unfortunately, nvidia-smi provides only a text interface with information about the GPUs. This package wraps it with an easier-to-use CLI and Python interface.

It is a quick-and-dirty solution that calls nvidia-smi and parses its output. One or more GPUs are taken as available for computation based on their relative memory usage, i.e. it is fine with Xorg taking a few MB.
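For illustration, the core idea could be sketched roughly like this (a hedged approximation, not the package's actual code): query nvidia-smi for per-GPU memory usage and keep the GPUs whose relative usage is below a small threshold. The function name and threshold below are made up for the example.

import subprocess

# Rough sketch of the idea, not the package's actual implementation:
# query nvidia-smi for per-GPU memory usage and keep GPUs that are mostly free.
def mostly_free_gpus(max_used_fraction=0.05):
    output = subprocess.check_output(
        ['nvidia-smi', '--query-gpu=index,memory.used,memory.total',
         '--format=csv,noheader,nounits'],
        universal_newlines=True)
    free = []
    for line in output.strip().splitlines():
        index, used, total = (field.strip() for field in line.split(','))
        if float(used) / float(total) <= max_used_fraction:
            free.append(index)
    return free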

In addition, there is a fancy table of GPUs with more information obtained via the Python bindings to NVML.

Installing

pip install nvgpu

Usage examples

Command-line interface:

# grab all available GPUs
CUDA_VISIBLE_DEVICES=$(nvgpu available)

# grab at most one available GPU
CUDA_VISIBLE_DEVICES=$(nvgpu available -l 1)

Print a pretty colored table of devices, their availability, users, and processes:

$ nvgpu list
    status    type                 util.      temp.    MHz  users    since    pids    cmd
--  --------  -------------------  -------  -------  -----  -------  ---------------  ------  --------
 0  [ ]       GeForce GTX 1070      0 %          44    139
 1  [~]       GeForce GTX 1080 Ti   0 %          44    139  alice    2 days ago       19028   jupyter
 2  [~]       GeForce GTX 1080 Ti   0 %          44    139  bob      14 hours ago     8479    jupyter
 3  [~]       GeForce GTX 1070     46 %          54   1506  bob      7 days ago       20883   train.py
 4  [~]       GeForce GTX 1070     35 %          64   1480  bob      7 days ago       26228   evaluate.py
 5  [!]       GeForce GTX 1080 Ti   0 %          44    139  ?                         9305
 6  [ ]       GeForce GTX 1080 Ti   0 %          44    139

Or use the shortcut:

$ nvl

Python API:

import nvgpu

nvgpu.available_gpus()
# ['0', '2']

nvgpu.gpu_info()
[{'index': '0',
  'mem_total': 8119,
  'mem_used': 7881,
  'mem_used_percent': 97.06860450794433,
  'type': 'GeForce GTX 1070',
  'uuid': 'GPU-3aa99ee6-4a9f-470e-3798-70aaed942689'},
 {'index': '1',
  'mem_total': 11178,
  'mem_used': 10795,
  'mem_used_percent': 96.57362676686348,
  'type': 'GeForce GTX 1080 Ti',
  'uuid': 'GPU-60410ded-5218-7b06-9c7a-124b77a22447'},
 {'index': '2',
  'mem_total': 11178,
  'mem_used': 10789,
  'mem_used_percent': 96.51994990159241,
  'type': 'GeForce GTX 1080 Ti',
  'uuid': 'GPU-d0a77bd4-cc70-ca82-54d6-4e2018cfdca6'},
  ...
]
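A typical pattern is to use the Python API to restrict a process to a single free GPU before any CUDA library is loaded. The snippet below is a minimal sketch, assuming TensorFlow as in the motivation above; the single-GPU choice and error handling are illustrative, not part of the package.

import os
import nvgpu

# Expose only one free GPU to CUDA. This must be done before importing
# TensorFlow (or another CUDA library), otherwise the process may already
# have claimed all visible GPUs.
gpus = nvgpu.available_gpus()
if not gpus:
    raise RuntimeError('no free GPU available')
os.environ['CUDA_VISIBLE_DEVICES'] = gpus[0]

import tensorflow as tf  # imported only after CUDA_VISIBLE_DEVICES is set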

Author

TODO

  • order GPUs by priority (decreasing power, decreasing free memory)

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

nvgpu-0.5.1.tar.gz (5.0 kB)

Built Distribution

nvgpu-0.5.1-py2.py3-none-any.whl (5.5 kB, Python 2 / Python 3)

File details

Details for the file nvgpu-0.5.1.tar.gz.

File metadata

  • Download URL: nvgpu-0.5.1.tar.gz
  • Upload date:
  • Size: 5.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for nvgpu-0.5.1.tar.gz

  • SHA256: a34d8e3ea0dd200924ab9fb27a9256b0e705937d6f867ece89e6aee6656fc8a5
  • MD5: d67f226f688c85059455a017e537ab4c
  • BLAKE2b-256: 0de71beebc4b775701cbbbe8b75d47652425f91012b922fc5395fe246ab40209


File details

Details for the file nvgpu-0.5.1-py2.py3-none-any.whl.

File metadata

File hashes

Hashes for nvgpu-0.5.1-py2.py3-none-any.whl

  • SHA256: 06b6cb8f12f51ced44a478f1aabf2d70ed38c70ecc8200002e471a1547782c05
  • MD5: 0222cef1fc303f641b11fdef610bb4c0
  • BLAKE2b-256: bbfb6a4d4988c6af702c504d96934364660a98e4eb462c5165e5a128ec91e0d5

