
# gpucrate

[![build status](https://secure.travis-ci.org/jtriley/gpucrate.png?branch=master)](https://secure.travis-ci.org/jtriley/gpucrate)

gpucrate creates hard-linked GPU driver (currently just NVIDIA) volumes for use with docker, singularity, etc. This allows the exact system drivers to be linked into a container without needing to maintain a separate container per driver version.

## Installation

To install gpucrate, use the pip command:

```
$ pip install gpucrate
```

or in a [virtual environment](https://virtualenv.pypa.io/en/stable/):

```
$ virtualenv gpucrate
$ source gpucrate/bin/activate
$ pip install gpucrate
```

## Usage

To create a driver volume for your system’s current GPU driver:

```
$ sudo gpucrate create
```

This will create a hard-linked driver volume directory in /usr/local/gpucrate by default that can be used to link the drivers into a container. Here’s an example volume for driver version 367.48:

```
$ find /usr/local/gpucrate/367.48/
/usr/local/gpucrate/367.48/
/usr/local/gpucrate/367.48/bin
/usr/local/gpucrate/367.48/bin/nvidia-cuda-mps-server
/usr/local/gpucrate/367.48/bin/nvidia-debugdump
/usr/local/gpucrate/367.48/bin/nvidia-persistenced
/usr/local/gpucrate/367.48/bin/nvidia-cuda-mps-control
/usr/local/gpucrate/367.48/bin/nvidia-smi
/usr/local/gpucrate/367.48/lib
/usr/local/gpucrate/367.48/lib64
/usr/local/gpucrate/367.48/lib64/libnvcuvid.so.367.48
/usr/local/gpucrate/367.48/lib64/libnvidia-ml.so.1
/usr/local/gpucrate/367.48/lib64/libnvidia-eglcore.so.367.48
/usr/local/gpucrate/367.48/lib64/libnvidia-glcore.so.367.48
/usr/local/gpucrate/367.48/lib64/libcuda.so.367.48
/usr/local/gpucrate/367.48/lib64/libnvidia-opencl.so.1
/usr/local/gpucrate/367.48/lib64/libnvcuvid.so.1
/usr/local/gpucrate/367.48/lib64/libnvidia-ifr.so.367.48
/usr/local/gpucrate/367.48/lib64/libnvidia-ml.so.367.48
/usr/local/gpucrate/367.48/lib64/libcuda.so.1
/usr/local/gpucrate/367.48/lib64/libnvidia-encode.so.1
/usr/local/gpucrate/367.48/lib64/libnvidia-tls.so.367.48
/usr/local/gpucrate/367.48/lib64/libnvidia-egl-wayland.so.367.48
/usr/local/gpucrate/367.48/lib64/libOpenGL.so.0
/usr/local/gpucrate/367.48/lib64/libcuda.so
/usr/local/gpucrate/367.48/lib64/libnvidia-compiler.so.367.48
/usr/local/gpucrate/367.48/lib64/libnvidia-fatbinaryloader.so.367.48
/usr/local/gpucrate/367.48/lib64/libnvidia-opencl.so.367.48
/usr/local/gpucrate/367.48/lib64/libnvidia-ptxjitcompiler.so.367.48
/usr/local/gpucrate/367.48/lib64/libnvidia-fbc.so.1
/usr/local/gpucrate/367.48/lib64/libnvidia-fbc.so.367.48
/usr/local/gpucrate/367.48/lib64/libnvidia-glsi.so.367.48
/usr/local/gpucrate/367.48/lib64/libnvidia-encode.so.367.48
/usr/local/gpucrate/367.48/lib64/libnvidia-ifr.so.1
```
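Because the volume is built from hard links rather than copies, each file in it shares an inode with the corresponding file installed by the system driver. A quick way to confirm this is to compare inodes (a sketch; the system library path used below is an assumption and will vary by distribution):

```
# Identical inode numbers and a link count > 1 mean the two paths are the same file on disk.
# /usr/lib64/libcuda.so.367.48 is a hypothetical system location; adjust for your distro.
$ stat -c '%i %h %n' /usr/local/gpucrate/367.48/lib64/libcuda.so.367.48 /usr/lib64/libcuda.so.367.48
```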

By default gpucrate creates driver volumes in /usr/local/gpucrate. You can change this via gpucrate’s config file:

```
echo 'volume_root: /path/to/volume/root' > /etc/gpucrate/config.yaml
```

or via the GPUCRATE_VOLUME_ROOT environment variable:

```
export GPUCRATE_VOLUME_ROOT="/path/to/volume/root"
```

### Using with Singularity

NOTE: singularity-gpu requires Singularity 2.4+

Once a volume has been created for the currently active driver, you can use the singularity-gpu wrapper to run GPU-enabled containers.

As an example, let’s convert the [tensorflow/tensorflow:latest-gpu](https://hub.docker.com/r/tensorflow/tensorflow/) docker image to a singularity image:

```
$ singularity build tensorflow.img docker://tensorflow/tensorflow:latest-gpu
```

Now use the singularity-gpu wrapper to run any singularity command as normal, only with the host’s exact GPU driver linked in:

```
$ singularity-gpu exec tensorflow.img python -c 'import tensorflow'
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
```
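Since the driver volume also contains nvidia-smi (see the listing above), another quick sanity check is to run it through the wrapper. This assumes the environment management described below makes the bound /usr/local/nvidia/bin directory resolvable on the container’s PATH:

```
# Hypothetical sanity check: the host's nvidia-smi should report the GPUs from inside the container.
$ singularity-gpu exec tensorflow.img nvidia-smi
```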

By default singularity-gpu injects the required environment for NVIDIA/CUDA inside the container at run time. If this causes issues or you’d like to disable it for any reason, set the following in the gpucrate config file:

```
echo 'manage_environment: false' > /etc/gpucrate/config.yaml
```

or use the GPUCRATE_MANAGE_ENVIRONMENT environment variable:

```
export GPUCRATE_MANAGE_ENVIRONMENT="false"
```
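Note that both options can live in the same config file. A minimal /etc/gpucrate/config.yaml combining the two settings might look like this (a sketch using only the keys documented in this README):

```
# /etc/gpucrate/config.yaml
# where gpucrate creates driver volumes (default: /usr/local/gpucrate)
volume_root: /path/to/volume/root
# set to false to stop singularity-gpu from injecting the NVIDIA/CUDA environment
manage_environment: false
```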

#### Container Requirements

The singularity-gpu wrapper uses the same conventions as NVIDIA’s upstream docker containers:

1. NVIDIA driver volume binds to /usr/local/nvidia inside the container
2. CUDA lives in /usr/local/cuda

If you have `enable overlay = no` in your singularity config, you’ll need to ensure that /usr/local/nvidia exists inside the container before attempting to use singularity-gpu (see the definition file sketch below).
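One way to guarantee the bind point exists is to bake it into the image at build time. A minimal Singularity definition file sketch (the file name tensorflow.def is hypothetical; the mkdir in %post is the only addition over a plain docker bootstrap):

```
Bootstrap: docker
From: tensorflow/tensorflow:latest-gpu

%post
    # create the bind point expected by singularity-gpu
    mkdir -p /usr/local/nvidia
```

Build it with `singularity build tensorflow.img tensorflow.def` and then use singularity-gpu as shown above.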

### Using with Docker

It’s much easier to just use [nvidia-docker](https://github.com/NVIDIA/nvidia-docker). If you still insist, try this (not tested, and you’ll need to adjust the devices, volume root, and driver version for your system):

```
$ docker run -ti --rm \
    --device=/dev/nvidiactl \
    --device=/dev/nvidia-uvm \
    --device=/dev/nvidia0 \
    --device=/dev/nvidia1 \
    --device=/dev/nvidia2 \
    --device=/dev/nvidia3 \
    --volume-driver=nvidia-docker \
    --volume=/usr/local/gpucrate/<driver_version>:/usr/local/nvidia:ro \
    nvidia/cuda \
    nvidia-smi
```
