
Project description

FBGEMM_GPU


FBGEMM_GPU (FBGEMM GPU kernel library) is a collection of high-performance CUDA operators for GPU training and inference.

The library provides efficient table-batched embedding bag, data layout transformation, and quantization support.
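As a conceptual illustration of the core operation, the following plain-Python sketch shows what a table-batched embedding bag computes: for each (table, sample) pair, the indexed rows are gathered and sum-pooled in one batched call. This is not the FBGEMM_GPU API, only the underlying computation.

```python
# Conceptual sketch of a table-batched embedding bag (NOT the FBGEMM_GPU API).
# For each table and each sample, gather the indexed rows and sum-pool them.

def table_batched_embedding_bag(tables, indices_per_table):
    """tables: list of embedding tables, each a list of row vectors.
    indices_per_table: for each table, a list of per-sample index lists.
    Returns, per table, one pooled (summed) vector per sample."""
    pooled = []
    for table, batch in zip(tables, indices_per_table):
        dim = len(table[0])
        out = []
        for sample_indices in batch:
            acc = [0.0] * dim
            for idx in sample_indices:
                for d in range(dim):
                    acc[d] += table[idx][d]
            out.append(acc)
        pooled.append(out)
    return pooled

# Two tiny tables, batch size 2
t0 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
t1 = [[2.0, 2.0], [3.0, 3.0]]
result = table_batched_embedding_bag([t0, t1], [[[0, 2], [1]], [[0, 1], []]])
print(result[0][0])  # [2.0, 1.0]
```

FBGEMM_GPU fuses this pattern across all tables into a single batched GPU kernel, which is what makes it efficient for recommendation workloads with many embedding tables.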

Currently tested in CI with CUDA 11.3, 11.5, 11.6, and 11.7. In all cases, we test with PyTorch packages that are built with CUDA 11.7.

Only Intel/AMD CPUs with AVX2 extensions are currently supported.
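A quick way to check for the AVX2 extension on Linux is to read the CPU flags from /proc/cpuinfo (a sketch; on non-Linux platforms this file does not exist, so the check conservatively reports False):

```python
def has_avx2():
    """Return True if /proc/cpuinfo lists the avx2 flag (Linux only)."""
    try:
        with open("/proc/cpuinfo") as f:
            return "avx2" in f.read()
    except OSError:
        # /proc/cpuinfo is unavailable (e.g. macOS, Windows)
        return False

print("AVX2 available:", has_avx2())
```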

General build and install instructions are as follows:

Build dependencies: scikit-build, cmake, ninja, jinja2, torch, cudatoolkit, and for testing: hypothesis.

conda install scikit-build jinja2 ninja cmake hypothesis

If you're planning to build from source and don't have nvml.h on your system, you can install it via the command below.

conda install -c conda-forge cudatoolkit-dev

Certain operations require this library to be present. Be sure to provide the path to libnvidia-ml.so to --nvml_lib_path if installing from source (e.g. python setup.py install --nvml_lib_path path_to_libnvidia-ml.so).

PIP install

Wheels are currently built only for sm70/sm80 (V100/A100 GPUs):

# Release GPU
conda install pytorch cudatoolkit=11.3 -c pytorch
pip install fbgemm-gpu

# Release CPU-only
conda install pytorch cudatoolkit=11.3 -c pytorch
pip install fbgemm-gpu-cpu

# Nightly GPU
conda install pytorch pytorch-cuda=11.7 -c pytorch-nightly -c nvidia
pip install fbgemm-gpu-nightly

# Nightly CPU-only
pip install --pre torch -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
pip install fbgemm-gpu-nightly-cpu

Build from source

Additional dependencies: cuDNN is currently required. Download cuDNN from NVIDIA and follow its installation instructions.

# Requires PyTorch 1.13 or later
conda install pytorch pytorch-cuda=11.7 -c pytorch-nightly -c nvidia
git clone --recursive https://github.com/pytorch/FBGEMM.git
cd FBGEMM/fbgemm_gpu
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive

# Specify CUDA version to use
# (may not be needed with only a single version installed)
export CUDA_BIN_PATH=/usr/local/cuda-11.3/
export CUDACXX=/usr/local/cuda-11.3/bin/nvcc

# Specify cuDNN library and header paths.  We tested CUDA 11.6 and 11.7 with
# cuDNN version 8.5.0.96
export CUDNN_LIBRARY=${HOME}/cudnn-linux-x86_64-8.5.0.96_cuda11-archive/lib
export CUDNN_INCLUDE_DIR=${HOME}/cudnn-linux-x86_64-8.5.0.96_cuda11-archive/include

# if using CUDA 10 or earlier, set the location to the CUB installation directory
export CUB_DIR=${CUB_DIR}
# in fbgemm_gpu folder
# build for the CUDA architecture supported by current system (or all architectures if no CUDA device present)
python setup.py install
# or build it for specific CUDA architectures (see PyTorch documentation for usage of TORCH_CUDA_ARCH_LIST)
python setup.py install -DTORCH_CUDA_ARCH_LIST="7.0;8.0"
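After installing, a quick sanity check is to confirm the extension imports. This sketch only attempts the import, so it is safe to run whether or not the build succeeded:

```python
def fbgemm_gpu_available():
    """Report whether the fbgemm_gpu extension can be imported."""
    try:
        import fbgemm_gpu  # noqa: F401
        return True
    except ImportError:
        return False

print("fbgemm_gpu importable:", fbgemm_gpu_available())
```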

Usage Example:

cd bench
python split_table_batched_embeddings_benchmark.py uvm

Issues

The build is CMake-based and keeps state across install runs, so specifying the CUDA architectures on the command line once is enough. However, after a failed build (e.g. due to missing dependencies), this cached state can cause problems. Running

python setup.py clean

to remove stale cached state can be helpful.

Examples

The tests (in test folder) and benchmarks (in bench folder) are some great examples of using FBGEMM_GPU.

Build Notes

FBGEMM_GPU uses a scikit-build, CMake-based build flow.

Dependencies

FBGEMM_GPU requires nvcc and an NVIDIA GPU with compute capability 3.5 or higher.

  • CUB

CUB is included with CUDA 11.1+; the section below is still needed for lower CUDA versions (once they are tested).

For the CUB build time dependency, if you are using conda, you can continue with

conda install -c bottler nvidiacub

Otherwise download the CUB library from https://github.com/NVIDIA/cub/releases and unpack it to a folder of your choice. Define the environment variable CUB_DIR before building and point it to the directory that contains CMakeLists.txt for CUB. For example on Linux/Mac,

curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
tar xzf 1.10.0.tar.gz
export CUB_DIR=$PWD/cub-1.10.0
  • PyTorch, Jinja2, scikit-build

PyTorch, Jinja2, and scikit-build are required to build and run the table-batched embedding bag operator. Note that the implementation of this op requires PyTorch 1.9 or later.
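A minimal sketch of the kind of version guard this implies, comparing the numeric components of dotted version strings (assumes a plain "X.Y.Z" format, ignoring local build tags):

```python
def version_at_least(version, minimum):
    """True if dotted version string `version` >= `minimum`, comparing
    up to three numeric components and ignoring any '+local' suffix."""
    def parse(v):
        return tuple(int(p) for p in v.split("+")[0].split(".")[:3] if p.isdigit())
    return parse(version) >= parse(minimum)

print(version_at_least("1.13.0", "1.9"))  # True
print(version_at_least("1.8.2", "1.9"))   # False
```

In practice the check would be applied to torch.__version__ before using the op.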

conda install scikit-build jinja2 ninja cmake

Running FBGEMM_GPU

To run the tests or benchmarks after building FBGEMM_GPU (if tests or benchmarks are built), use the following command:

# run the tests and benchmarks of table batched embedding bag op,
# data layout transform op, quantized ops, etc.
cd test
python split_table_batched_embeddings_test.py
python quantize_ops_test.py
python sparse_ops_test.py
python split_embedding_inference_converter_test.py
cd ../bench
python split_table_batched_embeddings_benchmark.py
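To give a flavor of what the quantized ops test exercises, here is a plain-Python sketch of 8-bit row-wise affine quantization, storing a scale and offset per row (a conceptual illustration, not the actual FBGEMM_GPU kernels):

```python
# Sketch of 8-bit row-wise affine quantization (conceptual, not FBGEMM_GPU code).
# Each row is mapped to [0, 255] with a per-row scale and offset.

def quantize_row(row):
    """Quantize one row of floats to 8-bit values plus (scale, offset)."""
    lo, hi = min(row), max(row)
    scale = (hi - lo) / 255.0
    if scale == 0.0:
        scale = 1.0  # constant row: avoid division by zero
    q = [round((x - lo) / scale) for x in row]
    return q, scale, lo

def dequantize_row(q, scale, lo):
    """Recover approximate floats from quantized values."""
    return [v * scale + lo for v in q]

row = [0.0, 0.5, 1.0]
q, scale, lo = quantize_row(row)
print(q)  # [0, 128, 255]
print(dequantize_row(q, scale, lo))
```

The per-row scale/offset scheme is what keeps quantization error bounded even when different rows have very different value ranges.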

To run the tests and benchmarks on a GPU-capable device in CPU-only mode, set CUDA_VISIBLE_DEVICES=-1:

CUDA_VISIBLE_DEVICES=-1 python split_table_batched_embeddings_test.py
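The variable can also be set from Python, as long as it happens before CUDA is initialized (a sketch; os.environ must be set before the first torch import in the process):

```python
import os

# Hide all GPUs from CUDA before any framework initializes it.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# Any subsequent `import torch` in this process will see no CUDA devices.
print(os.environ["CUDA_VISIBLE_DEVICES"])  # -1
```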

How FBGEMM_GPU works

For a high-level overview, design philosophy and brief descriptions of various parts of FBGEMM_GPU please see our Wiki (work in progress).

Full documentation

We make extensive use of comments in our source files. The best and most up-to-date documentation is in the source files themselves.

Building API Documentation

See docs/README.md.

Join the FBGEMM community

See the CONTRIBUTING file for how to help out.

License

FBGEMM is BSD licensed, as found in the LICENSE file.

Project details


Download files


Source Distributions

No source distribution files are available for this release.

Built Distributions


  • fbgemm_gpu-0.3.0-cp310-cp310-manylinux1_x86_64.whl (222.1 MB, CPython 3.10)

  • fbgemm_gpu-0.3.0-cp39-cp39-manylinux1_x86_64.whl (222.1 MB, CPython 3.9)

  • fbgemm_gpu-0.3.0-cp38-cp38-manylinux1_x86_64.whl (222.1 MB, CPython 3.8)

  • fbgemm_gpu-0.3.0-cp37-cp37m-manylinux1_x86_64.whl (222.1 MB, CPython 3.7m)

File hashes

Hashes for fbgemm_gpu-0.3.0-cp310-cp310-manylinux1_x86_64.whl:

SHA256: 5773fbbe2fdce3d4f0e28fb857cb9c1a04fc0c090b387f6f60fab873758a6ea5
MD5: e0293fb23313a116a0068e0073edf538
BLAKE2b-256: d789c5c72521155ea309cab78b50ca8676df1848b350db39129abcd5cdedd16f

Hashes for fbgemm_gpu-0.3.0-cp39-cp39-manylinux1_x86_64.whl:

SHA256: 2fed9c755a04851d9ef4c34407e49daa4f9e4da1bf22f5074c2a1c28c52cc4c9
MD5: 96a8510609d879a8731e82d7e7acade3
BLAKE2b-256: 430879a5006f3deacae75800dd710bbb028d1b2cbdb6e21f5bcd7e1f6d28d527

Hashes for fbgemm_gpu-0.3.0-cp38-cp38-manylinux1_x86_64.whl:

SHA256: efcac2628e3542ddba752176ad397edf4946816cb6a351e0c35a228d4d9e8f5f
MD5: 2de1c593296a666439767a57893cf0bb
BLAKE2b-256: db920c9d210095a5d9d2d0b89a55f8251013558126be713df2076ab059c6a537

Hashes for fbgemm_gpu-0.3.0-cp37-cp37m-manylinux1_x86_64.whl:

SHA256: 5b19534ae38097104c6c76c07e5f394f41af71604be6439a379821420fe902bb
MD5: 8033d5953b37e6e623813f153ff6136b
BLAKE2b-256: c1e83c5671ecf9012c134308054b6aca63f31225cb1bfd4d6c3c6188127da941
