
PyTorch domain library for recommendation systems

Project description

TorchRec (Beta Release)


TorchRec is a PyTorch domain library built to provide common sparsity & parallelism primitives needed for large-scale recommender systems (RecSys). It allows authors to train models with large embedding tables sharded across many GPUs.

TorchRec contains:

  • Parallelism primitives that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism/model-parallelism.
  • The TorchRec sharder can shard embedding tables with different sharding strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding.
  • The TorchRec planner can automatically generate optimized sharding plans for models.
  • Pipelined training overlaps dataloading device transfer (copy to GPU), inter-device communications (input_dist), and computation (forward, backward) for increased performance.
  • Optimized kernels for RecSys powered by FBGEMM.
  • Quantization support for reduced precision training and inference.
  • Common modules for RecSys (see the usage sketch after this list).
  • Production-proven model architectures for RecSys.
  • RecSys datasets (Criteo click logs and MovieLens).
  • Examples of end-to-end training, such as the DLRM event prediction model trained on the Criteo click logs dataset.
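
As a quick illustration of the common modules above, the sketch below builds a small EmbeddingBagCollection and feeds it a KeyedJaggedTensor, TorchRec's sparse input format for variable-length lists of ids. It assumes a working torchrec install; the table names, feature names, and id values are made up for illustration.

    import torch
    import torchrec

    # Two pooled embedding tables, keyed by the features they serve.
    ebc = torchrec.EmbeddingBagCollection(
        device=torch.device("cpu"),
        tables=[
            torchrec.EmbeddingBagConfig(
                name="product_table",
                embedding_dim=64,
                num_embeddings=4096,
                feature_names=["product"],
            ),
            torchrec.EmbeddingBagConfig(
                name="user_table",
                embedding_dim=64,
                num_embeddings=4096,
                feature_names=["user"],
            ),
        ],
    )

    # A batch of 3 examples with a variable number of ids per feature,
    # expressed as a KeyedJaggedTensor (flat values + per-example lengths).
    features = torchrec.KeyedJaggedTensor(
        keys=["product", "user"],
        values=torch.tensor([101, 202, 303, 404, 505, 606]),
        lengths=torch.tensor([2, 0, 1, 1, 1, 1]),
    )

    pooled = ebc(features)
    print(pooled.keys())          # ['product', 'user']
    print(pooled.values().shape)  # torch.Size([3, 128]) -- 64 dims per feature, concatenated

In a multi-GPU setting, the same module can be wrapped with DistributedModelParallel so that the planner and sharder distribute its tables across devices using the strategies listed above.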

Installation

TorchRec requires Python >= 3.7. CUDA >= 11.0 is highly recommended for performance but not strictly required. The example below shows how to install with CUDA 11.3. This setup assumes you have conda installed.

Binaries

Experimental binaries for Linux on Python 3.7, 3.8, and 3.9 can be installed via pip wheels.

CUDA

conda install pytorch cudatoolkit=11.3 -c pytorch-nightly
pip install torchrec-nightly

CPU Only

conda install pytorch cpuonly -c pytorch-nightly
pip install torchrec-nightly-cpu
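
Either way, a quick import check can serve as a minimal smoke test (this is only a sanity check; the fuller test script is described in the From Source section below):

    # Minimal smoke test after installing the nightly wheel.
    import torch
    import torchrec

    print(torch.cuda.is_available())        # True for the CUDA build, False for CPU-only
    print(torchrec.EmbeddingBagCollection)  # core TorchRec module should be importable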

Colab example: introduction + install

See our Colab notebook for an introduction to TorchRec, which includes a runnable installation. - Tutorial Source - Open in Google Colab

From Source

We are currently iterating on the setup experience. For now, we provide manual instructions on how to build from source. The example below shows how to install with CUDA 11.3. This setup assumes you have conda installed.

  1. Install PyTorch. See the PyTorch documentation.

    conda install pytorch cudatoolkit=11.3 -c pytorch-nightly
    
  2. Install requirements.

    pip install -r requirements.txt
    
  3. Next, install FBGEMM_GPU from source (included in the third_party folder of torchrec) by following the directions here. Installing fbgemm_gpu is optional, but using FBGEMM with CUDA will be much faster. For CUDA 11.3 and the SM80 (Ampere) architecture, the following instructions can be used:

    export CUB_DIR=/usr/local/cuda-11.3/include/cub
    export CUDA_BIN_PATH=/usr/local/cuda-11.3/
    export CUDACXX=/usr/local/cuda-11.3/bin/nvcc
    python setup.py install -DTORCH_CUDA_ARCH_LIST="7.0;8.0"
    

    The last line of the above code block (python setup.py install...), which manually installs fbgemm_gpu, can be skipped if you do not need to build fbgemm_gpu with custom build-related flags. If that is the case, skip to the next step.

  4. Download and install TorchRec.

    git clone --recursive https://github.com/pytorch/torchrec
    
    # cd to the directory where torchrec's setup.py is located. Then run one of the below:
    cd torchrec
    python setup.py install develop --skip_fbgemm  # If you manually installed fbgemm_gpu in the previous step.
    python setup.py install develop                # Otherwise. This will run the fbgemm_gpu install step for you behind the scenes.
    python setup.py install develop --cpu_only     # For a CPU only installation of FBGEMM
    
  5. Test the installation.

    GPU mode
    
    torchx run -s local_cwd dist.ddp -j 1x2 --script test_installation.py
    
    CPU mode
    
    torchx run -s local_cwd dist.ddp -j 1x2 --script test_installation.py -- --cpu_only
    

    See TorchX for more information on launching distributed and remote jobs.
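
    For reference, a script launched this way typically follows the pattern below: each process initializes a process group, builds the model on the meta device, and wraps it in DistributedModelParallel, which shards the embedding tables across ranks. This is only a hedged sketch of the general pattern, not the contents of test_installation.py, and the table and feature names are illustrative.

    import os

    import torch
    import torch.distributed as dist
    import torchrec
    from torchrec.distributed.model_parallel import DistributedModelParallel


    def main() -> None:
        # torchx's dist.ddp component launches one process per replica
        # (-j 1x2 = 1 node x 2 processes) and sets RANK / LOCAL_RANK / WORLD_SIZE.
        local_rank = int(os.environ["LOCAL_RANK"])
        if torch.cuda.is_available():
            device = torch.device(f"cuda:{local_rank}")
            torch.cuda.set_device(device)
            dist.init_process_group(backend="nccl")
        else:
            device = torch.device("cpu")
            dist.init_process_group(backend="gloo")

        # Declare the model on the meta device; DistributedModelParallel decides
        # where each embedding table actually lives and how it is sharded.
        ebc = torchrec.EmbeddingBagCollection(
            device=torch.device("meta"),
            tables=[
                torchrec.EmbeddingBagConfig(
                    name="table_0",
                    embedding_dim=64,
                    num_embeddings=4096,
                    feature_names=["f0"],
                ),
            ],
        )
        model = DistributedModelParallel(ebc, device=device)
        print(model.plan)  # the sharding plan chosen by the planner


    if __name__ == "__main__":
        main()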

  6. If you want to run a more complex example, please take a look at the torchrec DLRM example.

License

TorchRec is BSD licensed, as found in the LICENSE file.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release.

Built Distributions

torchrec_nightly-2022.4.26-py39-none-any.whl (92.8 MB, Python 3.9)

torchrec_nightly-2022.4.26-py38-none-any.whl (92.8 MB, Python 3.8)

torchrec_nightly-2022.4.26-py37-none-any.whl (92.8 MB, Python 3.7)

File details

Hashes for torchrec_nightly-2022.4.26-py39-none-any.whl:

    SHA256: ebb24071c81b87750380c1df8b10f120c8c7b91a7ce79840617a74887a660e07
    MD5: cdfa909fdfbdbf57c864f7ae283075a1
    BLAKE2b-256: 3d0bd44fe28231f6b3f43f8526d3bb3bcdff8a600d5fdb516c3601116c37ce71

Hashes for torchrec_nightly-2022.4.26-py38-none-any.whl:

    SHA256: 622fbae1652678adf0a870340e326286eda3a70d91b25e13ea2a65a8cbd58603
    MD5: acd217a9f1c6e191b987ba0204537946
    BLAKE2b-256: f6bb171ab7d7d880b79dbd20c95a51637076f3c82c848abbd828d19a42ac1111

Hashes for torchrec_nightly-2022.4.26-py37-none-any.whl:

    SHA256: 7670af67375fd6c47cb32cbc7ef7961a272ac814c6d460d69d7499dd7b7e7caf
    MD5: a237563c565ecfd10351c2024b665313
    BLAKE2b-256: 21101e7007b9779378a927f0d1d47471b34cbd1ab418a14d7c4eb15c557f2340
