
Low-Precision Arithmetic Simulation in PyTorch

Project description

QPyTorch

License: MIT

News:

  • Updated to version 0.2.0:
    • Bug fixed: previously, in our floating-point quantization, numbers closer to 0 than the smallest representable positive number were always rounded to that smallest representable positive number. We now round to 0 or to the smallest representable number, whichever is nearer (see the sketch after this list).
    • Different behavior: to be consistent with PyTorch Issue #17443, we now round to nearest even.
    • We migrated to PyTorch 1.5.0. There are several changes in the C++ API of PyTorch, so this new version is not backward-compatible with older PyTorch releases.
    • Note: if you are using CUDA 10.1, please install CUDA 10.1 Update 1 (or a later version). A bug in the first release of CUDA 10.1 leads to compilation errors.
    • Note: previous users, please remove the cached extension builds in the PyTorch extension directory. For example, run rm -rf /tmp/torch_extensions/quant_cpu /tmp/torch_extensions/quant_cuda if you are using the default directory for PyTorch extensions.
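
A minimal sketch of the fixed rounding behavior, using the package's float_quantize function; the 8-bit format (exp=5, man=2) and the sample values are illustrative:

import torch
from qtorch.quant import float_quantize

# values closer to 0 than the smallest representable positive number of
# the target format now round to 0 or to that number, whichever is
# nearer; before 0.2.0 they always rounded up to it
x = torch.tensor([1e-6, 1e-9, 1e-12])
print(float_quantize(x, exp=5, man=2, rounding="nearest"))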

QPyTorch is a low-precision arithmetic simulation package in PyTorch. It is designed to support research on low-precision machine learning, especially research in low-precision training.

Notably, QPyTorch supports quantizing the different numbers that appear in the training process (e.g., weights, activations, gradients) with customized low-precision formats. This eases the process of investigating different precision settings and developing new deep learning architectures. More concretely, QPyTorch implements fused kernels for quantization and integrates smoothly with existing PyTorch kernels (e.g., matrix multiplication, convolution), as sketched below.
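
A minimal sketch of that workflow, based on the package's documented Quantizer module; the 8-bit format and layer sizes here are illustrative:

import torch
from qtorch import FloatingPoint
from qtorch.quant import Quantizer

# a custom 8-bit floating point format: 5 exponent bits, 2 mantissa bits
bit_8 = FloatingPoint(exp=5, man=2)

# Quantizer is an nn.Module: it quantizes activations on the forward
# pass and gradients on the backward pass
Q = Quantizer(forward_number=bit_8, backward_number=bit_8,
              forward_rounding="nearest", backward_rounding="stochastic")

# drop it between ordinary full-precision PyTorch layers
model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),
    Q,
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)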

Recent research can be reimplemented easily through QPyTorch. We offer an example replication of WAGE in the downstream repo WAGE, and we provide a list of working examples under Examples.

Note: QPyTorch relies on PyTorch functions for the underlying computation, such as matrix multiplication. This means that the actual computation is done in single precision. Therefore, QPyTorch is not intended to be used to study the numerical behavior of different accumulation strategies.
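
To make this concrete, here is the simulation pattern the note implies, with an illustrative format and shapes: the operands and result of the matrix multiplication are quantized, but the products are accumulated by PyTorch's single-precision kernel.

import torch
from qtorch.quant import float_quantize

def q(t):
    # illustrative 8-bit float: 5 exponent bits, 2 mantissa bits
    return float_quantize(t, exp=5, man=2, rounding="nearest")

a, b = torch.rand(16, 16), torch.rand(16, 16)
# the matmul itself runs in fp32; only its inputs and output are low
# precision, so accumulation effects are not simulated
c = q(torch.matmul(q(a), q(b)))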

Note: QPyTorch, as of now, has a different rounding mode from PyTorch: QPyTorch does round-away-from-zero while PyTorch does round-to-nearest-even. This creates a discrepancy between PyTorch half-precision tensors and QPyTorch's simulation of half-precision numbers, as the snippet below demonstrates.
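
One way to observe the discrepancy; the random input is illustrative, and the two outputs may differ in the last mantissa bit:

import torch
from qtorch.quant import float_quantize

x = torch.rand(4)
# PyTorch half precision: round-to-nearest-even
print(x.half().float())
# QPyTorch's simulation of half precision: 5 exponent bits, 10 mantissa bits
print(float_quantize(x, exp=5, man=10, rounding="nearest"))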

Installation

Requirements:

  • Python >= 3.6
  • PyTorch >= 1.5.0
  • GCC >= 4.9 on Linux
  • CUDA >= 10.1 on Linux

Install the other requirements with:

pip install -r requirements.txt

Install QPyTorch through pip:

pip install qtorch

For more details about compiler requirements, please refer to the PyTorch extension tutorial.
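
As a quick sanity check after installation (the C++/CUDA extensions are compiled on first import, so the first run may take a while), here is a minimal sketch with an illustrative fixed-point format:

import torch
from qtorch.quant import fixed_point_quantize

x = torch.rand(5)
# simulate 8-bit fixed point with 4 fractional bits, rounding to nearest
print(fixed_point_quantize(x, wl=8, fl=4, rounding="nearest"))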

Documentation

See our readthedocs page.

Tutorials

Examples

  • Low-precision VGGs and ResNets using fixed point and block floating point on CIFAR and ImageNet. lp_train
  • Reproduction of WAGE in QPyTorch. WAGE
  • Implementation (simulation) of 8-bit Floating Point Training in QPyTorch. IBM8

Team


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Files for qtorch, version 0.2.0

Filename                        Size     File type  Python version
qtorch-0.2.0-py3-none-any.whl   22.8 kB  Wheel      py3
qtorch-0.2.0.tar.gz             20.0 kB  Source     None
