
Low-Precision Arithmetic Simulation in PyTorch - Extension for Posit

Project description

Extended version of QPyTorch

Author: minhhn2910@github, himeshi@github

Install in developer mode:

git clone https://github.com/minhhn2910/QPyTorch.git
cd QPyTorch
pip install -e ./

Run a simple test to check that the C extension is working correctly:

python test.py

Important: if errors occur when running test.py, please export the environment variables below to set the build directory and/or CUDA_HOME; otherwise there may be permission problems on a multi-user server.

export TORCH_EXTENSIONS_DIR=/[your-home-folder]/torch_extension
export CUDA_HOME=/[your cuda installation directory, e.g. /usr/local/cuda-10.2]
python test.py

Functionality:

Currently under development.

The original README file follows below.

QPyTorch

License: MIT

News:

  • Updated to version 0.2.0:
    • Bug fixed: previously, in our floating-point quantization, numbers closer to 0 than the smallest representable positive number were rounded to that smallest representable positive number. Now we round to 0 or to the smallest representable number, whichever is nearer.
    • Different behavior: to be consistent with PyTorch Issue #17443, we now round to nearest even.
    • We migrated to PyTorch 1.5.0. There are several changes in the C++ API of PyTorch, so this new version is not backward-compatible with older PyTorch releases.
    • Note: if you are using CUDA 10.1, please install CUDA 10.1 Update 1 (or a later version). There is a bug in the first release of CUDA 10.1 that leads to compilation errors.
    • Note: previous users, please remove the cache in the PyTorch extension directory. For example, you can run rm -rf /tmp/torch_extensions/quant_cuda /tmp/torch_extensions/quant_cpu if you are using the default directory for PyTorch extensions.
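The round-to-nearest-even rule adopted in 0.2.0 can be illustrated with a small standalone sketch (plain Python, not the actual QPyTorch kernels): snap a value to a uniform grid, with exact ties going to the even multiple. Python's built-in round() already implements round-half-to-even, which makes the tie-breaking visible.

```python
def quantize_rte(x, step):
    """Round x to the nearest multiple of `step`; ties go to the even multiple
    (Python's round() implements round-half-to-even)."""
    return round(x / step) * step

# Tie cases on a grid of step 0.25 (all values exact in binary):
print(quantize_rte(0.125, 0.25))  # halfway between 0.0 and 0.25 -> 0.0 (even)
print(quantize_rte(0.375, 0.25))  # halfway between 0.25 and 0.5 -> 0.5 (even)
# The 0.2.0 bug fix: values nearer to 0 than the smallest positive grid
# point now round to 0 rather than up to that grid point.
print(quantize_rte(0.1, 0.25))    # -> 0.0
```

The step size 0.25 is chosen only so the halfway points are exactly representable in binary floating point.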

Overview

QPyTorch is a low-precision arithmetic simulation package in PyTorch. It is designed to support research on low-precision machine learning, especially low-precision training. A more comprehensive write-up can be found here.

Notably, QPyTorch supports quantizing different numbers in the training process with customized low-precision formats. This eases the process of investigating different precision settings and developing new deep learning architectures. More concretely, QPyTorch implements fused kernels for quantization and integrates smoothly with existing PyTorch kernels (e.g. matrix multiplication, convolution).
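The idea of inserting a quantizer between ordinary full-precision operations can be sketched in plain Python (a standalone illustration under simplified assumptions, not QPyTorch's actual API or fused kernels): each intermediate value is snapped to a low-precision fixed-point grid while the arithmetic itself runs at full precision.

```python
def make_fixed_point_quantizer(wl, fl):
    """Return a function simulating wl-bit fixed point with fl fractional
    bits: scale, round to nearest, clamp to the representable range."""
    scale = 2 ** fl
    lo = -(2 ** (wl - 1))       # most negative integer code
    hi = 2 ** (wl - 1) - 1      # most positive integer code
    def quant(x):
        code = round(x * scale)
        code = max(lo, min(hi, code))
        return code / scale
    return quant

q = make_fixed_point_quantizer(wl=8, fl=4)  # 8-bit word, 4 fractional bits
a, b = q(0.3), q(1.7)   # inputs quantized to the grid
y = q(a * b)            # multiply in full precision, then quantize the result
print(a, b, y)
```

This mirrors the simulation strategy described above: the compute stays in full precision, and only the stored numbers are constrained to the low-precision format.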

Recent research can be reimplemented easily through QPyTorch. We offer an example replication of WAGE in a downstream repo, WAGE. We also provide a list of working examples under Examples.

Note: QPyTorch relies on PyTorch functions for the underlying computation, such as matrix multiplication. This means that the actual computation is done in single precision. Therefore, QPyTorch is not intended to be used to study the numerical behavior of different accumulation strategies.

Note: QPyTorch currently has a different rounding mode from PyTorch: QPyTorch rounds away from zero, while PyTorch rounds to nearest even. This creates a discrepancy between PyTorch half-precision tensors and QPyTorch's simulation of half-precision numbers.
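The discrepancy is easiest to see at exact ties. A minimal sketch (plain Python, not the actual kernels) comparing the two rules:

```python
import math

def round_away_from_zero(x):
    """Ties go away from zero (QPyTorch-style tie-breaking)."""
    return math.copysign(math.floor(abs(x) + 0.5), x)

def round_half_even(x):
    """Python's built-in round() implements round-half-to-even,
    the same tie-breaking rule PyTorch uses."""
    return float(round(x))

for v in (0.5, 1.5, 2.5, -1.5):
    print(v, round_away_from_zero(v), round_half_even(v))
```

The two rules agree except at ties whose lower neighbor is even: 0.5 rounds to 1 under round-away-from-zero but to 0 under round-to-nearest-even, and 2.5 rounds to 3 versus 2.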

If you find this repo useful, please cite:

@misc{zhang2019qpytorch,
    title={QPyTorch: A Low-Precision Arithmetic Simulation Framework},
    author={Tianyi Zhang and Zhiqiu Lin and Guandao Yang and Christopher De Sa},
    year={2019},
    eprint={1910.04540},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

Installation

Requirements:

  • Python >= 3.6
  • PyTorch >= 1.5.0
  • GCC >= 4.9 on Linux
  • CUDA >= 10.1 on Linux

Install other requirements by:

pip install -r requirements.txt

Install QPyTorch through pip:

pip install qtorch

For more details about compiler requirements, please refer to PyTorch extension tutorial.

Documentation

See our readthedocs page.

Tutorials

Examples

  • Low-Precision VGGs and ResNets using fixed point, block floating point on CIFAR and ImageNet. lp_train
  • Reproduction of WAGE in QPyTorch. WAGE
  • Implementation (simulation) of 8-bit Floating Point Training in QPyTorch. IBM8

Team



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

qtorch_posit-0.1.1.tar.gz (28.4 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

qtorch_posit-0.1.1-py3-none-any.whl (33.8 kB)

Uploaded Python 3

File details

Details for the file qtorch_posit-0.1.1.tar.gz.

File metadata

  • Download URL: qtorch_posit-0.1.1.tar.gz
  • Upload date:
  • Size: 28.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/50.3.0 requests-toolbelt/0.9.1 tqdm/4.50.0 CPython/3.7.9

File hashes

Hashes for qtorch_posit-0.1.1.tar.gz
Algorithm Hash digest
SHA256 392dd98f57238d9a6b99e28dd8e67b287aa6c99e5ff5d6d7b2c411c7be8e03af
MD5 7366cc98aba760d68cab9aa69bb8b125
BLAKE2b-256 09e759c2a7a933b82b7ffcdef3da7a718f4909d18ba4d7622ed25880f6c762d9

See more details on using hashes here.
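One way to check a downloaded archive against the SHA256 digest listed above (a generic sketch using only the standard library; the file name is assumed to match the listing):

```python
import hashlib

def sha256_of(path, chunk_size=8192):
    """Compute the SHA256 hex digest of a file, streaming in chunks
    so large archives don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the SHA256 published above for the source distribution:
expected = "392dd98f57238d9a6b99e28dd8e67b287aa6c99e5ff5d6d7b2c411c7be8e03af"
# sha256_of("qtorch_posit-0.1.1.tar.gz") == expected
```

A mismatch means the download is corrupted or has been tampered with and should be discarded.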

File details

Details for the file qtorch_posit-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: qtorch_posit-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 33.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/50.3.0 requests-toolbelt/0.9.1 tqdm/4.50.0 CPython/3.7.9

File hashes

Hashes for qtorch_posit-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 82dcdab04ed8290af2e1d9f1d79372c389043790b0b29e44a11cfef23f00d01d
MD5 c863296cacfb56261b75a0a9310ef35e
BLAKE2b-256 6ed82796469f8bd5e232382bef94baf05d04cc12bf6b2b312ca4594bb0f8b335

See more details on using hashes here.
