
Low-Precision Arithmetic Simulation in PyTorch - Extension for Posit and Custom Number Formats


QPyTorch+: Extending QPyTorch for the Posit format and more

Author: minhhn2910@github, himeshi@github


Install

Install in development (editable) mode:

git clone https://github.com/minhhn2910/QPyTorch.git
cd QPyTorch
pip install -e ./

Run a simple test to check that the C extension is working correctly:

python test.py

Important: if there are errors when running test.py, export the environment variables pointing to the extension build directory and/or CUDA_HOME; otherwise there may be permission problems on multi-user servers.

export TORCH_EXTENSIONS_DIR=/[your-home-folder]/torch_extension
export CUDA_HOME=[your CUDA installation directory, e.g. /usr/local/cuda-10.2]
python test.py

Functionality:

  • Support for the Posit format with round-to-nearest mode.
  • Scaling a value before and after conversion to/from posit (an exponent bias when the scale is a power of 2).
    For example: value x -> x*scale -> Posit(x*scale) -> x (see the sketch after this list).
  • Support for a Tanh approximation with Posit and correction of its error:
    when x is encoded as a posit with es = 0, Sigmoid(x) ≈ (x XOR 0x8000) >> 2, and PositTanh(x) = 2 · Sigmoid(2x) − 1.
  • More number formats (table lookup, log2 system, ...) and new rounding modes will be supported in future versions.
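
A minimal sketch of the scaling scheme and the tanh correction above, assuming a posit_quantize helper in qtorch_plus.quant with nsize/es keyword arguments (the exact function name and signature may differ; see the tutorial notebooks for the real API):

import torch

# Assumed import and signature; check examples/tutorial/ for the exact API
# exposed by qtorch_plus.quant -- the keyword names below are illustrative.
from qtorch_plus.quant import posit_quantize

def scaled_posit_quantize(x, nsize=8, es=1, scale=4.0):
    """Pre/post scaling around the posit conversion (an exponent bias when
    scale is a power of 2): x -> x*scale -> Posit(x*scale) -> /scale."""
    return posit_quantize(x * scale, nsize=nsize, es=es) / scale

def posit_tanh(x):
    """Tanh written through sigmoid, as in the correction formula above:
    PositTanh(x) = 2 * Sigmoid(2x) - 1. The exact sigmoid is used here; the
    posit version replaces it with the bit-level approximation."""
    return 2.0 * torch.sigmoid(2.0 * x) - 1.0

x = torch.randn(8)
print(scaled_posit_quantize(x))                      # posit-quantized values
print(torch.allclose(posit_tanh(x), torch.tanh(x)))  # True: the identity is exact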

The package is under active development to support more number formats and schemes.


Demos and tutorials:

  • The approximate Tanh function with Posit is presented in examples/tutorial/test_posit_func.ipynb
  • Most functionality can be tested using the notebooks in the posit tutorials: ./examples/tutorial/
  • Notebook demo of training CIFAR-10 with vanilla 8-bit Posit: examples/tutorial/CIFAR10_Posit_Training_Example.ipynb
  • Demo of DCGAN CIFAR-10 training with 8-bit Posit: Google Colab Link
  • Demo of DCGAN LSUN inference using 6-bit Posit and the approximate Tanh: Google Colab Link
  • Demo of applying 6-bit and 8-bit posits to ALBERT for the question-answering task: Google Colab Demo

If you find this repo useful, please cite our paper(s) listed below. They also explain the terminology and usage of the posit enhancements (exponent bias and tanh function).

@inproceedings{ho2021posit,
  title={Posit Arithmetic for the Training and Deployment of Generative Adversarial Networks},
  author={Ho, Nhut-Minh and Nguyen, Duy-Thanh and De Silva, Himeshi and Gustafson, John L and Wong, Weng-Fai and Chang, Ik Joon},
  booktitle={2021 Design, Automation \& Test in Europe Conference \& Exhibition (DATE)},
  pages={1350--1355},
  year={2021},
  organization={IEEE}
}


The original QPyTorch package, which supports floating point and fixed point:

The original README file is in README.original.md

Credit to the QPyTorch team and their original publication:

@misc{zhang2019qpytorch,
    title={QPyTorch: A Low-Precision Arithmetic Simulation Framework},
    author={Tianyi Zhang and Zhiqiu Lin and Guandao Yang and Christopher De Sa},
    year={2019},
    eprint={1910.04540},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
QPyTorch Team
