
Python code for simulating low precision floating-point arithmetic


pychop


A Python package for simulating low precision floating-point arithmetic

Using low precision arithmetic can yield extra speedup while reducing storage and energy costs. The intention of pychop, following the same functionality as the chop software in MATLAB provided by Nick Higham, is to simulate low precision formats on top of single and double precision, which are prevalent on modern machines.
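
For instance, a minimal usage sketch with the NumPy backend, assuming the chop-style constructor chop(prec=..., rmode=..., subnormal=...) implied by the description below; exact import paths and argument names may differ between releases:

    import numpy as np
    from pychop import chop   # assumed import path; may differ between versions

    np.random.seed(0)
    X = np.random.randn(100, 100)    # values generated in double precision

    ch = chop(prec='h', rmode=1)     # IEEE half precision, round to nearest
    X_h = ch(X)                      # X rounded as if stored in fp16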

The supported rounding modes include:

  1. Round to nearest using round to even last bit to break ties (the default).

  2. Round towards plus infinity (round up).

  3. Round towards minus infinity (round down).

  4. Round towards zero.

  5. Stochastic rounding - round to the next larger or next smaller floating-point number with probability proportional to the distance to those floating-point numbers (see the sketch after this list).

  6. Stochastic rounding - round to the next larger or next smaller floating-point number with equal probability.
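
As a concrete illustration of mode 5 (this is not pychop's internal implementation), the sketch below rounds to t significand bits, rounding up with probability equal to the fraction of the gap between the two neighbouring representable numbers that x has already crossed:

    import numpy as np

    def stochastic_round(x, t, rng=np.random.default_rng(0)):
        # Mode-5 stochastic rounding of x to t significand bits.
        # Illustration only: assumes nonzero finite inputs and ignores
        # the exponent range and subnormals that a real format imposes.
        e = np.floor(np.log2(np.abs(x)))     # exponent of each entry
        ulp = 2.0 ** (e - (t - 1))           # gap between neighbours around x
        lo = np.floor(x / ulp) * ulp         # next smaller representable value
        frac = (x - lo) / ulp                # fraction of the gap already crossed
        return lo + ulp * (rng.random(x.shape) < frac)   # round up w.p. frac

    x = np.full(100000, 0.1)
    print(stochastic_round(x, 11).mean())    # ~0.1: unbiased on average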

Subnormal numbers are supported; if they are not required, they are flushed to zero (by setting subnormal to 0).
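
For example (same assumed interface as the sketch above): the smallest normal fp16 number is 2**-14, about 6.1e-5, so with subnormal=0 any smaller magnitude is flushed to zero, while with subnormal=1 it rounds into the subnormal range:

    import numpy as np
    from pychop import chop   # same assumed interface as above

    x = np.array([1e-6])            # below fp16's smallest normal number, 2**-14
    chop(prec='h', subnormal=0)(x)  # flushed to zero -> array([0.])
    chop(prec='h', subnormal=1)(x)  # rounds to the nearest fp16 subnormal instead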

This package provides APIs consistent with Nick Higham's chop software as far as possible. For the first four rounding modes, given the same user-specified parameters, pychop generates exactly the same results as the chop software. For stochastic rounding (rmode 5 and 6), both produce the same results if they are given the same random numbers.

The supported floating-point formats

The supported floating-point arithmetic formats are:

Format                  Description
'q43', 'fp8-e4m3'       NVIDIA quarter precision (4 exponent bits, 3 significand (mantissa) bits)
'q52', 'fp8-e5m2'       NVIDIA quarter precision (5 exponent bits, 2 significand bits)
'b', 'bfloat16'         bfloat16
'h', 'half', 'fp16'     IEEE half precision (the default)
's', 'single', 'fp32'   IEEE single precision
'd', 'double', 'fp64'   IEEE double precision
'c', 'custom'           custom format
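
The stored significand bits determine each format's unit roundoff u = 2**(-t), where t counts the significand bits including the implicit leading bit; a quick plain-Python check (the format list here simply restates the table above):

    # Unit roundoff u = 2**(-t), with t = stored significand bits + 1 (implicit bit)
    formats = {'q43': 3, 'q52': 2, 'bfloat16': 7, 'fp16': 10, 'fp32': 23, 'fp64': 52}
    for name, m in formats.items():
        print(f"{name:>8}: u = 2**-{m + 1} = {2.0 ** -(m + 1):.2e}")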

Install

pychop requires only the following dependencies:

  • numpy >=1.7.3
  • torch (only for torch-chop)

To install the current release via pip, use:

NumPy backend:  pip install pychop
Torch backend:  pip install torch-chop

Contributing

We welcome contributions in any form! Assistance with documentation is always welcome. To contribute, feel free to open an issue, or fork the project, make your changes, and submit a pull request. We will do our best to work through any issues and requests.

References

[1] Nicholas J. Higham and Srikara Pranesh, Simulating Low Precision Floating-Point Arithmetic, SIAM J. Sci. Comput., 2019.

[2] IEEE Standard for Floating-Point Arithmetic, IEEE Std 754-2019 (revision of IEEE Std 754-2008), IEEE, 2019.

[3] Intel Corporation, BFLOAT16 - Hardware Numerics Definition, 2018.

