wageubn
A PyTorch implementation of WAGEUBN.
Notice
This repo is based on the same framework as tqt and focuses on inference only. Although the quantized error and gradient can be obtained from wageubn.function.errorquant
and wageubn.function.gradquant
, we do not use them here. If they are essential for your training, please fork this repo and wrap the wageubn.function
modules with them, as sketched below.
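One way to do that wrapping is a thin nn.Module around a wageubn.function module. This is a minimal sketch only: the wrapper class and the errorquant(tensor, bit_width) call signature are assumptions, so check wageubn/function for the real interface before relying on it.

```python
import torch.nn as nn
import wageubn.function as qfn

class ErrorQuantWrapper(nn.Module):
    """Hypothetical wrapper: quantize the tensor flowing out of a
    wageubn.function module via errorquant. The errorquant signature
    used here is an assumption -- verify it in wageubn/function."""

    def __init__(self, module, error_bit_width=8):
        super().__init__()
        self.module = module
        self.error_bit_width = error_bit_width

    def forward(self, x):
        out = self.module(x)
        # assumed interface: errorquant(tensor, bit_width) -> quantized tensor
        return qfn.errorquant(out, self.error_bit_width)
```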
Now available at https://pypi.org/project/wageubn/0.1.0/.
wageubn's modules
wageubn.function
function
is a re-implementation of torch.nn.modules
. Besides all the arguments used in the original module, a quantized module takes two kinds of optional arguments: bit_width
and retrain
.
bit_width
comes in two types: weight/bias and activation.
If retrain
is True
, the module runs in Retrain Mode, with log2_t
trainable. Otherwise it runs in Static Mode, where log2_t
is fixed by initialization and is not trainable.
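A construction sketch under those assumptions follows. The keyword names mirror the description above, but the real signatures live in wageubn/function, so treat this as illustrative:

```python
import wageubn.function as qfn

# Hypothetical usage -- keyword names follow the description above;
# verify the actual signatures in wageubn/function.
conv = qfn.Conv2d(
    in_channels=64, out_channels=128, kernel_size=3, padding=1,  # usual nn.Conv2d args
    bit_width=8,    # assumed: the weight/bias bit width for this module
    retrain=True,   # Retrain Mode: log2_t is a trainable parameter
)

# Static Mode: log2_t is fixed by its initialization and never updated.
fc = qfn.Linear(128, 10, bit_width=8, retrain=False)
```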
wageubn.config
Configure the bit widths via wageubn.config.Config
and wageubn.config.network_config
. wageubn.config.Config
is a namedtuple, and you can set each bit width as one of its fields.
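A minimal sketch of working with Config; the field names here are illustrative assumptions, and the real ones (and how network_config consumes them) are defined in wageubn/config.py:

```python
from wageubn.config import Config

# Config is a plain namedtuple; each field holds a bit width.
# Field names below are assumed for illustration.
cfg = Config(weight=8, bias=8, activation=8)

print(cfg.weight)                 # fields read like attributes
cfg = cfg._replace(activation=4)  # namedtuples are immutable; _replace returns a copy
```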
Contributing
It would be great if you helped make this project better! Here are some ways to contribute:
- To start with, issues and feature requests let maintainers know what's wrong or what needs to be added.
- If you use the package in your work/repo, just cite the repo and add a dependency note!
- You can add more functions from
torch.nn
, like HardTanh
, and feel free to open a pull request! The code style is simple, as in the existing modules.
Acknowledgment
The original paper can be found on arXiv: Training high-performance and large-scale deep neural networks with full 8-bit integers.