QNQ -- QNQ's not quantization
Version 1.1.0 (2021.2.5)
Description
This toolkit helps the Techart algorithm team quantize the pretrained models of their custom neural networks. The toolkit is currently in beta; contact me by email (dongz.cn@outlook.com) to request new ops or report bugs.
How to install
pip install qnq
How to use
This README is at a very early stage and will be updated soon. Visit https://git.zwdong.com/zhiwei.dong/qnq_tutorial for more QNQ examples.
- Prepare your model.
  - Check whether your model contains non-class operators, such as torch.matmul.
  - If it does, add `from qnq.operators.torchfunc_ops import *` to your code, then replace each non-class operator with its class wrapper. You can refer to the following example:
```python
#! add by dongz
class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu1 = nn.ReLU(inplace=True)
        self.relu2 = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample
        self.stride = stride
        #! add by dongz
        self.torch_add = TorchAdd()

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu1(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            identity = self.downsample(x)

        #! add by dongz
        out = self.torch_add(out, identity)  # out += identity
        out = self.relu2(out)

        return out
```
- Prepare `metrics`, `metrics_light` (optional), and `steper` (see the sketch after this list).
  - Choose at least 1k samples to calibrate your quantized model.
  - `metrics` runs inference, takes no input parameters, and returns the metric value (a float).
  - `metrics_light` does the same, but on a lighter split; you can use 1/10 of the test set.
  - `steper` also runs inference with no input parameters, but additionally calls `quant.step()` and returns nothing.
  - Check qnq_tutorial for details.
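As a reference, a minimal sketch of what the three callbacks could look like. This is not from the official docs: `val_loader`, `light_loader`, `calib_loader`, and the accuracy metric are assumptions, and `quant` is the QNQ object created in the quantization step below.

```python
import torch

# Assumes `model` and the DataLoaders named below already exist.
def metrics():
    # Full evaluation: no input parameters, returns the metric as a float.
    correct = total = 0
    with torch.no_grad():
        for x, y in val_loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

def metrics_light():
    # Same evaluation on a reduced split (e.g. 1/10 of the test set).
    correct = total = 0
    with torch.no_grad():
        for x, y in light_loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

def steper():
    # Calibration pass: plain inference plus quant.step(), no return value.
    with torch.no_grad():
        for x, _ in calib_loader:
            model(x)
            quant.step()
```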
- Prepare pretrained checkpoints.
  - Train your model and use `torch.save()` to save your checkpoints.
  - Use `checkpoints = torch.load(checkpoints_path)` and `model.load_state_dict(checkpoints)` to load your checkpoints, as shown below.
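A minimal sketch of the save/load round trip; `checkpoints_path` is whatever path you choose.

```python
import torch

# After training: save the weights.
torch.save(model.state_dict(), checkpoints_path)

# Before quantization: load them back.
checkpoints = torch.load(checkpoints_path)
model.load_state_dict(checkpoints)
```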
- Quantize (a combined sketch follows this list).
  - For code:
    - Add `from qnq import QNQ`.
    - Add `quant = QNQ(model, save_path, config_path, metrics, metrics_light, steper)`.
    - Add `quant.search()`.
  - The first run will exit, but a YAML config file will appear at `config_path`.
  - Edit the config YAML and rerun for quantization.
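Putting the steps together, a minimal end-to-end sketch; `save_path` and `config_path` are placeholder paths of your choosing.

```python
from qnq import QNQ

# `model` has its pretrained checkpoint loaded; `metrics`, `metrics_light`,
# and `steper` are the callbacks prepared earlier.
quant = QNQ(model, save_path, config_path, metrics, metrics_light, steper)

# First run: exits after writing a config YAML to `config_path`.
# Edit that file, then rerun to perform the quantization search.
quant.search()
```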
Operators supported
- Convolution Layers
- Conv
- ConvTranspose
- Pooling Layers
- MaxPool
- AveragePool
- AdaptiveAvgPool
- Activation
- Relu, Relu6
- PRelu, LeakyRelu
- LogSoftmax
- Normalization Layers
- BatchNorm
- LayerNorm
- Recurrent
- LSTM
- Linear Layers
- Linear
- Vision Layers
- Upsample
- Embedding
- Torch Function
- Add, Sum, Minus, DotMul, MatMul, Div
- Sqrt, Exp
- Sin, Cos
- SoftMax, Sigmoid, Tanh
- TorchTemplate, TorchDummy
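By analogy with the `TorchAdd` example above, other torch functions are replaced the same way. A hypothetical sketch (the actual wrapper class names live in `qnq.operators.torchfunc_ops`; `TorchMatMul` here is an assumption, not a confirmed name):

```python
import torch.nn as nn
from qnq.operators.torchfunc_ops import *

class DotProduct(nn.Module):
    def __init__(self):
        super().__init__()
        #! hypothetical wrapper name; check qnq.operators.torchfunc_ops
        self.torch_matmul = TorchMatMul()

    def forward(self, q, k):
        # Replaces a bare torch.matmul(q, k) call.
        return self.torch_matmul(q, k)
```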