
QNQ -- QNQ's not quantization

Description

This toolkit is for the Techart algorithm team to quantize the pretrained models of their custom neural networks. The toolkit is currently in beta; you can contact me by email (dongzhiwei2021@outlook.com) to request new ops or report bugs.

How to install

pip install qnq

How to quantize

  1. Prepare your model.

    1. Check whether your model contains non-class operators, such as torch.matmul.
    2. If it does, add from qnq.operators.torchfunc_ops import * to your code.
    3. Then replace each non-class operator with its class counterpart; you can refer to the following example (the added lines are marked #! add by dongz):
    import torch.nn as nn

    from qnq.operators.torchfunc_ops import *  # provides TorchAdd and the other function wrappers


    def conv3x3(inplanes, planes, stride=1):
        """3x3 convolution with padding, as in torchvision's ResNet."""
        return nn.Conv2d(inplanes, planes, kernel_size=3, stride=stride,
                         padding=1, bias=False)


    class BasicBlock(nn.Module):
        expansion = 1

        def __init__(self, inplanes, planes, stride=1, downsample=None):
            super(BasicBlock, self).__init__()
            self.conv1 = conv3x3(inplanes, planes, stride)
            self.bn1 = nn.BatchNorm2d(planes)
            self.relu1 = nn.ReLU(inplace=True)
            self.relu2 = nn.ReLU(inplace=True)
            self.conv2 = conv3x3(planes, planes)
            self.bn2 = nn.BatchNorm2d(planes)
            self.downsample = downsample
            self.stride = stride

            #! add by dongz
            # TorchAdd is the class version of the element-wise add, so QNQ can quantize it.
            self.torch_add = TorchAdd()

        def forward(self, x):
            identity = x

            out = self.conv1(x)
            out = self.bn1(out)
            out = self.relu1(out)

            out = self.conv2(out)
            out = self.bn2(out)

            if self.downsample is not None:
                identity = self.downsample(x)

            #! add by dongz
            out = self.torch_add(out, identity)
            # out += identity
            out = self.relu2(out)

            return out
    
  2. Prepare your loader.

    1. Your loader.__getitem__() should return a tuple like (data, label) or (data, index); QNQ uses loader.__getitem__()[0] to run forward passes through your model.
  3. Prepare pretrained checkpoints.

    1. Train your model and use torch.save() to save your checkpoints.
    2. Use checkpoints = torch.load(checkpoints_path) and model.load_state_dict(checkpoints) to load your checkpoints.
  4. Quantize

    1. Add from qnq import quantize
    2. Call quantize(model, bit_width, data_loader, path). An end-to-end sketch of steps 2-4 follows this list.
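
A minimal end-to-end sketch of steps 2-4, assuming a toy two-layer model, random tensors, and illustrative checkpoint/output paths and bit width; only quantize(model, bit_width, data_loader, path) and the (data, label) loader contract come from the steps above.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    from qnq import quantize

    # Toy model; substitute your own network (with non-class ops already replaced).
    model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))

    # Step 3: load a pretrained checkpoint (path is illustrative).
    checkpoints = torch.load("checkpoints/model.pth")
    model.load_state_dict(checkpoints)

    # Step 2: a loader whose __getitem__ returns (data, label); QNQ forwards item[0].
    dataset = TensorDataset(torch.randn(128, 16), torch.randint(0, 2, (128,)))
    data_loader = DataLoader(dataset, batch_size=32)

    # Step 4: quantize to 8 bits (bit width and output path are illustrative).
    quantize(model, 8, data_loader, "./qnq_out")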

How to eval with quantization

  1. In program
    1. quantize() turns on eval mode for the model, which automatically quantizes the activations; the weights are already fixed-point at this stage.
    2. Just call your original eval() routine (see the sketch after this list).
  2. Eval quantize.pth
    1. Coming soon!
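
Continuing the sketch above, evaluation after quantization is just your usual loop; a minimal example, assuming the same toy model and loader (the accuracy metric is illustrative):

    # After quantize() the model is already in eval mode with fixed-point weights,
    # so the ordinary evaluation loop works unchanged.
    correct = 0
    with torch.no_grad():
        for data, label in data_loader:
            pred = model(data).argmax(dim=1)
            correct += (pred == label).sum().item()
    print("accuracy:", correct / len(dataset))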

How to debug

  1. Call quantize(model, bit_width, data_loader, path, is_debug=True).
  2. Debug mode will plot statistics for every layer.

How QNQ works

Coming soon!

Operators supported

  • Convolution Layers
    • Conv
  • Pooling Layers
    • AveragePool
    • AdaptiveAvgPool
  • Activation
    • Relu
  • Normalization Layers
    • BatchNorm
  • Linear Layers
    • Linear
  • Torch Function
    • Add, Minus, DotMul, MatMul, Div
    • Sin, Cos
    • SoftMax
