Model Compression Toolkit (MCT)
Model Compression Toolkit (MCT) is an open-source project for optimizing neural network models for deployment on efficient, resource-constrained hardware. It provides researchers, developers, and engineers with tools for optimizing and deploying state-of-the-art neural networks on such hardware, primarily by applying quantization and pruning schemes to compress the networks.
Currently, the project supports hardware-friendly post-training quantization (HPTQ) with TensorFlow 2 and PyTorch [1].
MCT is developed by researchers and engineers working at Sony Semiconductor Israel.
For more information, please visit our project website.
Table of Contents
- Getting Started
- Supported Features
- Results
- Contributions
- License
- References
Getting Started
This section provides a quick start guide: installation from source or from PyPI, followed by a short usage example.
Installation
See the MCT install guide for installing the pip package or building from source.
From Source
```
git clone https://github.com/sony/model_optimization.git
python setup.py install
```
From PyPI - latest stable release
```
pip install model-compression-toolkit
```
A nightly package is also available (unstable):
```
pip install mct-nightly
```
To run MCT, one of the supported frameworks, TensorFlow or PyTorch, must be installed.
To use MCT with TensorFlow, install the packages: tensorflow, tensorflow-model-optimization.
To use MCT with PyTorch (experimental), install the package: torch.
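Both sets of framework dependencies can be installed with pip, for example:
```
pip install tensorflow tensorflow-model-optimization
pip install torch
```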
MCT is tested with:
- TensorFlow version 2.7
- PyTorch version 1.10.0
Usage Example
For an example of how to use post-training quantization with Keras, please use this link.
For an example using PyTorch (experimental), please use this link.
For more examples, please see the tutorials directory.
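In outline, a Keras post-training quantization run looks like the sketch below. It assumes the MCT 1.x entry point keras_post_training_quantization(model, representative_data_gen, n_iter), so treat it as illustrative and defer to the linked tutorials for the exact signature.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet import MobileNet

import model_compression_toolkit as mct

# Float Keras model to be quantized.
model = MobileNet()

# Representative dataset generator: each call returns one batch of inputs,
# which MCT uses to collect activation statistics for calibration.
# Random data is used here purely for illustration; use real samples.
def representative_data_gen():
    return [np.random.randn(1, 224, 224, 3)]

# Post-training quantization (assumed MCT 1.x entry point and signature).
quantized_model, quantization_info = mct.keras_post_training_quantization(
    model, representative_data_gen, n_iter=10)
```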
Supported Features
Quantization:
- Post Training Quantization for Keras models.
- Post Training Quantization for PyTorch models (experimental; see the sketch after this list).
- Gradient-based post-training quantization (Experimental, Keras only).
- Mixed-precision post-training quantization (Experimental).
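As referenced in the list above, a matching PyTorch sketch, assuming the experimental MCT 1.x entry point pytorch_post_training_quantization (analogous to the Keras one):

```python
import torch
from torchvision.models import mobilenet_v2

import model_compression_toolkit as mct

# Float PyTorch model to be quantized.
model = mobilenet_v2(pretrained=True)

# Representative dataset generator (random data, for illustration only).
def representative_data_gen():
    return [torch.randn(1, 3, 224, 224)]

# Post-training quantization (assumed MCT 1.x experimental entry point).
quantized_model, quantization_info = mct.pytorch_post_training_quantization(
    model, representative_data_gen, n_iter=10)
```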
Tensorboard Visualization (Experimental; see the logging sketch after this list):
- CS Analyzer: compare a compressed model with the original model to analyze large accuracy drops.
- Activation statistics and errors
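These visualizations are written to a log folder that Tensorboard then reads. A minimal sketch, assuming the MCT 1.x mct.set_log_folder call; consult the project documentation for the current API:

```python
import model_compression_toolkit as mct

# Point MCT's Tensorboard writer at a log directory (assumed MCT 1.x API).
mct.set_log_folder('/tmp/mct_logs')

# ... run post-training quantization as usual; statistics and comparison
# figures are written to the folder above ...

# Inspect the results with:
#   tensorboard --logdir /tmp/mct_logs
```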
Results
Keras
As part of the MCT library, we provide a set of example networks for image classification. These networks can serve as examples of how to use the package.
- Image classification example with MobileNet V1 on the ImageNet dataset
Network Name | Float Accuracy | 8-Bit Accuracy | Comments |
---|---|---|---|
MobileNetV1 [2] | 70.558 | 70.418 | |
For more results, please see [1].
PyTorch
We quantized classification networks from the torchvision library. The following table presents the ImageNet validation results for these models:
Network Name | Float Accuracy | 8-Bit Accuracy |
---|---|---|
MobileNet V2 [3] | 71.886 | 71.444 |
ResNet-18 [3] | 69.86 | 69.63 |
SqueezeNet 1.1 [3] | 58.128 | 57.678 |
Contributions
MCT aims to keep the project up to date and welcomes contributions from anyone.
You will find more information about contributions in the Contribution guide.
License
References
[1] Habi, H.V., Peretz, R., Cohen, E., Dikstein, L., Dror, O., Diamant, I., Jennings, R.H. and Netzer, A., 2021. HPTQ: Hardware-Friendly Post Training Quantization. arXiv preprint.
[2] MobileNet from Keras applications.
[3] Classification models from the torchvision library.