A Model Compression Toolkit for neural networks
Model Compression Toolkit (MCT)
Model Compression Toolkit (MCT) is an open-source project for optimizing neural network models to run on efficient, resource-constrained hardware. It provides researchers, developers, and engineers with tools for optimizing and deploying state-of-the-art neural networks on such hardware. Specifically, the project applies quantization and pruning schemes to compress neural networks.
Currently, this project supports hardware-friendly post-training quantization (HPTQ) with Tensorflow 2 and Pytorch [1].
The MCT project is developed by researchers and engineers working at Sony Semiconductor Israel.
For more information, please visit our project website.
Table of Contents
- Getting Started
- Supported Features
- Results
- Contributions
- License
- References
Getting Started
This section provides a quick getting-started guide. We begin with installation, either from source or via pip, and then give a short usage example.
Installation
See the MCT install guide for installing the pip package or building from source.
From Source
```
git clone https://github.com/sony/model_optimization.git
python setup.py install
```
From PyPI - latest stable release
```
pip install model-compression-toolkit
```
A nightly package is also available (unstable):
```
pip install mct-nightly
```
To run MCT, one of the supported frameworks, Tensorflow or Pytorch, needs to be installed.
To use MCT with Tensorflow, please install the following packages: tensorflow, tensorflow-model-optimization.
To use MCT with Pytorch (experimental), please install the following package: torch.
MCT is tested with:
- Tensorflow version 2.7
- Pytorch version 1.10.0
Usage Example
For an example of how to use post-training quantization with Keras, please use this link.
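As a quick illustration, here is a minimal sketch of Keras post-training quantization, assuming the v1.x top-level API (`mct.keras_post_training_quantization`) and using random data in place of a real representative dataset; verify the exact signature against the API docs:

```python
import numpy as np
from tensorflow.keras.applications.mobilenet import MobileNet
import model_compression_toolkit as mct

# Representative dataset generator: yields the batches used to calibrate
# activation quantization. Random data is a placeholder here; in practice,
# use samples drawn from the training distribution.
def representative_data_gen():
    return [np.random.random((1, 224, 224, 3))]

model = MobileNet()

# Post-training quantization; n_iter sets the number of calibration iterations.
quantized_model, quantization_info = mct.keras_post_training_quantization(
    model, representative_data_gen, n_iter=10)
```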
For an example using Pytorch (experimental), please use this link.
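A corresponding minimal sketch for Pytorch, assuming the experimental v1.x entry point `mct.pytorch_post_training_quantization` (again, check the API docs for the exact signature):

```python
import torch
from torchvision.models import mobilenet_v2
import model_compression_toolkit as mct

# Representative dataset generator: random tensors stand in for real
# calibration samples from the training distribution.
def representative_data_gen():
    return [torch.randn(1, 3, 224, 224)]

model = mobilenet_v2(pretrained=True)

# Experimental post-training quantization for Pytorch models.
quantized_model, quantization_info = mct.pytorch_post_training_quantization(
    model, representative_data_gen, n_iter=10)
```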
For more examples, please see the tutorials directory.
Supported Features
Quantization:
- Post Training Quantization for Keras models.
- Post Training Quantization for Pytorch models (experimental).
- Gradient-based post-training quantization (Experimental, Keras only).
- Mixed-precision post-training quantization (Experimental; see the sketch below).
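Mixed precision searches for a per-layer bit-width assignment under a resource target. The sketch below is a hedged illustration only: the entry point `keras_post_training_quantization_mixed_precision`, the `MixedPrecisionQuantizationConfig` and `KPI` objects, and the `target_kpi` keyword follow the v1.x API as best it can be reconstructed here, and should be verified against the API docs:

```python
import numpy as np
from tensorflow.keras.applications.mobilenet import MobileNet
import model_compression_toolkit as mct

def representative_data_gen():
    # Placeholder calibration data; use real samples in practice.
    return [np.random.random((1, 224, 224, 3))]

model = MobileNet()

# Configuration object controlling the mixed-precision bit-width search.
quant_config = mct.MixedPrecisionQuantizationConfig()

# Resource target: cap the quantized weights memory (in bytes). The 0.75
# factor (roughly 6 bits per weight on average) is purely illustrative.
target_kpi = mct.KPI(weights_memory=model.count_params() * 0.75)

# Assumed v1.x mixed-precision entry point (verify the exact signature).
quantized_model, quantization_info = mct.keras_post_training_quantization_mixed_precision(
    model, quant_config, representative_data_gen, target_kpi=target_kpi)
```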
Tensorboard Visualization (Experimental):
- Cosine-similarity (CS) analyzer: compare a compressed model with the original model to analyze large accuracy drops.
- Activation statistics and errors (see the sketch below).
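To inspect the collected statistics, MCT writes Tensorboard logs to a configurable folder. The helper name below, `set_log_folder`, matches later MCT releases and is an assumption for this version; check the API docs for the exact call:

```python
import model_compression_toolkit as mct

# Assumed logging helper (named set_log_folder in later MCT releases);
# points MCT's Tensorboard writer at a directory of your choice.
mct.set_log_folder('/tmp/mct_logs')

# After running quantization, launch Tensorboard on the same directory:
#   tensorboard --logdir /tmp/mct_logs
```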
Results
Keras
As part of the MCT library, we provide a set of example networks for image classification. These networks can serve as usage examples for the package.
- Image Classification Example with MobileNet V1 on the ImageNet dataset
| Network Name | Float Accuracy | 8-bit Accuracy | Comments |
|---|---|---|---|
| MobileNetV1 [2] | 70.558 | 70.418 | |
For more results, please see [1].
Pytorch
We quantized classification networks from the torchvision library. In the following table we present the ImageNet validation results for these models:
| Network Name | Float Accuracy | 8-bit Accuracy |
|---|---|---|
| MobileNet V2 [3] | 71.886 | 71.444 |
| ResNet-18 [3] | 69.86 | 69.63 |
| SqueezeNet 1.1 [3] | 58.128 | 57.678 |
Contributions
MCT welcomes contributions from anyone.
You will find more information about contributions in the Contribution guide.
License
References
[1] Habi, H.V., Peretz, R., Cohen, E., Dikstein, L., Dror, O., Diamant, I., Jennings, R.H. and Netzer, A., 2021. HPTQ: Hardware-Friendly Post Training Quantization. arXiv preprint.
[2] MobileNet from Keras Applications.
[3] Pre-trained classification models from the torchvision library.