A Model Compression Toolkit for neural networks
Model Compression Toolkit (MCT)
Model Compression Toolkit (MCT) is an open-source project for optimizing neural network models to run efficiently on constrained hardware. It provides researchers, developers, and engineers with tools for optimizing and deploying state-of-the-art neural networks, specifically by applying quantization and pruning schemes to compress them.
Currently, this project supports hardware-friendly post-training quantization (HPTQ) with Tensorflow 2 and Pytorch [1].
The MCT project is developed by researchers and engineers at Sony Semiconductor Israel.
For more information, please visit our project website.
Getting Started
This section provides a quick start guide. We begin with installation, either from source or from PyPI, and then provide a short usage example.
Installation
See the MCT install guide for installing the pip package or building from source.
From Source
git clone https://github.com/sony/model_optimization.git
python setup.py install
From PyPI - latest stable release
pip install model-compression-toolkit
A nightly package is also available (unstable):
pip install mct-nightly
To run MCT, one of the supported frameworks, Tensorflow or Pytorch, needs to be installed.
To use MCT with Tensorflow, please install the following packages: tensorflow, tensorflow-model-optimization.
To use MCT with Pytorch (experimental), please install the following package: torch.
MCT is tested with:
- Tensorflow version 2.7
- Pytorch version 1.10.0
Usage Example
For an example of post-training quantization with Keras, see this link; a minimal sketch also follows below.
For an example with Pytorch (experimental), see this link.
For more examples, please see the tutorials directory.
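The sketch below quantizes a pretrained Keras MobileNet with a random representative dataset. The call signature (keras_post_training_quantization with n_iter) follows the MCT documentation for this release line, but the API may change between versions, so check the docs of your installed version.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet import MobileNet
import model_compression_toolkit as mct

# Pretrained float model to compress.
model = MobileNet()

def representative_data_gen():
    # Stand-in calibration data; in practice, return batches sampled
    # from your training or validation set.
    return [np.random.random((1, 224, 224, 3))]

# Run post-training quantization with 10 calibration iterations.
quantized_model, quantization_info = mct.keras_post_training_quantization(
    model, representative_data_gen, n_iter=10)
```

The returned quantized model is a regular Keras model and can be evaluated against the float model like any other.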
Supported Features
Quantization:
- Post Training Quantization for Keras models.
- Post Training Quantization for Pytorch models (Experimental; see the sketch after this list).
- Gradient-based post-training quantization (Experimental, Keras only).
- Mixed-precision post-training quantization (Experimental).
Tensorboard Visualization (Experimental; logging setup is shown in the sketch below):
- CS Analyzer: compare a compressed model with the original model to analyze large accuracy drops.
- Activation statistics and errors
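The experimental Pytorch flow mirrors the Keras one. The sketch below quantizes a torchvision MobileNet V2; the pytorch_post_training_quantization entry point and the set_log_folder call for enabling Tensorboard logging are based on the MCT documentation for this release line and may differ between versions, so treat this as an illustration rather than a drop-in script.

```python
import numpy as np
from torchvision.models import mobilenet_v2
import model_compression_toolkit as mct

# Assumed logger call: write Tensorboard events (statistics, CS analysis)
# under this folder so they can be inspected with `tensorboard --logdir`.
mct.set_log_folder('/tmp/mct_logs')

# Pretrained float model to compress.
model = mobilenet_v2(pretrained=True)

def representative_data_gen():
    # Stand-in calibration data in NCHW layout; replace with real batches.
    return [np.random.random((1, 3, 224, 224))]

# Run the experimental Pytorch post-training quantization.
quantized_model, quantization_info = mct.pytorch_post_training_quantization(
    model, representative_data_gen, n_iter=10)
```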
Results
Keras
As part of the MCT library, we provide a set of example image classification networks that can be used as references when working with the package.
- Image Classification Example with MobileNet V1 on ImageNet dataset
Network Name | Float Accuracy (%) | 8-Bit Accuracy (%) | Comments |
---|---|---|---|
MobileNetV1 [2] | 70.558 | 70.418 | |
For more results, please see [1].
Pytorch
We quantized classification networks from the torchvision library. In the following table we present the ImageNet validation results for these models:
Network Name | Float Accuracy (%) | 8-Bit Accuracy (%) |
---|---|---|
MobileNet V2 [3] | 71.886 | 71.444 |
ResNet-18 [3] | 69.86 | 69.63 |
SqueezeNet 1.1 [3] | 58.128 | 57.678 |
Contributions
MCT aims to stay up to date and welcomes contributions from anyone.
You will find more information about contributing in the Contribution guide.
License
References
[1] Habi, H.V., Peretz, R., Cohen, E., Dikstein, L., Dror, O., Diamant, I., Jennings, R.H. and Netzer, A., 2021. HPTQ: Hardware-Friendly Post Training Quantization. arXiv preprint.
[2] MobileNet from Keras Applications.
[3] Image classification models from the torchvision library.