

Model Compression Toolkit (MCT)


Model Compression Toolkit (MCT) is an open-source project for optimizing neural network models for deployment on efficient, constrained hardware.
The project provides researchers, developers, and engineers with tools for optimizing and deploying state-of-the-art neural networks on such hardware.
Specifically, it applies constrained quantization and pruning schemes to a neural network.

Currently, the project only supports hardware-friendly post-training quantization (HPTQ) with TensorFlow 2 [1].

The MCT project is developed by researchers and engineers at Sony Semiconductor Israel.

For more information, please visit our project website.


Getting Started

This section provides a quick start guide. We begin with installation, either from source or from PyPI, and then give a short usage example.

Installation

See the MCT install guide for installing the pip package or building from source.

From Source

git clone https://github.com/sony/model_optimization.git
python setup.py install

From PyPi - latest stable release

pip install model-compression-toolkit

A nightly package is also available (unstable):

pip install mct-nightly
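
To verify that the installation succeeded, you can ask pip for the installed package metadata (use model-compression-toolkit for the stable release, or mct-nightly for the nightly build):

pip show model-compression-toolkit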

Usage Example

An example of post-training quantization with Keras is shown in the following code snippet.

import model_compression_toolkit as mct

# Set the batch size of the images at each calibration iteration.
batch_size = 50

# Create a representative data generator, which returns a list of images.
# Load a folder of images. 
folder = '/path/to/images/folder'

# The images can be preprocessed using a list of preprocessing functions.
def normalization(x):
    return (x - 127.5) / 127.5

# Create a FolderImageLoader instance which loads the images, preprocess them and enables you to sample batches of them.
image_data_loader = mct.FolderImageLoader(folder,
                                          preprocessing=[normalization],
                                          batch_size=batch_size)

# Create a callable representative dataset for calibration purposes.
# The function should be called without any arguments and should return a list of numpy arrays
# (one array per model input).
# For example: if the model has two input tensors - one with input shape of 32x32x3 and the second
# with input shape of 224x224x3 - and we calibrate the model using batches of 20 images,
# calling representative_data_gen() should return a list
# of two numpy.ndarray objects whose shapes are [(20, 32, 32, 3), (20, 224, 224, 3)].
def representative_data_gen() -> list:
    return [image_data_loader.sample()]


# Create a model to quantize. Any Keras model can be used; here we take MobileNet
# from Keras applications as an example.
from tensorflow.keras.applications.mobilenet import MobileNet
model = MobileNet()

# Quantize the model, using representative_data_gen to generate the calibration images.
# Set the number of calibration iterations to 10.
quantized_model, quantization_info = mct.keras_post_training_quantization(model,
                                                                          representative_data_gen,
                                                                          n_iter=10)
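
For the multi-input case described in the comments above, the generator simply returns one array per model input. A minimal sketch, assuming two hypothetical FolderImageLoader instances (image_data_loader_32 and image_data_loader_224) that sample batches of 20 images at the two resolutions:

# Hypothetical generator for a model with two inputs (32x32x3 and 224x224x3),
# calibrated with batches of 20 images. The two loaders are placeholders.
def multi_input_data_gen() -> list:
    return [image_data_loader_32.sample(),
            image_data_loader_224.sample()]

Note that the returned quantized_model is a Keras model, so it can be run like any other Keras model (for example, with quantized_model.predict) to compare its outputs against the float model.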

For more examples, please see the tutorials directory.

Supported Features

Quantization:

* Post Training Quantization
* Gradient-Based Post Training Quantization (Experimental)

Tensorboard Visualization (Experimental):

* CS Analyzer: compare a compressed model with the original model to analyze large accuracy drops.
* Activation statistics and errors
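
To inspect these visualizations, launch TensorBoard on the directory where MCT writes its event files. The path below is a placeholder; see the project documentation for how the log directory is configured:

tensorboard --logdir /path/to/mct_logs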

Note that we currently provide full support only for Keras layers; using native TensorFlow layers may lead to unexpected behavior. This limitation will be removed in future releases.

MCT is tested with TensorFlow version 2.5.

Tutorials and Results

As part of the MCT library, we provide a set of example networks for image classification. These networks can serve as usage examples for the package.

  • Image Classification Example with MobileNet V1 on the ImageNet dataset

    Network Name      Float Accuracy   8-Bit Accuracy   Comments
    MobileNetV1 [2]   70.558           70.418

For more results, please see [1].

Contributions

MCT aims to stay up to date and welcomes contributions from anyone.

You will find more information about contributions in the Contribution guide.

License

Apache License 2.0.

References

[1] Habi, H.V., Peretz, R., Cohen, E., Dikstein, L., Dror, O., Diamant, I., Jennings, R.H. and Netzer, A., 2021. HPTQ: Hardware-Friendly Post Training Quantization. arXiv preprint.

[2] MobileNet from Keras Applications.
