
Condensa: Programmable Model Compression Framework

Project description

A Programming System for Model Compression

Condensa is a framework for programmable model compression in Python. It comes with a set of built-in compression operators which may be used to compose complex compression schemes targeting specific combinations of DNN architecture, hardware platform, and optimization objective. To recover any accuracy lost during compression, Condensa uses a constrained optimization formulation of model compression and employs an Augmented Lagrangian-based algorithm as the optimizer.
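As an illustration of the intended workflow, the sketch below selects a pre-built pruning scheme, configures the Augmented Lagrangian-based (L-C) optimizer, and hands both to a Compressor together with the model, data loaders, and loss function. The class and argument names (schemes.Prune, condensa.opt.LC, condensa.Compressor) follow Condensa's documented examples, but exact signatures may differ between releases; treat this as an illustrative sketch and consult the documentation for the authoritative API.

import condensa
from condensa import schemes

# Assumes `model`, `trainloader`, `valloader`, `testloader`, and `criterion`
# (e.g., torch.nn.CrossEntropyLoss()) are already defined.

# Pre-built unstructured magnitude pruning scheme (illustrative density of 0.02).
scheme = schemes.Prune(0.02)

# Augmented Lagrangian-based (L-C) optimizer used to recover accuracy lost
# during compression; hyperparameter names and values are illustrative.
lc = condensa.opt.LC(steps=35,
                     l_optimizer=condensa.opt.lc.SGD,
                     l_optimizer_params={'momentum': 0.95},
                     lr=0.01,
                     mu_init=1e-3,
                     mu_multiplier=1.1)

# The Compressor drives the compression loop and returns the compressed model.
compressor = condensa.Compressor(lc, scheme, model,
                                 trainloader, testloader, valloader,
                                 criterion)
w = compressor.run()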

Status: Condensa is under active development, and bug reports, pull requests, and other feedback are all highly appreciated. See the Contributing section below for more details on how to contribute.

Supported Operators and Schemes

Condensa provides a set of pre-built compression schemes covering common use cases such as pruning and quantization. Each scheme is built using one or more compression operators, which may be combined in various ways to define your own custom schemes.
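For example, a custom scheme might compose pruning and quantization. The Compose, Prune, and Quantize names below follow Condensa's examples and are meant as an illustrative sketch; exact names may differ between versions.

import torch
from condensa import schemes

# Illustrative composite scheme: magnitude pruning followed by
# half-precision quantization (names assumed from Condensa's examples).
scheme = schemes.Compose([
    schemes.Prune(0.02),             # magnitude-based pruning (illustrative density)
    schemes.Quantize(torch.float16)  # quantize remaining weights to FP16
])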

Please refer to the documentation for a detailed description of available operators and schemes.

Prerequisites

Condensa requires:

  • A working Linux installation (we use Ubuntu 18.04)
  • NVIDIA drivers and CUDA 10+ for GPU support
  • Python 3.5 or newer
  • PyTorch 1.0 or newer

Installation

The most straightforward way of installing Condensa is via pip:

pip install condensa

Installation from Source

Retrieve the latest source code from the Condensa repository:

git clone https://github.com/NVlabs/condensa.git

Navigate to the source code directory and run the following:

pip install -e .

Test out the Installation

To check the installation, run the unit test suite:

bash run_all_tests.sh -v

Getting Started

The AlexNet Notebook contains a simple step-by-step walkthrough of compressing a pre-trained model using Condensa. Check out the examples folder for additional, more complex use cases (note: some of these require the torchvision package to be installed).
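If you plan to run the torchvision-based examples, a pre-trained model from torchvision can serve as the model to compress; for instance:

# Load a pre-trained AlexNet from torchvision as the compression target
# (requires the torchvision package).
import torchvision.models as models
model = models.alexnet(pretrained=True)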

Documentation

Documentation is available here. Please also check out the Condensa paper for a detailed description of Condensa's motivation, features, and performance results.

Contributing

We appreciate all contributions, including bug fixes, new features and documentation, and additional tutorials. You can initiate contributions via GitHub pull requests. When making code contributions, please follow the PEP 8 Python coding standard and provide unit tests for new features. Finally, make sure to sign off your commits using the -s flag or by adding Signed-off-by: Name <Email> to the commit message.

Citing Condensa

If you use Condensa for research, please consider citing the following paper:

@article{condensa2019,
    title = {A Programmable Approach to Model Compression},
    author = {Joseph, Vinu and Muralidharan, Saurav and Garg, Animesh and Garland, Michael and Gopalakrishnan, Ganesh},
    journal = {CoRR},
    volume = {1911.02497},
    year = {2019},
    url = {https://arxiv.org/abs/1911.02497}
}

Disclaimer

Condensa is a research prototype and not an official NVIDIA product. Many features are still experimental and yet to be properly documented.

Files for condensa, version 0.5.0b0

  Filename                            Size     File type  Python version
  condensa-0.5.0b0-py3-none-any.whl   36.0 kB  Wheel      py3
  condensa-0.5.0b0.tar.gz             21.1 kB  Source     None
