
Pruning and Quantization of ML models

Project description


Prune and Quantize ML models

PQuant is a library for training compressed machine learning models, developed at CERN as part of the Next Generation Triggers project.

Install via pip: pip install pquant-ml.

With TensorFlow: pip install pquant-ml[tensorflow].

With PyTorch: pip install pquant-ml[torch].

PQuant replaces the layers and activations it finds with a Compressed (for layers) or Quantized (for activations) variant. These variants automatically handle the quantization of weights, biases, and activations, as well as the pruning of weights. Both PyTorch and TensorFlow models are supported.

Layers that can be compressed

  • PQConv*D: Convolutional layers
  • PQAvgPool*D: Average pooling layers
  • PQBatchNorm*D: BatchNorm layers
  • PQDense: Linear (dense) layers
  • PQActivation: Activation layers (e.g. ReLU, Tanh)
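
To make the replacement idea concrete, here is a minimal PyTorch sketch of what a compressed linear variant and a replacement pass might look like. PQDenseSketch and replace_linear_layers are invented for illustration and are not PQuant's actual implementation; PQuant performs this substitution automatically, so user code like this is not required.

    import torch
    import torch.nn as nn

    class PQDenseSketch(nn.Module):
        """Illustrative stand-in for a compressed linear layer: it keeps the
        original weights and applies a pruning mask before the matmul.
        This is NOT PQuant's implementation, just the general idea."""
        def __init__(self, linear: nn.Linear):
            super().__init__()
            self.weight = nn.Parameter(linear.weight.detach().clone())
            self.bias = nn.Parameter(linear.bias.detach().clone())
            # A trainable pruning method would update this mask during training.
            self.register_buffer("mask", torch.ones_like(self.weight))

        def forward(self, x):
            return nn.functional.linear(x, self.weight * self.mask, self.bias)

    def replace_linear_layers(module: nn.Module) -> None:
        """Recursively swap nn.Linear children for the sketch variant."""
        for name, child in module.named_children():
            if isinstance(child, nn.Linear):
                setattr(module, name, PQDenseSketch(child))
            else:
                replace_linear_layers(child)

    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
    replace_linear_layers(model)
    print(model)  # the Linear layers are now PQDenseSketch wrappers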

The various pruning methods involve different training phases, such as a pre-training phase and a fine-tuning phase. PQuant provides a training function: the user supplies functions that train and validate a single epoch, and PQuant runs the full training loop, triggering each phase at the appropriate time.
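
The per-epoch functions the user supplies might look roughly like the sketch below. The exact signatures PQuant expects are an assumption here; consult the example notebook and documentation for the real interface.

    import torch

    def train_epoch(model, dataloader, optimizer, criterion, device):
        """Run one training epoch over the dataloader."""
        model.train()
        for x, y in dataloader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()

    def validate_epoch(model, dataloader, criterion, device):
        """Run one validation epoch and return the mean loss."""
        model.eval()
        total_loss, n_batches = 0.0, 0
        with torch.no_grad():
            for x, y in dataloader:
                x, y = x.to(device), y.to(device)
                total_loss += criterion(model(x), y).item()
                n_batches += 1
        return total_loss / max(n_batches, 1)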


Example

An example notebook can be found here. It covers:

  1. Creating a torch model and data loaders.
  2. Creating the training and validation functions.
  3. Loading a default configuration for a pruning method.
  4. Calling PQuant's training function with the configuration, the model, and the training and validation functions to train and compress the model.
  5. Creating a custom quantization and pruning configuration for a given model, e.g. disabling pruning for some layers or using different quantization bitwidths for different layers (a configuration sketch follows this list).
  6. Using the PQuant layers directly versus replacing the layers of an existing model.
  7. Using the fine-tuning platform.
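
As a rough illustration of step 5, a custom configuration could look something like the following. Every key name below is invented for this sketch and does not reflect PQuant's actual configuration schema; the default configurations shipped with the library show the real format.

    # Purely illustrative configuration shape -- the key names are
    # assumptions, not PQuant's actual schema.
    custom_config = {
        "pruning": {
            "method": "magnitude",         # assumed method name
            "sparsity": 0.8,               # target fraction of zeroed weights
            "disabled_layers": ["conv1"],  # layers to leave unpruned
        },
        "quantization": {
            "default_bitwidth": 8,               # bits for weights and activations
            "per_layer_bitwidth": {"fc_out": 4}, # per-layer overrides
        },
    }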

Pruning methods

A description of the pruning methods and their hyperparameters can be found here.

Quantization parameters

A description of the quantization parameters can be found here.

For detailed documentation, see the PQuantML documentation.

Authors

  • Roope Niemi (CERN)
  • Anastasiia Petrovych (CERN)
  • Arghya Das (Purdue University)
  • Enrico Lupi (CERN)
  • Chang Sun (Caltech)
  • Dimitrios Danopoulos (CERN)
  • Marlon Joshua Helbing
  • Mia Liu (Purdue University)
  • Michael Kagan (SLAC National Accelerator Laboratory)
  • Vladimir Loncar (CERN)
  • Maurizio Pierini (CERN)

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pquant_ml-0.0.4.tar.gz (1.3 MB, Source)

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

pquant_ml-0.0.4-py3-none-any.whl (88.0 kB, Python 3)

File details

Details for the file pquant_ml-0.0.4.tar.gz.

File metadata

  • Download URL: pquant_ml-0.0.4.tar.gz
  • Upload date:
  • Size: 1.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for pquant_ml-0.0.4.tar.gz:

  • SHA256: e4205837d80660e1edab4fc33043fa0e3a8103ca18d9a805eaf37e3ce1ffd5d7
  • MD5: 425665500e14e1ae80b80fa827551757
  • BLAKE2b-256: b1f221fe50072788da53d593080bdd5f82ad2fdbb1f74641a4359cfab47007fe

See more details on using hashes here.
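
For example, the SHA256 digest above can be checked with Python's standard hashlib module (assuming the archive has been downloaded to the current directory):

    import hashlib

    # Verify the downloaded sdist against the SHA256 digest listed above.
    expected = "e4205837d80660e1edab4fc33043fa0e3a8103ca18d9a805eaf37e3ce1ffd5d7"
    with open("pquant_ml-0.0.4.tar.gz", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    assert digest == expected, "SHA256 mismatch: re-download the file"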

Provenance

The following attestation bundles were made for pquant_ml-0.0.4.tar.gz:

Publisher: python-publish.yml on nroope/PQuant

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file pquant_ml-0.0.4-py3-none-any.whl.

File metadata

  • Download URL: pquant_ml-0.0.4-py3-none-any.whl
  • Upload date:
  • Size: 88.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for pquant_ml-0.0.4-py3-none-any.whl:

  • SHA256: 9c45f8670a4bbeea0e36b2635a736d134c03a8f3d99d37aed07d96672773d81f
  • MD5: 3eeec25602f42d51d14dd08878da0f3d
  • BLAKE2b-256: bf29244612676da51ffc6cb03024649fff8f051e77155ac38750bdd0999ed8ca

See more details on using hashes here.

Provenance

The following attestation bundles were made for pquant_ml-0.0.4-py3-none-any.whl:

Publisher: python-publish.yml on nroope/PQuant

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
