
Pruning and Quantization of ML models

Project description


Prune and Quantize ML models

PQuant is a library for training compressed machine learning models, developed at CERN as part of the Next Generation Triggers project.

PQuant replaces the layers and activations it finds with Compressed (for layers) or Quantized (for activations) variants. These variants automatically handle quantization of the weights, biases, and activations, and pruning of the weights. Both PyTorch and TensorFlow models are supported.
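PQuant's compressed layers handle this internally; as a rough illustration of what pruning and quantizing a weight tensor means, here is a generic magnitude-pruning plus uniform-quantization sketch in plain PyTorch. This is not PQuant code, and the function and its parameters are illustrative only:

```python
import torch

def prune_and_quantize(w, sparsity=0.5, bits=4):
    """Illustration only: magnitude-prune, then uniformly quantize a weight tensor."""
    # Pruning: zero out the smallest-magnitude fraction of the weights.
    k = int(sparsity * w.numel())
    if k > 0:
        threshold = w.abs().flatten().kthvalue(k).values
        w = torch.where(w.abs() > threshold, w, torch.zeros_like(w))
    # Quantization: snap the surviving weights onto a uniform grid of
    # 2**bits levels, sized to cover the largest weight magnitude.
    scale = w.abs().max() / (2 ** (bits - 1) - 1)
    if scale == 0:
        return w
    return torch.round(w / scale) * scale

w = torch.randn(64, 64)
wq = prune_and_quantize(w)
print((wq == 0).float().mean())  # fraction of zeroed weights
```

In practice libraries like PQuant apply this kind of transformation during training (with gradient-friendly approximations), rather than once after the fact.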

Layers that can be compressed: Conv2D and Linear layers, and Tanh and ReLU activations, for both TensorFlow and PyTorch. For PyTorch, Conv1D is also supported.


The various pruning methods involve different training steps, such as a pre-training step and a fine-tuning step. PQuant provides a training function: the user supplies functions that train and validate a single epoch, and PQuant runs the training while triggering the different training steps.

Example

An example notebook can be found here. It covers:

  1. Creation of a torch model and data loaders.
  2. Creation of the training and validation functions.
  3. Loading a default pruning configuration of a pruning method.
  4. Calling the training function of PQuant with the configuration, the model, and the training and validation functions to train and compress the model.
  5. Creating a custom quantization and pruning configuration for a given model (disable pruning for some layers, different quantization bitwidths for different layers).
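Steps 1 and 2 above can be sketched in plain PyTorch. The shapes of the model and of the per-epoch functions below are illustrative assumptions; the exact signatures PQuant expects are defined in the example notebook:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Step 1: a small model and a data loader over toy data.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
x, y = torch.randn(128, 8), torch.randint(0, 2, (128,))
train_loader = DataLoader(TensorDataset(x, y), batch_size=32)

# Step 2: the per-epoch functions the user hands to PQuant's
# training function. The argument list here is an assumption.
def train_epoch(model, loader, optimizer, criterion):
    model.train()
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

def validate_epoch(model, loader, criterion):
    model.eval()
    total = 0.0
    with torch.no_grad():
        for xb, yb in loader:
            total += criterion(model(xb), yb).item() * xb.size(0)
    return total / len(loader.dataset)

optimizer = torch.optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()
train_epoch(model, train_loader, optimizer, criterion)
val_loss = validate_epoch(model, train_loader, criterion)
```

Steps 3 to 5 (loading a pruning configuration and invoking PQuant's training function) are specific to PQuant's API and are shown in the notebook itself.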

Pruning methods

A description of the pruning methods and their hyperparameters can be found here.

Quantization parameters

A description of the quantization parameters can be found here.

Installation

Run pip install . for a regular install, or pip install -e . to install as a local editable package. Running the code also requires HGQ2, which for now is only available as a local install: download its repository and install it from source.

Authors

  • Roope Niemi (CERN)
  • Anastasiia Petrovych (CERN)
  • Chang Sun (Caltech)
  • Michael Kagan (SLAC National Accelerator Laboratory)
  • Vladimir Loncar (CERN)

Download files

Download the file for your platform.

Source Distribution

pquant_ml-0.0.1.tar.gz (492.3 kB)


Built Distribution


pquant_ml-0.0.1-py3-none-any.whl (60.0 kB)


File details

Details for the file pquant_ml-0.0.1.tar.gz.

File metadata

  • Download URL: pquant_ml-0.0.1.tar.gz
  • Upload date:
  • Size: 492.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for pquant_ml-0.0.1.tar.gz:

  • SHA256: 501f5ed00429b0e0fce93e5cac0d1129bbf180a12a8ba45b5902fd3a936f5a2c
  • MD5: a1aa4b6800b5353919c9a9474c7dac59
  • BLAKE2b-256: 7895044555083e2b5fa05614feb490911b6214e2cb0a144a1df5f2574a772e1b

See more details on using hashes here.

Provenance

The following attestation bundles were made for pquant_ml-0.0.1.tar.gz:

Publisher: python-publish.yml on nroope/PQuant

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file pquant_ml-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: pquant_ml-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 60.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for pquant_ml-0.0.1-py3-none-any.whl:

  • SHA256: 3b691a3a67a524dce647489a6e91cb962068c80c823158ee9c2335226c48e7cf
  • MD5: 8e68f74f163c1e61966e51130a0741ac
  • BLAKE2b-256: 0477b7969b7bf6c5f9e3c5f34b68105068b2460bb277c2b5729e37321ebc0c94

See more details on using hashes here.

Provenance

The following attestation bundles were made for pquant_ml-0.0.1-py3-none-any.whl:

Publisher: python-publish.yml on nroope/PQuant

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
