

SparseML Callback

All credit goes to Sean Narenthiran; I am merely using his code for demonstration purposes.

SparseML allows you to leverage sparsity to improve inference times substantially.

SparseML requires you to fine-tune your model with the SparseMLCallback + a SparseML Recipe. By training with the SparseMLCallback, you can leverage the DeepSparse engine to exploit the introduced sparsity, resulting in large performance improvements.

The SparseML callback requires the model to be ONNX exportable. This can be tricky for models that require dynamic sequence lengths, such as RNNs.
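
As a quick sanity check before training, you can try exporting your LightningModule to ONNX up front. The sketch below uses a hypothetical ToyModel and illustrative file names; the dynamic_axes keyword (forwarded to torch.onnx.export) is the usual way to declare dimensions that vary at runtime.

    import torch
    from torch import nn
    from pytorch_lightning import LightningModule

    # hypothetical minimal module, just to illustrate the export check
    class ToyModel(LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(128, 10)
            self.example_input_array = torch.randn(1, 128)

        def forward(self, x):
            return self.layer(x)

    model = ToyModel()

    # plain export, using `example_input_array`
    model.to_onnx('toy_model.onnx')

    # declare a dynamic dimension; extra kwargs are forwarded to torch.onnx.export
    model.to_onnx(
        'toy_model_dynamic.onnx',
        input_names=['input'],
        dynamic_axes={'input': {0: 'batch'}},
    )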

To leverage SparseML & DeepSparse, follow the steps below:

Choose your Sparse Recipe

To choose a recipe, have a look at recipes and Sparse Zoo.

It may be easier to infer a recipe via the Sparsify UI dashboard, which allows you to tweak and configure a recipe interactively. This requires importing an ONNX model, which you can obtain from your LightningModule by calling model.to_onnx(output_path).
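
For example, reusing the ToyModel sketch above (the output file name is illustrative):

    model = ToyModel()

    # export the model to ONNX so it can be imported into Sparsify
    model.to_onnx('model_for_sparsify.onnx')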

Train with SparseMLCallback

    from pytorch_lightning import LightningModule, Trainer
    from pl_bolts.callbacks import SparseMLCallback

    class MyModel(LightningModule):
        ...

    model = MyModel()

    # the callback applies the sparsification recipe while the model trains
    trainer = Trainer(
        callbacks=SparseMLCallback(recipe_path='recipe.yaml')
    )
    trainer.fit(model)

Export to ONNX!

The helper function handles any quantization/pruning internally and exports the model to ONNX format. Note that this assumes you have either implemented the example_input_array property on the model or provide a sample batch, as shown below.

    import torch

    model = MyModel()
    ...

    # export the ONNX model, using `model.example_input_array`
    SparseMLCallback.export_to_sparse_onnx(model, 'onnx_export/')

    # export the ONNX model, providing a sample batch explicitly
    SparseMLCallback.export_to_sparse_onnx(
        model, 'onnx_export/', sample_batch=torch.randn(1, 128, 128, dtype=torch.float32)
    )

Once your model has been exported, you can import it into either Sparsify or DeepSparse.
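
As a rough sketch of the DeepSparse side (assuming the deepsparse package is installed; the ONNX file name and input shape are illustrative and should match your export above):

    import numpy as np
    from deepsparse import compile_model

    # compile the exported ONNX model for the DeepSparse engine
    engine = compile_model('onnx_export/model.onnx', batch_size=1)

    # run inference on a dummy batch matching the sample batch used at export time
    inputs = [np.random.randn(1, 128, 128).astype(np.float32)]
    outputs = engine.run(inputs)
    print(outputs[0].shape)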
