Project description

DeepSparse

Neural network inference engine that delivers GPU-class performance for sparsified models on CPUs


Overview

The DeepSparse Engine is a CPU runtime that delivers GPU-class performance by taking advantage of sparsity (read more about sparsification here) within neural networks to reduce the compute required and to accelerate memory-bound workloads. It is focused on model deployment and scaling machine learning pipelines, fitting seamlessly into your existing deployments as an inference backend.

The GitHub repository includes package APIs along with examples to quickly get started benchmarking and running inference on sparse models.

Highlights

  • ResNet-50, batch size 64: ORT 296 images/sec vs. DeepSparse 2305 images/sec on 24 cores
  • YOLOv3, batch size 64: PyTorch 6.9 images/sec vs. DeepSparse 46.5 images/sec

Tutorials

Installation

This repository is tested on Python 3.6+ and ONNX 1.5.0+. Installing in a virtual environment is recommended to keep your system in order.

Install with pip using:

pip install deepsparse
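Once installed, a quick import confirms the wheel matches your platform; a minimal sketch, assuming the package exposes a top-level __version__ attribute as most releases do:

import deepsparse

# Print the installed engine version to confirm the install succeeded
print(deepsparse.__version__)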

Hardware Support

The DeepSparse Engine is validated to work on x86 Intel and AMD CPUs running Linux operating systems. Mac and Windows require running Linux in a Docker container or a virtual machine.

Running on a CPU with AVX-512 instructions available is highly recommended so that the optimal algorithms are enabled.

Here is a table detailing specific support for some algorithms over different microarchitectures:

x86 Extension                  | Microarchitectures                              | Activation Sparsity | Kernel Sparsity | Sparse Quantization
AMD AVX2                       | Zen 2, Zen 3                                    | not supported       | optimized       | not supported
Intel AVX2                     | Haswell, Broadwell, and newer                   | not supported       | optimized       | not supported
Intel AVX-512                  | Skylake, Cannon Lake, and newer                 | optimized           | optimized       | emulated
Intel AVX-512 VNNI (DL Boost)  | Cascade Lake, Ice Lake, Cooper Lake, Tiger Lake | optimized           | optimized       | optimized
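If you are unsure what your CPU supports, the compiled engine exposes a cpu_vnni flag (the same attribute used in the Quick Tour below); a minimal sketch, assuming you have any ONNX model on disk to compile:

from deepsparse import compile_model

# Compile any available ONNX model, then inspect the engine's CPU capability flag;
# "model.onnx" is only a placeholder path for this sketch
engine = compile_model("model.onnx", batch_size=1)
if engine.cpu_vnni:
    print("AVX-512 VNNI detected: sparse quantized models run fully optimized")
else:
    print("VNNI not detected: quantized speedups may be limited or emulated")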

Compatibility

The DeepSparse Engine ingests models in the ONNX format, giving it compatibility with PyTorch, TensorFlow, Keras, and many other frameworks that support ONNX export. This reduces preparing your trained model for inference to a single export step.
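For example, a PyTorch model can be exported with torch.onnx.export before handing it to DeepSparse; a minimal sketch, where the torchvision model, file name, and input shape are illustrative assumptions:

import torch
import torchvision

# Export a pretrained torchvision ResNet-50 to ONNX with an opset DeepSparse accepts
model = torchvision.models.resnet50(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet50.onnx", opset_version=11)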

Quick Tour

To expedite inference and benchmarking on real models, we include the sparsezoo package. SparseZoo hosts inference-optimized models, trained on repeatable sparsification recipes using state-of-the-art techniques from SparseML.

Quickstart with SparseZoo ONNX Models

ResNet-50 Dense

Here is how to quickly perform inference with DeepSparse Engine on a pre-trained dense ResNet-50 from SparseZoo.

from deepsparse import compile_model
from sparsezoo.models import classification

batch_size = 64

# Download model and compile as optimized executable for your machine
model = classification.resnet_50()
engine = compile_model(model, batch_size=batch_size)

# Fetch sample input and predict output using engine
inputs = model.data_inputs.sample_batch(batch_size=batch_size)
outputs, inference_time = engine.timed_run(inputs)
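Since timed_run returns both the outputs and the measured wall time, you can report throughput directly; a small sketch continuing the example above, assuming inference_time is reported in seconds:

# Report latency and throughput for the batch that was just run
print(f"batch of {batch_size} in {inference_time:.4f} s "
      f"({batch_size / inference_time:.1f} images/sec)")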

ResNet-50 Sparsified

When exploring available optimized models, you can use the Zoo.search_sparse_models utility to find models that share a base.

Try this on the dense ResNet-50 to see what is available:

from sparsezoo import Zoo
from sparsezoo.models import classification

model = classification.resnet_50()
print(Zoo.search_sparse_models(model))

Output:

[
    Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned-conservative), 
    Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned-moderate), 
    Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned_quant-moderate), 
    Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet-augmented/pruned_quant-aggressive)
]

We can see there are two pruned versions targeting FP32 and two pruned, quantized versions targeting INT8. The conservative, moderate, and aggressive tags recover to 100%, >=99%, and <99% of baseline accuracy respectively.

For a version of ResNet-50 that recovers close to the baseline and is very performant, choose the pruned_quant-moderate model. This model will run nearly 7x faster than the baseline model on a compatible CPU (with the VNNI instruction set enabled). For hardware compatibility, see the Hardware Support section.

from deepsparse import compile_model
import numpy

batch_size = 64
sample_inputs = [numpy.random.randn(batch_size, 3, 224, 224).astype(numpy.float32)]

# run baseline benchmarking
engine_base = compile_model(
    model="zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none", 
    batch_size=batch_size,
)
benchmarks_base = engine_base.benchmark(sample_inputs)
print(benchmarks_base)

# run sparse benchmarking
engine_sparse = compile_model(
    model="zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned_quant-moderate", 
    batch_size=batch_size,
)
if not engine_sparse.cpu_vnni:
    print("WARNING: VNNI instructions not detected, quantization speedup not well supported")
benchmarks_sparse = engine_sparse.benchmark(sample_inputs)
print(benchmarks_sparse)

print(f"Speedup: {benchmarks_sparse.items_per_second / benchmarks_base.items_per_second:.2f}x")

Quickstart with Custom ONNX Models

We accept ONNX files for custom models, too. Simply plug in your model to compare performance with other solutions.

> wget https://github.com/onnx/models/raw/main/vision/classification/mobilenet/model/mobilenetv2-7.onnx
Saving to: ‘mobilenetv2-7.onnx’

from deepsparse import compile_model
from deepsparse.utils import generate_random_inputs

onnx_filepath = "mobilenetv2-7.onnx"
batch_size = 16

# Generate random sample input
inputs = generate_random_inputs(onnx_filepath, batch_size)

# Compile and run
engine = compile_model(onnx_filepath, batch_size)
outputs = engine.run(inputs)
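To sanity-check the results against a stock ONNX runtime, you can feed the same inputs through onnxruntime and compare outputs; a hedged sketch, assuming onnxruntime is installed and that the generated inputs are ordered to match the model's declared inputs:

import numpy
import onnxruntime

# Run the same random inputs through onnxruntime and compare with DeepSparse outputs
session = onnxruntime.InferenceSession(onnx_filepath)
feed = {inp.name: arr for inp, arr in zip(session.get_inputs(), inputs)}
ort_outputs = session.run(None, feed)

for ds_out, ort_out in zip(outputs, ort_outputs):
    print("max abs diff:", numpy.max(numpy.abs(ds_out - ort_out)))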

Compatibility/Support Notes

  • ONNX version 1.5-1.7
  • ONNX opset version 11+ (you can check your model's versions with the snippet below)
  • ONNX IR version has not been tested at this time
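A quick way to check your model against these notes is to read its versions with the onnx package; a minimal sketch using the MobileNetV2 file downloaded above:

import onnx

# Load the model and print its IR version and the opset of each imported domain
model = onnx.load("mobilenetv2-7.onnx")
print("IR version:", model.ir_version)
print("Opsets:", [(imp.domain or "ai.onnx", imp.version) for imp in model.opset_import])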

For a more in-depth read on available APIs and workflows, check out the examples and DeepSparse Engine documentation.

Resources

Learning More

Release History

Official builds are hosted on PyPI.

Additionally, more information can be found via GitHub Releases.

License

The project's binary containing the DeepSparse Engine is licensed under the Neural Magic Engine License.

Example files and scripts included in this repository are licensed under the Apache License Version 2.0 as noted.

Community

Contribute

We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! Learn how here.

Join

For user help or questions about DeepSparse, sign up or log in to our Deep Sparse Community Slack. We are growing the community member by member and are happy to see you there. Bugs, feature requests, or additional questions can also be posted to our GitHub Issue Queue.

You can get the latest news, webinar and event invites, research papers, and other ML Performance tidbits by subscribing to the Neural Magic community.

For more general questions about Neural Magic, please fill out this form.

Cite

Find this project useful in your research or other communications? Please consider citing:

@InProceedings{pmlr-v119-kurtz20a,
    title = {Inducing and Exploiting Activation Sparsity for Fast Inference on Deep Neural Networks}, 
    author = {Kurtz, Mark and Kopinsky, Justin and Gelashvili, Rati and Matveev, Alexander and Carr, John and Goin, Michael and Leiserson, William and Moore, Sage and Nell, Bill and Shavit, Nir and Alistarh, Dan}, 
    booktitle = {Proceedings of the 37th International Conference on Machine Learning}, 
    pages = {5533--5543}, 
    year = {2020}, 
    editor = {Hal Daumé III and Aarti Singh}, 
    volume = {119}, 
    series = {Proceedings of Machine Learning Research}, 
    address = {Virtual}, 
    month = {13--18 Jul}, 
    publisher = {PMLR}, 
    pdf = {http://proceedings.mlr.press/v119/kurtz20a/kurtz20a.pdf},
    url = {http://proceedings.mlr.press/v119/kurtz20a.html}, 
    abstract = {Optimizing convolutional neural networks for fast inference has recently become an extremely active area of research. One of the go-to solutions in this context is weight pruning, which aims to reduce computational and memory footprint by removing large subsets of the connections in a neural network. Surprisingly, much less attention has been given to exploiting sparsity in the activation maps, which tend to be naturally sparse in many settings thanks to the structure of rectified linear (ReLU) activation functions. In this paper, we present an in-depth analysis of methods for maximizing the sparsity of the activations in a trained neural network, and show that, when coupled with an efficient sparse-input convolution algorithm, we can leverage this sparsity for significant performance gains. To induce highly sparse activation maps without accuracy loss, we introduce a new regularization technique, coupled with a new threshold-based sparsification method based on a parameterized activation function called Forced-Activation-Threshold Rectified Linear Unit (FATReLU). We examine the impact of our methods on popular image classification models, showing that most architectures can adapt to significantly sparser activation maps without any accuracy loss. Our second contribution is showing that these compression gains can be translated into inference speedups: we provide a new algorithm to enable fast convolution operations over networks with sparse activations, and show that it can enable significant speedups for end-to-end inference on a range of popular models on the large-scale ImageNet image classification task on modern Intel CPUs, with little or no retraining cost.}
}

Download files

Download the file for your platform.

Source Distribution

  • deepsparse-0.11.0.tar.gz (38.4 MB, Source)

Built Distributions

  • deepsparse-0.11.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (38.7 MB): CPython 3.9, manylinux: glibc 2.17+ x86-64
  • deepsparse-0.11.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (38.7 MB): CPython 3.8, manylinux: glibc 2.17+ x86-64
  • deepsparse-0.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (38.7 MB): CPython 3.7m, manylinux: glibc 2.17+ x86-64
  • deepsparse-0.11.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (38.7 MB): CPython 3.6m, manylinux: glibc 2.17+ x86-64

File details

deepsparse-0.11.0.tar.gz (38.4 MB, Source)
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.24.0 setuptools/45.2.0 requests-toolbelt/0.9.1 tqdm/4.56.0 CPython/3.8.10
  • SHA256: 8c21801c7e3b701156d4debf6d824f16622030e2ac0be82b1091dd53b3038a8b
  • MD5: 1a5e7b5701663152cf02a844cb7760b9
  • BLAKE2b-256: 16ba696c0cfb9aa47a617373153665ddd4c6f6e5b67ef01ba7f911d66549656e

deepsparse-0.11.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  • SHA256: 893a98e668139076ab4225f456162c5a969bd83f857790c166d7860928486b18
  • MD5: b7f8ee87a53f802f84ee2e052bb89666
  • BLAKE2b-256: d76bc38f1ff047aab658fcb7e2ae6223ba5ea92572871b78c8b6d6f76aadbb5d

deepsparse-0.11.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  • SHA256: 5bf5964086c7449285a72f937537182ab4ea77b698b6b7db35b7074ef84e63d7
  • MD5: d4954785d3b369067aea3578428bed76
  • BLAKE2b-256: b7fb7217e5fb019a8f492aef3c065ccbf8b0b707eab9a8d29322bc503fb2d246

deepsparse-0.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  • SHA256: dca4017cf041f506f5cdf67e451bfc83b75a84f5ded52b1e4a7454ec7d630c4e
  • MD5: 8e55582663f60e78be820775ff78cbb5
  • BLAKE2b-256: 9f1c7f327a88fe5b9ed701e836b97461005d7c71d9ea4f0c9813ec2c06eb2ed6

deepsparse-0.11.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  • SHA256: 81281168f76f1cf73c5c2ed6586958fd07c3d7d1c45de2469b778298b405c5cd
  • MD5: d2b0be71e7cefa6bffd709eebf1bbf0c
  • BLAKE2b-256: e979b22058ee890351777d40b46dece0007d4d5a1cf7be11790cdde03440c7f1
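To verify a downloaded file against the published digests, a standard hashlib check is enough; a minimal sketch using the source distribution listed above:

import hashlib

# Compute the SHA256 digest of the downloaded sdist and compare it to the published value
expected = "8c21801c7e3b701156d4debf6d824f16622030e2ac0be82b1091dd53b3038a8b"
with open("deepsparse-0.11.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print("match:", digest == expected)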
