Intel® Extension for PyTorch*
Intel® Extension for PyTorch* extends PyTorch with optimizations for an extra performance boost on Intel hardware. Most of the optimizations will eventually be included in stock PyTorch releases; the intention of the extension is to deliver up-to-date features and optimizations for PyTorch on Intel hardware. Examples include AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX).
Intel® Extension for PyTorch* is loaded as a Python module for Python programs or linked as a C++ library for C++ programs. Users can enable it dynamically in a script by importing intel_extension_for_pytorch. It covers optimizations for both imperative mode and graph mode. Optimized operators and kernels are registered through the PyTorch dispatching mechanism and are accelerated by the native vectorization and matrix calculation features of Intel hardware. During execution, Intel® Extension for PyTorch* intercepts invocations of ATen operators and replaces the original ones with the optimized ones. In graph mode, further operator fusions are applied manually by Intel engineers or through a tool named oneDNN Graph to reduce operator/kernel invocation overhead and thus increase performance (see the sketch below).
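For graph mode, a common flow is to trace the optimized model with TorchScript so that fused kernels are picked up at execution time. Below is a minimal sketch, assuming torchvision is installed and using ResNet-50 purely for illustration:
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(pretrained=True).eval()
model = ipex.optimize(model)
data = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    # Tracing produces a TorchScript graph; freezing enables further
    # graph-level optimizations such as operator fusion.
    traced = torch.jit.trace(model, data)
    traced = torch.jit.freeze(traced)
    traced(data)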
More detailed tutorials are available at the Intel® Extension for PyTorch* online documentation website.
Installation
You can use either of the following two commands to install Intel® Extension for PyTorch*.
python -m pip install intel_extension_for_pytorch
python -m pip install intel_extension_for_pytorch -f https://software.intel.com/ipex-whl-stable
Note: Intel® Extension for PyTorch* has a PyTorch version requirement. More detailed information and additional installation methods can be found in the Installation Guide.
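As a quick sanity check (a minimal sketch, assuming both packages are already installed), each package reports its version, which can be compared against the compatibility information in the Installation Guide:
import torch
import intel_extension_for_pytorch as ipex

# The extension's version is aligned with the PyTorch release it targets,
# e.g. 1.12.100 targets PyTorch 1.12.
print(torch.__version__)
print(ipex.__version__)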
Getting Started
Minor code changes are required for users to get started with Intel® Extension for PyTorch*. Both PyTorch imperative mode and TorchScript mode are supported. You just need to import the Intel® Extension for PyTorch* package and apply its optimize function to the model object. For a training workload, the optimize function also needs to be applied to the optimizer object, as shown in the sketch below.
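A minimal sketch of the training case, assuming torchvision is installed and using ResNet-50 with SGD purely for illustration; optimize returns the prepared model and optimizer as a pair:
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(pretrained=True)
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# For training workloads, pass the optimizer as well; both objects are returned.
model, optimizer = ipex.optimize(model, optimizer=optimizer)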
The following code snippet shows inference code with the FP32 data type. More examples, including training and C++ examples, are available on the Example page.
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(pretrained=True)
model.eval()
data = torch.rand(1, 3, 224, 224)

# The channels-last memory format generally performs better on Intel hardware.
model = model.to(memory_format=torch.channels_last)
model = ipex.optimize(model)
data = data.to(memory_format=torch.channels_last)

with torch.no_grad():
    model(data)
Model Zoo
Use cases that have already been optimized by Intel engineers are available at the Model Zoo for Intel® Architecture. A number of PyTorch use cases for benchmarking are also available on the GitHub page. You can get performance benefits out of the box by simply running the scripts in the Model Zoo.
License
Apache License, Version 2.0, as found in the LICENSE file.
Security
See Intel's Security Center for information on how to report a potential security issue or vulnerability.
See also: Security Policy
Project details
Download files
Download the file for your platform.
Built Distributions
File details
Details for the file intel_extension_for_pytorch-1.12.100-cp310-cp310-manylinux2014_x86_64.whl.
File metadata
- Download URL: intel_extension_for_pytorch-1.12.100-cp310-cp310-manylinux2014_x86_64.whl
- Upload date:
- Size: 41.2 MB
- Tags: CPython 3.10
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.8.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 25f8f0c98b7f9e303223a8ea67080c07c695ffef29b667acbaa84b9550fbd6b9
MD5 | 8144664124a7c90c8cbd26c6718389da
BLAKE2b-256 | edfefaa8ef0324e48bb26d2785925174d7f37399177d65fcd69c7eefda01bd3a
File details
Details for the file intel_extension_for_pytorch-1.12.100-cp39-cp39-manylinux2014_x86_64.whl.
File metadata
- Download URL: intel_extension_for_pytorch-1.12.100-cp39-cp39-manylinux2014_x86_64.whl
- Upload date:
- Size: 36.7 MB
- Tags: CPython 3.9
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.8.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | e3d0b339cb7a404bc2b2cf6e5487d90641dea863e5c3362a8a905b3526022f2d
MD5 | 3f104c4f1bf9897e39d1209d9a7e60c6
BLAKE2b-256 | 613ff11ad76b4fa607110910839145f6a42f1da3e18c75bc3e17679351f269c3
File details
Details for the file intel_extension_for_pytorch-1.12.100-cp38-cp38-manylinux2014_x86_64.whl.
File metadata
- Download URL: intel_extension_for_pytorch-1.12.100-cp38-cp38-manylinux2014_x86_64.whl
- Upload date:
- Size: 41.5 MB
- Tags: CPython 3.8
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.8.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | ff5008b04958e7897569a84474fe155e402517f59487a3d36332860d2144fa4d
MD5 | 1197d3241e3c48661933594c986454ec
BLAKE2b-256 | 28ba75f29bb3ab965c082a7e544729d5148b745e72a6b7d357589a623d59e89b
File details
Details for the file intel_extension_for_pytorch-1.12.100-cp37-cp37m-manylinux2014_x86_64.whl.
File metadata
- Download URL: intel_extension_for_pytorch-1.12.100-cp37-cp37m-manylinux2014_x86_64.whl
- Upload date:
- Size: 41.5 MB
- Tags: CPython 3.7m
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.8.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 74a1168a8c595cedb0f5d31ead13616291eacdb07d6298ae2cfcb4890e04559d
MD5 | b25729ad0f9916b18d253630da5e98e0
BLAKE2b-256 | a5c8b73e40d0446706393ce29dcdd2f15fab4d0e69f1172fad9e370c73be3ed7