Intel Extension for PyTorch

Reason this release was yanked: for security.

Project description

Intel® Extension for PyTorch*

Intel® Extension for PyTorch* extends PyTorch with optimizations for an extra performance boost on Intel hardware. Most of the optimizations will eventually be included in stock PyTorch releases; the intention of the extension is to deliver up-to-date features and optimizations for PyTorch on Intel hardware. Examples include AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX).

Intel® Extension for PyTorch* is loaded as a Python module for Python programs or linked as a C++ library for C++ programs. Users can enable it dynamically in a script by importing intel_extension_for_pytorch. It covers optimizations for both imperative mode and graph mode. Optimized operators and kernels are registered through the PyTorch dispatching mechanism. These operators and kernels are accelerated by the native vectorization and matrix-calculation features of Intel hardware. During execution, Intel® Extension for PyTorch* intercepts invocations of ATen operators and replaces the original ones with the optimized ones. In graph mode, further operator fusions are applied manually by Intel engineers or through a tool named oneDNN Graph to reduce operator/kernel invocation overhead and thus increase performance.

More detailed tutorials are available on the Intel® Extension for PyTorch* online documentation website.

Installation

You can use either of the following two commands to install Intel® Extension for PyTorch*.

python -m pip install intel_extension_for_pytorch
python -m pip install intel_extension_for_pytorch -f https://software.intel.com/ipex-whl-stable

Note: Intel® Extension for PyTorch* has a PyTorch version requirement. Please check the detailed version compatibility information in the Installation Guide.

More installation methods can be found in the Installation Guide.

Getting Started

Minor code changes are required for users to get started with Intel® Extension for PyTorch*. Both PyTorch imperative mode and TorchScript mode are supported. You just need to import the Intel® Extension for PyTorch* package and apply its optimize function to the model object. If it is a training workload, the optimize function also needs to be applied to the optimizer object.

The following code snippet shows inference code with the FP32 data type. More examples, including training and C++ examples, are available on the Example page.

import torch
import torchvision.models as models

model = models.resnet50(pretrained=True)
model.eval()
data = torch.rand(1, 3, 224, 224)

import intel_extension_for_pytorch as ipex
model = model.to(memory_format=torch.channels_last)
model = ipex.optimize(model)
data = data.to(memory_format=torch.channels_last)

with torch.no_grad():
  model(data)

Model Zoo

Use cases that have already been optimized by Intel engineers are available at the Model Zoo for Intel® Architecture. A number of PyTorch use cases for benchmarking are also available on the GitHub page. You can get performance benefits out of the box by simply running the scripts in the Model Zoo.

License

Apache License, Version 2.0, as found in the LICENSE file.

Security

See Intel's Security Center for information on how to report a potential security issue or vulnerability.

See also: Security Policy

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

File details

Details for the file intel_extension_for_pytorch-1.12.0-cp310-cp310-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for intel_extension_for_pytorch-1.12.0-cp310-cp310-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 e7912ad9a553c63daf9279539975c533e5f047bc8f9abb52fd2d7f4ce252296f
MD5 3d442fb01b19f2fc51fc9c499c98a496
BLAKE2b-256 8dca646b8775188e186f10214160d9313b167d48ecc6b9c38411eac72571bc78

See more details on using hashes here.
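The hashes above can be used to check that a downloaded wheel has not been corrupted or tampered with. A minimal sketch using only the standard library (the file path and expected digest are examples; substitute the values for the wheel you actually downloaded):

```python
import hashlib


def sha256_of(path):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# Example usage against the cp310 wheel listed above (hypothetical local path):
# expected = "e7912ad9a553c63daf9279539975c533e5f047bc8f9abb52fd2d7f4ce252296f"
# actual = sha256_of("intel_extension_for_pytorch-1.12.0-cp310-cp310-manylinux2014_x86_64.whl")
# assert actual == expected, "hash mismatch: do not install this file"
```

Alternatively, pip can enforce hashes itself via a requirements file and its hash-checking mode (--require-hashes).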

File details

Details for the file intel_extension_for_pytorch-1.12.0-cp39-cp39-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for intel_extension_for_pytorch-1.12.0-cp39-cp39-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 6f14f876201b0b4c42016ecd7d09ef1e7c23d8ba0387286b73f3e69fe6302c35
MD5 f0890f8b11350ca56a2b5bf2a365dcb4
BLAKE2b-256 8e26126d7fdd0f3da7379a95648310a548f548445398821f1402f09c4105b0b1

See more details on using hashes here.

File details

Details for the file intel_extension_for_pytorch-1.12.0-cp38-cp38-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for intel_extension_for_pytorch-1.12.0-cp38-cp38-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 d22a6d4948d0a853ab245ba835febf6667066fb6af8f36e15c2672ad34ae3128
MD5 b3a0e3a3ecd09a99de9c8f243c22a8db
BLAKE2b-256 fa2aa16022066884895e00476f396175f8bb23d71cc66799296b36cd6c4fe977

See more details on using hashes here.

File details

Details for the file intel_extension_for_pytorch-1.12.0-cp37-cp37m-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for intel_extension_for_pytorch-1.12.0-cp37-cp37m-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 1c63e5f74ffba7a21f48cb7ed315de425bae87fa35a2d389c05ebea58d262cab
MD5 d9f2e8fc62de5dd91391657fdc28ac67
BLAKE2b-256 97dba7212f5809f33a10c1434a00fbea304aafc2090493e36e46c4a2ab80af78

See more details on using hashes here.
