Intel® Extension for PyTorch*

Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch* xpu device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs with PyTorch*.

Intel® Extension for PyTorch* provides optimizations for both eager mode and graph mode. Compared to eager mode, however, graph mode in PyTorch* normally yields better performance thanks to optimization techniques such as operation fusion, and Intel® Extension for PyTorch* amplifies them with more comprehensive graph optimizations. We therefore recommend taking advantage of Intel® Extension for PyTorch* with TorchScript whenever your workload supports it. You can convert a model with either the torch.jit.trace() function or the torch.jit.script() function; based on our evaluation, torch.jit.trace() supports more workloads, so we recommend it as your first choice.

The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts users can enable it dynamically by importing intel_extension_for_pytorch.

  • Check the CPU tutorial for detailed information on Intel® Extension for PyTorch* for Intel® CPUs. Source code is available on the master branch.
  • Check the GPU tutorial for detailed information on Intel® Extension for PyTorch* for Intel® GPUs. Source code is available on the xpu-master branch.

Installation

CPU version

You can use either of the following two commands to install the Intel® Extension for PyTorch* CPU version.

python -m pip install intel_extension_for_pytorch
python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu

Note: Intel® Extension for PyTorch* has a PyTorch version requirement. Please check the detailed information in the installation guide below.

More installation methods can be found in the CPU Installation Guide.

GPU version

You can install Intel® Extension for PyTorch* for GPU via the commands below.

python -m pip install torch==1.13.0a0 -f https://developer.intel.com/ipex-whl-stable-xpu
python -m pip install intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu

Note: The patched PyTorch 1.13.0a0 is currently required to work with Intel® Extension for PyTorch* on Intel® graphics cards.

More installation methods can be found in the GPU Installation Guide.

Getting Started

Minor code changes are required for users to get started with Intel® Extension for PyTorch*. Both PyTorch imperative mode and TorchScript mode are supported. You just need to import the Intel® Extension for PyTorch* package and apply its optimize function to the model object. For a training workload, the optimize function also needs to be applied to the optimizer object.

The following code snippet shows an inference example with the FP32 data type. More examples on CPU, including training and C++ examples, are available on the CPU Example page. More examples on GPU are available on the GPU Example page.

Inference on CPU

import torch
import torchvision.models as models

model = models.resnet50(pretrained=True)
model.eval()
data = torch.rand(1, 3, 224, 224)

# Apply Intel® Extension for PyTorch* optimizations to the model.
import intel_extension_for_pytorch as ipex
model = ipex.optimize(model)

with torch.no_grad():
    model(data)

Inference on GPU

import torch
import torchvision.models as models

model = models.resnet50(pretrained=True)
model.eval()
data = torch.rand(1, 3, 224, 224)

# Move the model and input to the Intel GPU, then apply optimizations.
import intel_extension_for_pytorch as ipex
model = model.to('xpu')
data = data.to('xpu')
model = ipex.optimize(model)

with torch.no_grad():
    model(data)

Model Zoo

Use cases that have already been optimized by Intel engineers are available at the Model Zoo for Intel® Architecture. A number of PyTorch use cases for benchmarking are also available on the GitHub page. You can get performance benefits out of the box by simply running the scripts in the Model Zoo.

License

Apache License, Version 2.0, as found in the LICENSE file.

Security

See Intel's Security Center for information on how to report a potential security issue or vulnerability.

See also: Security Policy
