
Intel® Extension for PyTorch*

Reason this release was yanked: for security.

Project description

Intel® Extension for PyTorch*

Intel® Extension for PyTorch* extends PyTorch* with up-to-date feature optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch* xpu device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs with PyTorch*.

Intel® Extension for PyTorch* provides optimizations for both eager mode and graph mode. Compared to eager mode, graph mode in PyTorch* normally yields better performance through optimization techniques such as operation fusion, and Intel® Extension for PyTorch* amplifies these with more comprehensive graph optimizations. We therefore recommend taking advantage of Intel® Extension for PyTorch* with TorchScript whenever your workload supports it. You can run with either the torch.jit.trace() function or the torch.jit.script() function; based on our evaluation, torch.jit.trace() supports more workloads, so we recommend it as your first choice.
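As an illustration of the trace-then-freeze TorchScript flow recommended above, here is a minimal sketch using a hypothetical tiny model in plain PyTorch (with the extension installed, you would typically apply ipex.optimize(model) before tracing):

```python
import torch
import torch.nn as nn

# Hypothetical tiny model standing in for a real workload.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()
example = torch.rand(1, 16)

with torch.no_grad():
    traced = torch.jit.trace(model, example)  # record the ops executed for this input
    traced = torch.jit.freeze(traced)         # inline weights and enable fusion passes
    out = traced(example)
```

Freezing after tracing lets graph optimizations such as operation fusion see the weights as constants.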

The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts, users can enable it dynamically by importing intel_extension_for_pytorch.

In the current technological landscape, Generative AI (GenAI) workloads and models have gained widespread attention and popularity. Large Language Models (LLMs) have emerged as the dominant models driving these GenAI applications. Starting with 2.1.0, Intel® Extension for PyTorch* introduces specific optimizations for certain LLM models. Check LLM optimizations for details.

  • Check the CPU tutorial for detailed information about Intel® Extension for PyTorch* for Intel® CPUs. Source code is available at the master branch.
  • Check the GPU tutorial for detailed information about Intel® Extension for PyTorch* for Intel® GPUs. Source code is available at the xpu-master branch.

Installation

CPU version

You can use either of the following two commands to install the Intel® Extension for PyTorch* CPU version.

python -m pip install intel_extension_for_pytorch
python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu

Note: Intel® Extension for PyTorch* has a PyTorch version requirement. Please check the detailed information via the URL below.

More installation methods can be found at CPU Installation Guide.

Compilation instructions for the latest CPU code base (master branch) can be found at the Installation Guide.

GPU version

You can install Intel® Extension for PyTorch* for GPU via the command below.

python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://developer.intel.com/ipex-whl-stable-xpu

Note: The patched PyTorch 2.0.1a0 is required to work with Intel® Extension for PyTorch* on Intel® graphics cards for now.

More installation methods can be found at GPU Installation Guide.

Compilation instructions for the latest GPU code base (xpu-master branch) can be found at the Installation Guide.

Getting Started

Minor code changes are required for users to get started with Intel® Extension for PyTorch*. Both PyTorch imperative mode and TorchScript mode are supported. You just need to import the Intel® Extension for PyTorch* package and apply its optimize function to the model object. If it is a training workload, the optimize function also needs to be applied to the optimizer object.

The following code snippet shows an inference code with FP32 data type. More examples on CPU, including training and C++ examples, are available at CPU Example page. More examples on GPU are available at GPU Example page.

NOTE: More detailed information about torch.compile() with ipex backend can be found at Tutorial features page.

Inference on CPU

import torch
import torchvision.models as models

model = models.resnet50(pretrained=True)
model.eval()
data = torch.rand(1, 3, 224, 224)

import intel_extension_for_pytorch as ipex
model = ipex.optimize(model)

with torch.no_grad():
  model(data)

Inference on GPU

import torch
import torchvision.models as models

model = models.resnet50(pretrained=True)
model.eval()
data = torch.rand(1, 3, 224, 224)

import intel_extension_for_pytorch as ipex
model = model.to('xpu')
data = data.to('xpu')
model = ipex.optimize(model)

with torch.no_grad():
  model(data)

Model Zoo

Use cases that have already been optimized by Intel engineers are available at Model Zoo for Intel® Architecture. A number of PyTorch use cases for benchmarking are also available on the GitHub page. You can get performance benefits out of the box by simply running scripts in the Model Zoo.

License

Apache License, Version 2.0. As found in LICENSE file.

Security

See Intel's Security Center for information on how to report a potential security issue or vulnerability.

See also: Security Policy

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

File details

Details for the file intel_extension_for_pytorch-2.1.0-cp311-cp311-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for intel_extension_for_pytorch-2.1.0-cp311-cp311-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 6d28cba6fda34f6d3e32d3c306bc302bcc024cf4fd39544ffb8e5c0259d36dac
MD5 27adeb0c1d243016a08874d15b9cfae5
BLAKE2b-256 35cf83e01fe43576314db4993088b0603725d9213801747fca91c481ae110af1

See more details on using hashes here.

File details

Details for the file intel_extension_for_pytorch-2.1.0-cp310-cp310-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for intel_extension_for_pytorch-2.1.0-cp310-cp310-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 409f48afbd42d3b02960f8d7a39b1ccbd44c357a0cf380f7abd376741ee0ab8d
MD5 48b52690526217a5494c130ca6876289
BLAKE2b-256 21ae463530edc41f8e32ed97c7e31c8c801da9d7cdd6681f9f4d203a9fcd842a

See more details on using hashes here.

File details

Details for the file intel_extension_for_pytorch-2.1.0-cp39-cp39-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for intel_extension_for_pytorch-2.1.0-cp39-cp39-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 bb1f7741cf19a5481c6a26e1cf1c4f591a95dd86b524c03354cce7e40f972c59
MD5 027a84b9802637d70b50f8fd94ab226e
BLAKE2b-256 159fa1851017352146024001ed629200ed3554d11a48cca52446d63852bcbd2d

See more details on using hashes here.

File details

Details for the file intel_extension_for_pytorch-2.1.0-cp38-cp38-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for intel_extension_for_pytorch-2.1.0-cp38-cp38-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 d9e90dc015b4ede83aeebb927382c21f4a1345bb6577a29dac787d69de3bbc61
MD5 598939740b1edc1780c8649a74ca5bfa
BLAKE2b-256 362b22adaa5698ff77c37cbb07522f882dc1bb2aa061256e5f6c0df687ed8c3f

See more details on using hashes here.
