Intel® Extension for PyTorch*
CPU 💻main branch | 🌱Quick Start | 📖Documentations | 🏃Installation | 💻LLM Example
GPU 💻main branch | 🌱Quick Start | 📖Documentations | 🏃Installation | 💻LLM Example
Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* xpu device.
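For example, here is a minimal sketch of GPU acceleration through the xpu device, assuming an XPU-enabled build of the extension and an Intel discrete GPU; the two-layer model is a hypothetical stand-in for any torch.nn.Module:

```python
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device

# Hypothetical model; any torch.nn.Module works the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
).eval()
data = torch.randn(64, 1024)

# Move the model and input to the Intel discrete GPU, then apply
# the extension's device-specific optimizations.
model = model.to("xpu")
data = data.to("xpu")
model = ipex.optimize(model, dtype=torch.bfloat16)

# Run inference under autocast so ops execute in BF16 where supported.
with torch.no_grad(), torch.xpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
    out = model(data)
```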
Intel® Extension for PyTorch* provides optimizations for both eager mode and graph mode. However, compared with eager mode, graph mode in PyTorch* normally yields better performance through optimization techniques such as operation fusion, and Intel® Extension for PyTorch* amplifies them with more comprehensive graph optimizations. Both the PyTorch* TorchScript and TorchDynamo graph modes are supported. With TorchScript, we recommend torch.jit.trace() as the preferred option, as it generally supports a wider range of workloads than torch.jit.script().
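As a sketch of the recommended TorchScript path, the snippet below traces a torchvision ResNet-50 (an assumed example workload; any traceable module works the same way) after applying the extension's eager-mode optimizations:

```python
import torch
import intel_extension_for_pytorch as ipex
import torchvision.models as models  # assumed example workload

model = models.resnet50(weights=None).eval()
example_input = torch.randn(1, 3, 224, 224)

# Eager-mode optimizations first ...
model = ipex.optimize(model)

# ... then graph mode via TorchScript; trace() is preferred over script().
with torch.no_grad():
    traced = torch.jit.trace(model, example_input)
    traced = torch.jit.freeze(traced)  # fold constants to enable more fusion
    out = traced(example_input)
```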
ipex.llm - Large Language Models (LLMs) Optimization
In the current technological landscape, Generative AI (GenAI) workloads and models have gained widespread attention and popularity. Large Language Models (LLMs) have emerged as the dominant models driving these GenAI applications. Starting from 2.1.0, specific optimizations for certain LLM models are introduced in the Intel® Extension for PyTorch*. Check LLM optimizations for details.
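A minimal sketch of applying these optimizations, assuming the Hugging Face transformers library is installed and using EleutherAI/gpt-j-6b from the verified list below (exact arguments may vary between releases):

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6b"  # a model from the verified list below
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
).eval()
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Apply the LLM-specific optimizations introduced in 2.1.0.
model = ipex.llm.optimize(model, dtype=torch.bfloat16, inplace=True)

inputs = tokenizer("What is PyTorch?", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```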
Optimized Model List
| MODEL FAMILY | MODEL NAME (Huggingface hub) | FP32 | BF16 | Static quantization INT8 | Weight only quantization INT8 | Weight only quantization INT4 |
|---|---|---|---|---|---|---|
| LLAMA | meta-llama/Llama-2-7b-hf | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
| LLAMA | meta-llama/Llama-2-13b-hf | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
| LLAMA | meta-llama/Llama-2-70b-hf | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
| GPT-J | EleutherAI/gpt-j-6b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
| GPT-NEOX | EleutherAI/gpt-neox-20b | 🟩 | 🟨 | 🟨 | 🟩 | 🟨 |
| DOLLY | databricks/dolly-v2-12b | 🟩 | 🟨 | 🟨 | 🟩 | 🟨 |
| FALCON | tiiuae/falcon-40b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
| OPT | facebook/opt-30b | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
| OPT | facebook/opt-1.3b | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
| Bloom | bigscience/bloom-1b7 | 🟩 | 🟨 | 🟩 | 🟩 | 🟨 |
| CodeGen | Salesforce/codegen-2B-multi | 🟩 | 🟩 | 🟨 | 🟩 | 🟩 |
| Baichuan | baichuan-inc/Baichuan2-7B-Chat | 🟩 | 🟩 | 🟩 | 🟩 | |
| Baichuan | baichuan-inc/Baichuan2-13B-Chat | 🟩 | 🟩 | 🟩 | 🟩 | |
| Baichuan | baichuan-inc/Baichuan-13B-Chat | 🟩 | 🟨 | 🟩 | 🟩 | |
| ChatGLM | THUDM/chatglm3-6b | 🟩 | 🟩 | 🟨 | 🟩 | |
| ChatGLM | THUDM/chatglm2-6b | 🟩 | 🟩 | 🟨 | 🟩 | |
| GPTBigCode | bigcode/starcoder | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
| T5 | google/flan-t5-xl | 🟩 | 🟩 | 🟨 | 🟩 | |
| Mistral | mistralai/Mistral-7B-v0.1 | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
| MPT | mosaicml/mpt-7b | 🟩 | 🟩 | 🟨 | 🟩 | 🟩 |
- 🟩 signifies that the model can perform well with good accuracy (<1% difference compared with FP32).
- 🟨 signifies that the model can perform well, although accuracy may not be ideal (>1% difference compared with FP32).
Note: The verified models above (including other models in the same model families, like "codellama/CodeLlama-7b-hf" from the LLAMA family) are well supported with all optimizations, such as indirect-access KV cache, fused RoPE, and prepacked TPP Linear (FP32/BF16). Work is in progress to better support the models in the table with various data types, and more models will be optimized in the future.
Support
The team tracks bugs and enhancement requests using GitHub issues. Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.
Intel® AI Reference Models
Use cases that have already been optimized by Intel engineers are available at Intel® AI Reference Models (formerly Model Zoo). A number of PyTorch* use cases for benchmarking are also available on the GitHub page. You can get performance benefits out of the box by simply running the scripts in the Reference Models.
License
Apache License, Version 2.0, as found in the LICENSE file.
Security
See Intel's Security Center for information on how to report a potential security issue or vulnerability.
See also: Security Policy