
The Optimum library is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party libraries from hardware partners and interface with their specific functionality.


Optimum Intel

🤗 Optimum Intel is the interface between the 🤗 Transformers and Diffusers libraries and the different tools and libraries provided by Intel to accelerate end-to-end pipelines on Intel architectures.

OpenVINO is an open-source toolkit that enables high-performance inference on Intel CPUs, GPUs, and dedicated DL inference accelerators (see the full list of supported devices). It comes with a set of tools to optimize your models with compression techniques such as quantization, pruning and knowledge distillation. Optimum Intel provides a simple interface to optimize your Transformers and Diffusers models, convert them to the OpenVINO Intermediate Representation (IR) format and run inference using OpenVINO Runtime.

Installation

To install the latest release of 🤗 Optimum Intel with the corresponding required dependencies, you can use pip as follows:

python -m pip install --upgrade --upgrade-strategy eager "optimum-intel[openvino]"

The --upgrade --upgrade-strategy eager options ensure that optimum-intel and its dependencies are upgraded to their latest versions.

We recommend creating a virtual environment and upgrading pip with python -m pip install --upgrade pip.

Optimum Intel is a fast-moving project, and you may want to install from source with the following command:

python -m pip install "optimum-intel[openvino]"@git+https://github.com/huggingface/optimum-intel.git

Quick tour

OpenVINO

Below are examples of how to use OpenVINO and its NNCF framework to accelerate inference.

Export:

You can export your model to the OpenVINO IR format with the CLI:

optimum-cli export openvino --model meta-llama/Meta-Llama-3-8B ov_llama/
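
The same export can also be done from Python by loading the checkpoint through the corresponding OVModelForXxx class; here is a minimal sketch, assuming the OVModelForCausalLM class and the export=True argument from optimum.intel (the output directory is illustrative):

from optimum.intel import OVModelForCausalLM

# Convert the original checkpoint to the OpenVINO IR format on the fly
model = OVModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", export=True)

# Save the resulting OpenVINO model so it can be reloaded later without re-exporting
model.save_pretrained("ov_llama/")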

You can also apply 8-bit weight-only quantization when exporting your model: the linear, embedding and convolution weights of the model will be quantized to INT8, while the activations are kept in floating-point precision.

optimum-cli export openvino --model meta-llama/Meta-Llama-3-8B --weight-format int8 ov_llama_int8/
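
The same weight-only quantization can be requested from Python when loading the model; a minimal sketch, assuming the OVWeightQuantizationConfig class and the quantization_config argument of from_pretrained from optimum.intel:

from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

# Quantize linear, embedding and convolution weights to INT8 during export;
# activations stay in floating-point precision
quantization_config = OVWeightQuantizationConfig(bits=8)
model = OVModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    export=True,
    quantization_config=quantization_config,
)
model.save_pretrained("ov_llama_int8/")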

Quantization in hybrid mode can be applied to a Stable Diffusion pipeline during model export. This applies hybrid post-training quantization to the UNet and weight-only quantization to the remaining pipeline components. In hybrid mode, the weights of MatMul and Embedding layers are quantized, as well as the activations of other layers.

optimum-cli export openvino --model stabilityai/stable-diffusion-2-1 --dataset conceptual_captions --weight-format int8 ov_model_sd/
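
Once exported, the pipeline can be loaded and run like a regular Diffusers pipeline; a minimal sketch, assuming the OVStableDiffusionPipeline class from optimum.intel (the prompt and output file are illustrative):

from optimum.intel import OVStableDiffusionPipeline

# Load the hybrid-quantized pipeline exported above
pipeline = OVStableDiffusionPipeline.from_pretrained("ov_model_sd/")

prompt = "sailing ship in a storm by Rembrandt"
image = pipeline(prompt).images[0]
image.save("ship.png")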

For quantization of both weights and activations, you can find more information in the documentation.

Inference:

To load a model and run inference with OpenVINO Runtime, you can just replace your AutoModelForXxx class with the corresponding OVModelForXxx class.

- from transformers import AutoModelForSeq2SeqLM
+ from optimum.intel import OVModelForSeq2SeqLM
  from transformers import AutoTokenizer, pipeline

  model_id = "echarlaix/t5-small-openvino"
- model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
+ model = OVModelForSeq2SeqLM.from_pretrained(model_id)
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  pipe = pipeline("translation_en_to_fr", model=model, tokenizer=tokenizer)
  results = pipe("He never went out without a book under his arm, and he often came back with two.")

  [{'translation_text': "Il n'est jamais sorti sans un livre sous son bras, et il est souvent revenu avec deux."}]
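
The same substitution works for models exported with the CLI; for example, here is a minimal sketch generating text with the Llama model exported to ov_llama/ above (it assumes that export step was run and that the tokenizer was saved alongside the model):

from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_dir = "ov_llama/"
model = OVModelForCausalLM.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

inputs = tokenizer("The best thing about OpenVINO is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))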

Quantization:

Post-training static quantization can also be applied. Here is an example of how to apply static quantization to a Whisper model, using the LibriSpeech dataset for the calibration step.

from optimum.intel import OVModelForSpeechSeq2Seq, OVQuantizationConfig

model_id = "openai/whisper-tiny"
q_config = OVQuantizationConfig(dtype="int8", dataset="librispeech", num_samples=50)
q_model = OVModelForSpeechSeq2Seq.from_pretrained(model_id, quantization_config=q_config)

# The directory where the quantized model will be saved
save_dir = "nncf_results"
q_model.save_pretrained(save_dir)
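
The quantized model can later be reloaded from that directory and used for transcription; a minimal sketch with a transformers pipeline (the processor is taken from the original checkpoint, and sample.wav is an illustrative audio file):

from optimum.intel import OVModelForSpeechSeq2Seq
from transformers import AutoProcessor, pipeline

# Reload the quantized OpenVINO model from disk
q_model = OVModelForSpeechSeq2Seq.from_pretrained("nncf_results")
processor = AutoProcessor.from_pretrained("openai/whisper-tiny")

asr = pipeline(
    "automatic-speech-recognition",
    model=q_model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)
print(asr("sample.wav"))  # path to an audio file (illustrative)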

You can find more information in the documentation.

Running the examples

Check out the notebooks directory to see how 🤗 Optimum Intel can be used to optimize models and accelerate inference.

Do not forget to install requirements for every example:

cd <example-folder>
pip install -r requirements.txt

Gaudi

To train your model on Intel Gaudi AI Accelerators (HPU), check out Optimum Habana, which provides a set of tools enabling easy model loading, training and inference in single- and multi-HPU settings for different downstream tasks. After training your model, feel free to submit it to the Intel leaderboard, which is designed to evaluate, score, and rank open-source LLMs that have been pre-trained or fine-tuned on Intel hardware. Models submitted to the leaderboard are evaluated on the Intel Developer Cloud. The evaluation platform consists of Gaudi Accelerators and Xeon CPUs running benchmarks from the EleutherAI Language Model Evaluation Harness.

