Optimum Habana

🤗 Optimum Habana is the interface between the 🤗 Transformers and Diffusers libraries and Habana's Gaudi processor (HPU). It provides a set of tools enabling easy model loading, training and inference on single- and multi-HPU settings for different downstream tasks. The list of officially validated models and tasks is available in the Validated Models section below. Users can try other models and tasks with only a few changes.

What is a Habana Processing Unit (HPU)?

Quote from the Hugging Face blog post:

Habana Gaudi training solutions, which power Amazon’s EC2 DL1 instances and Supermicro’s X12 Gaudi AI Training Server, deliver price/performance up to 40% lower than comparable training solutions and enable customers to train more while spending less. The integration of ten 100 Gigabit Ethernet ports onto every Gaudi processor enables system scaling from 1 to thousands of Gaudis with ease and cost-efficiency. Habana’s SynapseAI® is optimized—at inception—to enable Gaudi performance and usability, supports TensorFlow and PyTorch frameworks, with a focus on computer vision and natural language processing applications.

Install

To install the latest release of this package:

pip install optimum[habana]

To use DeepSpeed on HPUs, you also need to run the following command:

pip install git+https://github.com/HabanaAI/DeepSpeed.git@1.7.0
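Since GaudiTrainingArguments inherits from the Transformers TrainingArguments, DeepSpeed can then typically be enabled by pointing the deepspeed argument at a DeepSpeed JSON configuration. A minimal sketch (the output directory, the DeepSpeed config path and the Gaudi configuration name are placeholders):

from optimum.habana import GaudiTrainingArguments

# Sketch only: the paths and the Gaudi configuration name below are
# placeholders; `deepspeed` is inherited from Transformers' TrainingArguments.
training_args = GaudiTrainingArguments(
    output_dir="./results",
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name="Habana/bert-base-uncased",
    deepspeed="ds_config.json",
)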

Optimum Habana is a fast-moving project, and you may want to install it from source:

pip install git+https://github.com/huggingface/optimum-habana.git

Alternatively, you can install the package without pip as follows:

git clone https://github.com/huggingface/optimum-habana.git
cd optimum-habana
python setup.py install

Last but not least, don't forget to install the requirements for every example you want to run:

cd <example-folder>
pip install -r requirements.txt

How to use it?

Quick Start

🤗 Optimum Habana was designed with one goal in mind: make training and evaluation straightforward for any 🤗 Transformers and 🤗 Diffusers user while leveraging the complete power of Gaudi processors.

Transformers Interface

There are two main classes one needs to know:

  • GaudiTrainer: the trainer class that takes care of compiling (lazy or eager mode) and distributing the model to run on HPUs, and of performing training and evaluation.
  • GaudiConfig: the class that allows configuring Habana Mixed Precision and deciding whether optimized operators and optimizers should be used.

The GaudiTrainer is very similar to the 🤗 Transformers Trainer, and adapting a script that uses the Trainer to make it work with Gaudi mostly consists of swapping the Trainer class for the GaudiTrainer one. That is how most of the example scripts were adapted from their original counterparts.

Original script:

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
  # training arguments...
)

# A lot of code here

# Initialize our Trainer
trainer = Trainer(
    model=model,
    args=training_args,  # Original training arguments.
    train_dataset=train_dataset if training_args.do_train else None,
    eval_dataset=eval_dataset if training_args.do_eval else None,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=data_collator,
)

Transformed version that can run on Gaudi:

from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

training_args = GaudiTrainingArguments(
  # same training arguments...
  use_habana=True,
  use_lazy_mode=True,  # whether to use lazy or eager mode
  gaudi_config_name=path_to_gaudi_config,
)

# A lot of the same code as the original script here

# Initialize our Trainer
trainer = GaudiTrainer(
    model=model,
    # You can manually specify the Gaudi configuration to use with
    # gaudi_config=my_gaudi_config
    args=training_args,
    train_dataset=train_dataset if training_args.do_train else None,
    eval_dataset=eval_dataset if training_args.do_eval else None,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=data_collator,
)

where gaudi_config_name is the name of a model from the Hub (Gaudi configurations are stored in model repositories). You can also give the path to a custom Gaudi configuration written in a JSON file such as this one:

{
  "use_habana_mixed_precision": true,
  "hmp_opt_level": "O1",
  "hmp_is_verbose": false,
  "use_fused_adam": true,
  "use_fused_clip_norm": true,
  "hmp_bf16_ops": [
    "add",
    "addmm",
    "bmm",
    "div",
    "dropout",
    "gelu",
    "iadd",
    "linear",
    "layer_norm",
    "matmul",
    "mm",
    "rsub",
    "softmax",
    "truediv"
  ],
  "hmp_fp32_ops": [
    "embedding",
    "nll_loss",
    "log_softmax"
  ]
}

If you prefer to instantiate a Gaudi configuration to work on it before giving it to the trainer, you can do it as follows:

gaudi_config = GaudiConfig.from_pretrained(
    gaudi_config_name,
    cache_dir=model_args.cache_dir,
    revision=model_args.model_revision,
    use_auth_token=True if model_args.use_auth_token else None,
)
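For example, here is a minimal sketch that tweaks the loaded configuration before handing it to the trainer. The attribute name mirrors a field from the JSON example above, and the surrounding objects (model, training_args, datasets, tokenizer) are the ones from the earlier snippet:

# Sketch only: tweak the loaded Gaudi configuration before training.
# The attribute mirrors the "use_fused_adam" field of the JSON shown above.
gaudi_config.use_fused_adam = False  # e.g. fall back to the regular AdamW

trainer = GaudiTrainer(
    model=model,
    gaudi_config=gaudi_config,  # pass the instance instead of gaudi_config_name
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)

# Training and evaluation then work exactly as with the Transformers Trainer.
trainer.train()
metrics = trainer.evaluate()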

Diffusers Interface

You can generate images from prompts using Stable Diffusion on Gaudi with the GaudiStableDiffusionPipeline class and the GaudiDDIMScheduler, both of which have been optimized for HPUs. Here is how to use them and how they differ from the 🤗 Diffusers library:

- from diffusers import DDIMScheduler, StableDiffusionPipeline
+ from optimum.habana import GaudiConfig
+ from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline


model_name = "CompVis/stable-diffusion-v1-4"

- scheduler = DDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
+ scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

- pipeline = StableDiffusionPipeline.from_pretrained(
+ pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
+   use_habana=True,
+   use_hpu_graphs=True,
+   gaudi_config="Habana/stable-diffusion",
)

outputs = pipeline(
    ["An image of a squirrel in Picasso style"],
    num_images_per_prompt=16,
+   batch_size=4,
)
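The call mirrors the Diffusers API, so the generated images can be retrieved from the output object. A short sketch, assuming the output exposes the usual images list of PIL images as in Diffusers' StableDiffusionPipelineOutput:

# Sketch: assumes the output mirrors Diffusers' StableDiffusionPipelineOutput,
# whose `images` attribute is a list of PIL images.
for i, image in enumerate(outputs.images):
    image.save(f"squirrel_{i}.png")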

Documentation

Check the documentation of Optimum Habana for more advanced usage.

Validated Models

The following model architectures, tasks and device distributions have been validated for 🤗 Optimum Habana:

|                  | Text Classification | Question Answering | Language Modeling | Summarization | Translation | Image Classification | Audio Classification | Speech Recognition | Single Card | Multi Card | DeepSpeed |
|------------------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| BERT             | ✓ | ✓ | ✓ |   |   |   |   |   | ✓ | ✓ | ✓ |
| RoBERTa          |   | ✓ | ✓ |   |   |   |   |   | ✓ | ✓ | ✓ |
| ALBERT           |   | ✓ | ✓ |   |   |   |   |   | ✓ | ✓ | ✓ |
| DistilBERT       |   | ✓ | ✓ |   |   |   |   |   | ✓ | ✓ | ✓ |
| GPT2             |   |   | ✓ |   |   |   |   |   | ✓ | ✓ | ✓ |
| T5               |   |   |   | ✓ | ✓ |   |   |   | ✓ | ✓ | ✓ |
| ViT              |   |   |   |   |   | ✓ |   |   | ✓ | ✓ | ✓ |
| Swin             |   |   |   |   |   | ✓ |   |   | ✓ | ✓ | ✓ |
| Wav2Vec2         |   |   |   |   |   |   | ✓ | ✓ | ✓ | ✓ | ✓ |
| Stable Diffusion |   |   |   |   |   |   |   |   | ✓ |   |   |

Other models and tasks supported by the 🤗 Transformers library may also work. You can refer to this section for using them with 🤗 Optimum Habana. In addition, this page explains how to modify any example from the 🤗 Transformers library to make it work with 🤗 Optimum Habana.

If you encounter any issue with these models, please open an issue or submit a pull request.

Gaudi Setup

Please refer to Habana Gaudi's official installation guide.

Tests should be run in a Docker container based on Habana Docker images.

The current version has been validated for SynapseAI 1.7.
