Optimum Habana
🤗 Optimum Habana is the interface between the 🤗 Transformers library and Habana's Gaudi processor (HPU). It provides a set of tools enabling easy model loading and fine-tuning on single- and multi-HPU settings for different downstream tasks. The current release focuses on question answering and text classification and enables users to try other models for other tasks with only a few changes.
What is a Habana Processing Unit (HPU)?
Quote from the Hugging Face blog post:
Habana Gaudi training solutions, which power Amazon’s EC2 DL1 instances and Supermicro’s X12 Gaudi AI Training Server, deliver price/performance up to 40% lower than comparable training solutions and enable customers to train more while spending less. The integration of ten 100 Gigabit Ethernet ports onto every Gaudi processor enables system scaling from 1 to thousands of Gaudis with ease and cost-efficiency. Habana’s SynapseAI® is optimized—at inception—to enable Gaudi performance and usability, supports TensorFlow and PyTorch frameworks, with a focus on computer vision and natural language processing applications.
Install
To install the latest release of this package:
pip install optimum[habana]
Optimum Habana is a fast-moving project, and you may want to install it from source:
pip install git+https://github.com/huggingface/optimum-habana.git
Last but not least, don't forget to install the requirements for every example:
cd <example-folder>
pip install -r requirements.txt
Alternatively, you can install the package without pip as follows:
git clone https://github.com/huggingface/optimum-habana.git
cd optimum-habana
python setup.py install
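Once installed, a quick way to verify the setup is to import the main classes. This is a minimal sanity check, not part of the official instructions:

```python
# Minimal sanity check: these imports should succeed after installation.
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

print("optimum-habana imported successfully")
```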
How to use it?
🤗 Optimum Habana was designed with one goal in mind: make training and evaluation straightforward for any 🤗 Transformers user while leveraging the complete power of Gaudi processors. There are two main classes one needs to know:
- GaudiTrainer: the trainer class that takes care of compiling (lazy or eager mode) and distributing the model to run on HPUs, and of performing training and evaluation.
- GaudiConfig: the class that lets you configure Habana Mixed Precision and decide whether optimized operators and optimizers should be used.
The GaudiTrainer is very similar to the 🤗 Transformers Trainer, and adapting a script that uses the Trainer to make it work with Gaudi mostly consists of simply swapping the Trainer class for the GaudiTrainer one.
That's how most of the example scripts were adapted from their original counterparts.
Original script:
from transformers import Trainer, TrainingArguments
# A lot of code here
# Initialize our Trainer
trainer = Trainer(
    model=model,
    args=training_args,  # Original training arguments.
    train_dataset=train_dataset if training_args.do_train else None,
    eval_dataset=eval_dataset if training_args.do_eval else None,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
Transformed version that can run on Gaudi:
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments
# A lot of the same code as the original script here
# Loading the GaudiConfig needed by the GaudiTrainer to fine-tune the model on HPUs
gaudi_config = GaudiConfig.from_pretrained(
    training_args.gaudi_config_name,
    cache_dir=model_args.cache_dir,
    revision=model_args.model_revision,
    use_auth_token=True if model_args.use_auth_token else None,
)
# Initialize our Trainer
trainer = GaudiTrainer(
    model=model,
    gaudi_config=gaudi_config,
    # The training arguments differ a bit from the original ones, which is why we use GaudiTrainingArguments
    args=training_args,
    train_dataset=train_dataset if training_args.do_train else None,
    eval_dataset=eval_dataset if training_args.do_eval else None,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
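If you build the training arguments directly in code rather than parsing them from the command line, the snippet below is a minimal sketch of what that could look like. The option names use_habana and use_lazy_mode are assumptions based on typical GaudiTrainingArguments options and may differ between releases, and Habana/bert-base-uncased is only an example of a Gaudi configuration repository name on the Hub:

```python
from optimum.habana import GaudiTrainingArguments

# Sketch of constructing Gaudi training arguments directly.
# `use_habana` and `use_lazy_mode` are assumed option names; check the
# GaudiTrainingArguments documentation for your installed version.
training_args = GaudiTrainingArguments(
    output_dir="./results",    # where checkpoints and logs are written
    use_habana=True,           # run training on HPUs (assumed flag)
    use_lazy_mode=True,        # lazy compilation mode (assumed flag)
    gaudi_config_name="Habana/bert-base-uncased",  # example Hub repository
    num_train_epochs=3,
    per_device_train_batch_size=8,
)
```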
where training_args.gaudi_config_name is the name of a model from the Hub (Gaudi configurations are stored in model repositories). You can also give the path to a custom Gaudi configuration written in a JSON file such as this one:
{
  "use_habana_mixed_precision": true,
  "hmp_opt_level": "O1",
  "hmp_is_verbose": false,
  "use_fused_adam": true,
  "use_fused_clip_norm": true,
  "hmp_bf16_ops": [
    "add",
    "addmm",
    "bmm",
    "div",
    "dropout",
    "gelu",
    "iadd",
    "linear",
    "layer_norm",
    "matmul",
    "mm",
    "rsub",
    "softmax",
    "truediv"
  ],
  "hmp_fp32_ops": [
    "embedding",
    "nll_loss",
    "log_softmax"
  ]
}
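Assuming GaudiConfig.from_pretrained accepts a local JSON path as described above, loading such a custom configuration could look like this (the file name is hypothetical):

```python
from optimum.habana import GaudiConfig

# Load a custom Gaudi configuration from a local JSON file
# ("./my_gaudi_config.json" is a hypothetical path).
gaudi_config = GaudiConfig.from_pretrained("./my_gaudi_config.json")

# Pass the loaded configuration to GaudiTrainer through its
# `gaudi_config` argument, exactly as in the script above.
```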
Validated Models
The following model architectures, tasks and device distributions have been validated for 🤗 Optimum Habana:
| Architecture | Text Classification | Question Answering | Single Card | Multi Card |
|---|---|---|---|---|
| BERT | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| RoBERTa | ✗ | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| ALBERT | ✗ | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| DistilBERT | ✗ | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
Other models and tasks supported by the 🤗 Transformers library may also work. You can refer to this section for guidance on using them with 🤗 Optimum Habana. In addition, this page explains how to modify any example from the 🤗 Transformers library to make it work with 🤗 Optimum Habana.
If you run into any issues while using these models, please open an issue or submit a pull request.
Gaudi Setup
Please refer to Habana Gaudi's official installation guide.
Tests should be run in a Docker container based on Habana Docker images.