
Optimum Neuron

🤗 Optimum Neuron is the interface between the 🤗 Transformers library and AWS Accelerators, including AWS Trainium and AWS Inferentia. It provides a set of tools enabling easy model loading, training and inference on single- and multi-Accelerator settings for different downstream tasks. The list of officially validated models and tasks is available in the Optimum Neuron documentation. Users can try other models and tasks with only a few changes.

Install

To install the latest release of this package:

  • For AWS Trainium (trn1) or AWS Inferentia2 (inf2):
pip install --upgrade-strategy eager optimum[neuronx]
  • For AWS Inferentia (inf1):
pip install --upgrade-strategy eager optimum[neuron]

Optimum Neuron is a fast-moving project, and you may want to install it from source:

pip install git+https://github.com/huggingface/optimum-neuron.git

Alternatively, you can clone the repository and install the package from a local checkout:

git clone https://github.com/huggingface/optimum-neuron.git
cd optimum-neuron
python setup.py install

Make sure that you have installed the Neuron driver and tools before installing optimum-neuron; a more extensive setup guide is available in the AWS Neuron documentation.

Last but not least, don't forget to install the requirements for every example:

cd <example-folder>
pip install -r requirements.txt

Quick Start

🤗 Optimum Neuron was designed with one goal in mind: to make training and inference straightforward for any 🤗 Transformers user while leveraging the complete power of AWS Accelerators.

Training

There are two main classes one needs to know:

  • TrainiumArgumentParser: inherits from the original HfArgumentParser in Transformers, with additional checks on the argument values to make sure that they will work well with AWS Trainium instances (see the sketch after this list).
  • NeuronTrainer: this version of the Trainer takes care of performing the proper checks and adaptations to the supported models to make them trainable on AWS Trainium instances.
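
As a minimal sketch of the parser in use (assuming TrainiumArgumentParser is importable from the optimum.neuron top level and mirrors the HfArgumentParser API):

from transformers import TrainingArguments

# Assumption: TrainiumArgumentParser is exposed at the optimum.neuron top
# level and follows the transformers.HfArgumentParser interface.
from optimum.neuron import TrainiumArgumentParser

parser = TrainiumArgumentParser(TrainingArguments)
# Parse CLI arguments into a TrainingArguments instance, with extra
# Trainium-specific validation applied to the values.
(training_args,) = parser.parse_args_into_dataclasses()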

The NeuronTrainer is very similar to the 🤗 Transformers Trainer, and adapting a script that uses the Trainer to make it work with Trainium mostly consists in swapping the Trainer class for the NeuronTrainer one. That is how most of the example scripts were adapted from their original counterparts.

from transformers import TrainingArguments
+from optimum.neuron import NeuronTrainer as Trainer

training_args = TrainingArguments(
  # training arguments...
)

# A lot of code here

# Initialize our Trainer
trainer = Trainer(
    model=model,
    args=training_args,  # Original training arguments.
    train_dataset=train_dataset if training_args.do_train else None,
    eval_dataset=eval_dataset if training_args.do_eval else None,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
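
From there, training proceeds exactly as with the stock Trainer; a minimal sketch of the usual follow-up calls:

# Train and save the model, using the standard Trainer API.
train_result = trainer.train()
trainer.save_model()
trainer.log_metrics("train", train_result.metrics)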

Inference

You can compile and export your 🤗 Transformers models to a serialized format before inference on Neuron devices:

optimum-cli export neuron \
  --model distilbert-base-uncased-finetuned-sst-2-english \
  --batch_size 1 \
  --sequence_length 32 \
  --auto_cast matmul \
  --auto_cast_type bf16 \
  distilbert_base_uncased_finetuned_sst2_english_neuron/

The command above will export distilbert-base-uncased-finetuned-sst-2-english with static shapes: batch_size=1 and sequence_length=32, and cast all matmul operations from FP32 to BF16. Check out the exporter guide for more compilation options.
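
The same export can also be done from Python; a minimal sketch, assuming that export=True triggers compilation at load time and that the static shapes and compiler options mirror the CLI flags above as keyword arguments:

from optimum.neuron import NeuronModelForSequenceClassification

# Assumption: export=True compiles the model for Neuron at load time, with
# batch_size/sequence_length and auto_cast options matching the CLI flags.
model = NeuronModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    export=True,
    batch_size=1,
    sequence_length=32,
    auto_cast="matmul",
    auto_cast_type="bf16",
)
model.save_pretrained("distilbert_base_uncased_finetuned_sst2_english_neuron/")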

Then you can run the exported Neuron model on Neuron devices with the NeuronModelForXXX classes, which are similar to the AutoModelForXXX classes in 🤗 Transformers:

from transformers import AutoTokenizer
-from transformers import AutoModelForSequenceClassification
+from optimum.neuron import NeuronModelForSequenceClassification

# PyTorch checkpoint
-model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
+model = NeuronModelForSequenceClassification.from_pretrained("distilbert_base_uncased_finetuned_sst2_english_neuron")

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("Hamilton is considered to be the best musical of past years.", return_tensors="pt")

logits = model(**inputs).logits
print(model.config.id2label[logits.argmax().item()])
# 'POSITIVE'
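
Keep in mind that the exported model has static input shapes. A hedged sketch that pads explicitly to the compiled sequence length (depending on your optimum-neuron version, the model may also pad inputs for you):

# Pad (and truncate) to the compiled static shape: sequence_length=32.
inputs = tokenizer(
    "The staging was clumsy, but the score saved the evening.",
    padding="max_length",
    max_length=32,
    truncation=True,
    return_tensors="pt",
)
logits = model(**inputs).logits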

Documentation

Check out the documentation of Optimum Neuron for more advanced usage.

If you find any issues while using these tools, please open an issue or submit a pull request.

Text-generation-inference

This repository maintains a text-generation-inference (TGI) Docker image for deployment on AWS Inferentia2.
