
Optimum ExecuTorch is an interface between the Hugging Face libraries and ExecuTorch


🤗 Optimum ExecuTorch

Optimize and deploy Hugging Face models with ExecuTorch

Documentation | ExecuTorch | Hugging Face

📋 Overview

Optimum ExecuTorch enables efficient deployment of transformer models using Meta's ExecuTorch framework. It provides:

  • 🔄 Easy conversion of Hugging Face models to ExecuTorch format
  • ⚡ Optimized inference with hardware-specific optimizations
  • 🤝 Seamless integration with Hugging Face Transformers
  • 📱 Efficient deployment on various devices

⚡ Quick Installation

1. Create a virtual environment

Install conda on your machine, then create a virtual environment to manage the dependencies.

conda create -n optimum-executorch python=3.11
conda activate optimum-executorch

2. Install optimum-executorch from source

git clone https://github.com/huggingface/optimum-executorch.git
cd optimum-executorch
pip install '.[dev]'
  • 🔜 Installation from PyPI is coming soon.

3. Install dependencies in dev mode

To access every available optimization and experiment with the newest features, run:

python install_dev.py

This script will install executorch, torch, torchao, transformers, etc. from nightly builds or from source to access the latest models and optimizations.

To leave an existing ExecuTorch installation untouched, pass --skip_override_torch so the script does not overwrite it:
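python install_dev.py --skip_override_torch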

🎯 Quick Start

There are two ways to use Optimum ExecuTorch:

Option 1: Export and Load in One Python API

from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

# Load and export the model on-the-fly
model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"
model = ExecuTorchModelForCausalLM.from_pretrained(
    model_id,
    recipe="xnnpack",
    attn_implementation="custom_sdpa",  # Use custom SDPA implementation for better performance
    use_custom_kv_cache=True,  # Use custom KV cache for better performance
    **{"qlinear": "8da4w", "qembedding": "8w"},  # Quantize linear and embedding layers
)

# Generate text right away
tokenizer = AutoTokenizer.from_pretrained(model_id)
generated_text = model.text_generation(
    tokenizer=tokenizer,
    prompt="Once upon a time",
    max_seq_len=128,
)
print(generated_text)

Note: If an ExecuTorch model is already cached on the Hugging Face Hub, the API will automatically skip the export step and load the cached .pte file. To test this, replace the model_id in the example above with "executorch-community/SmolLM2-135M", where the .pte file is pre-cached. Additionally, the .pte file can be directly associated with the eager model, as demonstrated in this example.
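For instance, a minimal sketch that loads the pre-cached export (no export step runs; the hosted .pte file is fetched and loaded directly):

from optimum.executorch import ExecuTorchModelForCausalLM

# This repo already hosts a .pte file, so from_pretrained loads the cached
# program instead of exporting the model again.
model = ExecuTorchModelForCausalLM.from_pretrained(
    "executorch-community/SmolLM2-135M",
    recipe="xnnpack",
)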

Option 2: Export and Load Separately

Step 1: Export your model

Use the CLI tool to convert your model to ExecuTorch format:

optimum-cli export executorch \
    --model "HuggingFaceTB/SmolLM2-135M-Instruct" \
    --task "text-generation" \
    --recipe "xnnpack" \
    --use_custom_sdpa \
    --use_custom_kv_cache \
    --qlinear 8da4w \
    --qembedding 8w \
    --output_dir="hf_smollm2"

Explore the various export options by running the command: optimum-cli export executorch --help. To read more about how to export different types of models with Optimum ExecuTorch, please refer to the export README.
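As one illustration, an encoder-decoder model such as T5 can be exported with the same CLI by changing the model and task. This is a hedged sketch: the model id and the text2text-generation task name follow common Hugging Face and Optimum conventions and should be verified against the --help output above:

optimum-cli export executorch \
    --model "google-t5/t5-small" \
    --task "text2text-generation" \
    --recipe "xnnpack" \
    --output_dir="t5_small"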

Step 2: Validate the Exported Model on Host Using the Python API

Use the exported model for text generation:

from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

# Load the exported model
model = ExecuTorchModelForCausalLM.from_pretrained("./hf_smollm2")

# Initialize tokenizer and generate text
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
generated_text = model.text_generation(
    tokenizer=tokenizer,
    prompt="Once upon a time",
    max_seq_len=128
)
print(generated_text)

Step 3: Run inference on-device

To perform on-device inference, you can use ExecuTorch’s sample runner or the example iOS/Android applications. For detailed instructions, refer to the ExecuTorch Sample Runner guide.
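As a rough sketch, a Llama runner built from the ExecuTorch repo can execute the exported program directly. The binary path, artifact names, and flags below are assumptions based on the ExecuTorch examples and may differ across versions; treat the linked guide as authoritative:

# Built from the ExecuTorch repo (examples/models/llama); paths and flags may vary.
cmake-out/examples/models/llama/llama_main \
    --model_path hf_smollm2/model.pte \
    --tokenizer_path hf_smollm2/tokenizer.model \
    --prompt "Once upon a time"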

⚙️ Optimizations

Custom Operators

Models exported with Optimum ExecuTorch use:

  • A custom SDPA for CPU based on Flash Attention, boosting performance by around 3x compared to the default SDPA.
  • A custom KV cache that uses a custom op for efficient in-place cache updates on CPU, boosting performance by 2.5x compared to the default static KV cache.
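Both operators are opt-in at export time. As in the Quick Start, they map to the attn_implementation and use_custom_kv_cache arguments of the Python API (or --use_custom_sdpa and --use_custom_kv_cache on the CLI):

from optimum.executorch import ExecuTorchModelForCausalLM

# Opt in to both custom operators when exporting.
model = ExecuTorchModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-135M-Instruct",
    recipe="xnnpack",
    attn_implementation="custom_sdpa",  # Flash-Attention-based SDPA for CPU
    use_custom_kv_cache=True,           # custom op for in-place KV cache updates
)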

Backends Delegation

Currently, Optimum ExecuTorch supports the XNNPACK backend for CPU and the CoreML backend for GPU on Apple devices.

For a comprehensive overview of all backends supported by ExecuTorch, please refer to the ExecuTorch Backend Overview.
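The target backend is selected through the recipe argument at export time. Below is a minimal sketch that assumes the CoreML recipe is named "coreml"; run optimum-cli export executorch --help to confirm the recipe names available in your version:

optimum-cli export executorch \
    --model "HuggingFaceTB/SmolLM2-135M-Instruct" \
    --task "text-generation" \
    --recipe "coreml" \
    --output_dir="hf_smollm2_coreml"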

Quantization

We currently support Post-Training Quantization (PTQ) for linear layers and embeddings using the TorchAO quantization library.
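In TorchAO's naming scheme, "8da4w" denotes 8-bit dynamically quantized activations with 4-bit weights, and "8w" denotes 8-bit weight-only quantization. A minimal sketch that applies both at export time, reusing the qlinear and qembedding arguments from the Quick Start:

from optimum.executorch import ExecuTorchModelForCausalLM

# Post-training quantization at export time: 8-bit dynamic activations with
# 4-bit weights for linear layers, 8-bit weight-only for embeddings.
model = ExecuTorchModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-135M-Instruct",
    recipe="xnnpack",
    **{"qlinear": "8da4w", "qembedding": "8w"},
)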

🤗 Supported Models

The following models have been successfully tested with ExecuTorch. For details on the specific optimizations supported and how to use them for each model, please consult the respective test files in the tests/models/ directory.

Text Models

We currently support a wide range of popular transformer models, including encoder-only, decoder-only, and encoder-decoder architectures, as well as models specialized for tasks such as text generation, translation, summarization, and masked-token prediction. These models reflect current trends and popularity across the Hugging Face community:

LLMs (Large Language Models)

Decoder-only
  • Codegen: Salesforce's codegen-350M-mono and its variants
  • Gemma: Gemma-2b and its variants
  • Gemma2: Gemma-2-2b and its variants
  • Gemma3: Gemma-3-1b and its variants (💡[NEW] 270M, 1B)
  • Glm: glm-edge-1.5b and its variants
  • Gpt2: gpt-sw3-126m and its variants
  • GptJ: gpt-j-405M and its variants
  • GptNeoX: EleutherAI's pythia-14m and its variants
  • GptNeoXJapanese: gpt-neox-japanese-2.7b and its variants
  • Granite: granite-3.3-2b-instruct and its variants
  • Llama: Llama-3.2-1B and its variants
  • Mistral: Ministral-3b-instruct and its variants
  • Qwen2: Qwen2.5-0.5B and its variants
  • Qwen3: Qwen3-0.6B, Qwen3-Embedding-0.6B and other variants
  • Olmo: OLMo-1B-hf and its variants
  • Phi: JSL-MedPhi2-2.7B and its variants
  • Phi4: Phi-4-mini-instruct and its variants
  • Smollm: 🤗 SmolLM2-135M and its variants
  • Smollm3: 🤗 SmolLM3-3B and its variants
  • Starcoder2: starcoder2-3b and its variants
Encoder-decoder (Seq2Seq)
  • T5: Google's T5 and its variants

NLU (Natural Language Understanding)

  • Albert: albert-base-v2 and its variants
  • Bert: Google's bert-base-uncased and its variants
  • Distilbert: distilbert-base-uncased and its variants
  • Eurobert: EuroBERT-210m and its variants
  • Roberta: FacebookAI's xlm-roberta-base and its variants

Vision Models

  • Cvt: Convolutional Vision Transformer
  • Deit: Distilled Data-efficient Image Transformer (base-sized)
  • Dit: Document Image Transformer (base-sized)
  • EfficientNet: EfficientNet (b0-b7 sized)
  • Focalnet: FocalNet (tiny-sized)
  • Mobilevit: Apple's MobileViT xx-small
  • Mobilevit2: Apple's MobileViTv2
  • Pvt: Pyramid Vision Transformer (tiny-sized)
  • Swin: Swin Transformer (tiny-sized)

Audio Models

ASR (Automatic Speech Recognition)

  • Whisper: OpenAI's Whisper and its variants

Speech Text-to-Text (Multimodal)

  • 💡[NEW] Granite Speech: granite-speech-3.3-2b and its variants
  • 💡[NEW] Voxtral: Mistral's newest speech/text-to-text model

📌 Note: This list is continuously expanding; more models will be added as support grows.

🚀 Benchmarks on Mobile Devices

The following benchmarks show example decode performance (tokens/sec) across Android and iOS devices for popular edge LLMs.

Model                  Samsung Galaxy S22 5G   Samsung Galaxy S22 Ultra 5G   iPhone 15    iPhone 15 Plus   iPhone 15 Pro
                       (Android 13)            (Android 14)                  (iOS 18.0)   (iOS 17.4.1)     (iOS 18.4.1)
SmolLM2-135M           202.28                  202.61                        7.47         6.43             29.64
Qwen3-0.6B             59.16                   56.49                         7.05         5.48             17.99
google/gemma-3-1b-it   25.07                   23.89                         21.51        21.33            17.8
Llama-3.2-1B           44.91                   37.39                         11.04        8.93             25.78
OLMo-1B                44.98                   38.22                         14.49        8.72             20.24

📊 View Live Benchmarks: Explore comprehensive performance data, compare models across devices, and track performance trends over time on the ExecuTorch Benchmark Dashboard.

Performance measured with custom SDPA, KV-cache optimization, and 8da4w quantization. Results may vary based on device conditions and prompt characteristics.

🛠️ Advanced Usage

Check our ExecuTorch GitHub repo directly for:

  • More backends and performance optimization options
  • Deployment guides for Android, iOS, and embedded devices
  • Additional examples and benchmarks

🤝 Contributing

We love your input! We want to make contributing to Optimum ExecuTorch as easy and transparent as possible. Check out the contributing guidelines in the GitHub repository.

📝 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

📫 Get in Touch
