🤗 Optimum ExecuTorch
Optimize and deploy Hugging Face models with ExecuTorch
📋 Overview
Optimum ExecuTorch enables efficient deployment of transformer models using Meta's ExecuTorch framework. It provides:
- 🔄 Easy conversion of Hugging Face models to ExecuTorch format
- ⚡ Optimized inference with hardware-specific optimizations
- 🤝 Seamless integration with Hugging Face Transformers
- 📱 Efficient deployment on various devices
⚡ Quick Installation
1. Create a virtual environment
Install conda on your machine, then create a virtual environment to manage the project's dependencies.
conda create -n optimum-executorch python=3.11
conda activate optimum-executorch
2. Install optimum-executorch from source
git clone https://github.com/huggingface/optimum-executorch.git
cd optimum-executorch
pip install '.[dev]'
- 🔜 Installation from PyPI coming soon...
3. Install dependencies in dev mode
To access every available optimization and experiment with the newest features, run:
python install_dev.py
This script installs executorch, torch, torchao, transformers, and other dependencies from nightly builds or from source, giving you access to the latest models and optimizations.
To leave an existing torch installation untouched, run install_dev.py with --skip_override_torch so it is not overwritten.
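For example:
python install_dev.py --skip_override_torch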
🎯 Quick Start
There are two ways to use Optimum ExecuTorch:
Option 1: Export and Load in One Python API
from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

# Load and export the model on-the-fly
model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"
model = ExecuTorchModelForCausalLM.from_pretrained(
    model_id,
    recipe="xnnpack",
    attn_implementation="custom_sdpa",  # Use the custom SDPA implementation for better performance
    use_custom_kv_cache=True,  # Use the custom KV cache for better performance
    **{"qlinear": "8da4w", "qembedding": "8w"},  # Quantize linear and embedding layers
)

# Generate text right away
tokenizer = AutoTokenizer.from_pretrained(model_id)
generated_text = model.text_generation(
    tokenizer=tokenizer,
    prompt="Once upon a time",
    max_seq_len=128,
)
print(generated_text)
Note: If an ExecuTorch model is already cached on the Hugging Face Hub, the API will automatically skip the export step and load the cached .pte file. To test this, replace the model_id in the example above with "executorch-community/SmolLM2-135M", where the .pte file is pre-cached. Additionally, the .pte file can be directly associated with the eager model, as demonstrated in this example.
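For instance, loading the pre-cached repo mentioned above skips export entirely:

from optimum.executorch import ExecuTorchModelForCausalLM

# Loads the pre-exported .pte directly from the Hub; no export step runs
model = ExecuTorchModelForCausalLM.from_pretrained("executorch-community/SmolLM2-135M")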
Option 2: Export and Load Separately
Step 1: Export your model
Use the CLI tool to convert your model to ExecuTorch format:
optimum-cli export executorch \
--model "HuggingFaceTB/SmolLM2-135M-Instruct" \
--task "text-generation" \
--recipe "xnnpack" \
--use_custom_sdpa \
--use_custom_kv_cache \
--qlinear 8da4w \
--qembedding 8w \
--output_dir="hf_smollm2"
Explore the various export options by running the command: optimum-cli export executorch --help.
To read more about how to export different types of models with Optimum ExecuTorch, please refer to the export README.
Step 2: Validate the Exported Model on Host Using the Python API
Use the exported model for text generation:
from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

# Load the exported model
model = ExecuTorchModelForCausalLM.from_pretrained("./hf_smollm2")

# Initialize tokenizer and generate text
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
generated_text = model.text_generation(
    tokenizer=tokenizer,
    prompt="Once upon a time",
    max_seq_len=128,
)
print(generated_text)
Step 3: Run inference on-device
To perform on-device inference, you can use ExecuTorch’s sample runner or the example iOS/Android applications. For detailed instructions, refer to the ExecuTorch Sample Runner guide.
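As a rough sketch, running the exported .pte with the llama runner from the ExecuTorch examples might look like the following (the binary path, flags, and tokenizer file are illustrative assumptions; follow the guide above for the exact invocation):

# Illustrative only -- build the runner per the ExecuTorch docs first
cmake-out/examples/models/llama/llama_main \
  --model_path hf_smollm2/model.pte \
  --tokenizer_path tokenizer.model \
  --prompt "Once upon a time"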
⚙️ Optimizations
Custom Operators
Transformer models exported with Optimum ExecuTorch utilize:
- A custom SDPA for CPU based on Flash Attention, boosting performance by around 3x compared to the default SDPA.
- A custom KV cache that uses a custom op for efficient in-place cache updates on CPU, boosting performance by 2.5x compared to the default static KV cache.
Backend Delegation
Currently, Optimum-ExecuTorch supports the XNNPACK Backend for CPU and CoreML Backend for GPU on Apple devices.
For a comprehensive overview of all backends supported by ExecuTorch, please refer to the ExecuTorch Backend Overview.
Quantization
We currently support Post-Training Quantization (PTQ) for linear layers and embeddings using the TorchAO quantization library.
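For example, the quantization options used throughout this README can be passed directly to from_pretrained (values taken from the examples above; "8da4w" denotes 8-bit dynamic activations with 4-bit weights, "8w" denotes 8-bit weight-only):

from optimum.executorch import ExecuTorchModelForCausalLM

# PTQ via TorchAO: 4-bit weights with 8-bit dynamic activations for linear
# layers, 8-bit weight-only quantization for embeddings
model = ExecuTorchModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-135M-Instruct",
    recipe="xnnpack",
    **{"qlinear": "8da4w", "qembedding": "8w"},
)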
🤗 Supported Models
The following models have been successfully tested with ExecuTorch. For details on the specific optimizations supported and how to use them for each model, please consult their respective test files in the tests/models/ directory.
Text Models
We currently support a wide range of popular transformer models, including encoder-only, decoder-only, and encoder-decoder architectures, as well as models specialized for tasks such as text generation, translation, summarization, and mask prediction. These models reflect current trends and popularity across the Hugging Face community:
LLMs (Large Language Models)
Decoder-only
- Codegen: Salesforce's codegen-350M-mono and its variants
- Gemma: Gemma-2b and its variants
- Gemma2: Gemma-2-2b and its variants
- Gemma3: Gemma-3-1b and its variants (💡[NEW] 270M, 1B)
- Glm: glm-edge-1.5b and its variants
- Gpt2: gpt-sw3-126m and its variants
- GptJ: gpt-j-405M and its variants
- GptNeoX: EleutherAI's pythia-14m and its variants
- GptNeoXJapanese: gpt-neox-japanese-2.7b and its variants
- Granite: granite-3.3-2b-instruct and its variants
- Llama: Llama-3.2-1B and its variants
- Mistral: Ministral-3b-instruct and its variants
- Qwen2: Qwen2.5-0.5B and its variants
- Qwen3: Qwen3-0.6B, Qwen3-Embedding-0.6B and other variants
- Olmo: OLMo-1B-hf and its variants
- Phi: JSL-MedPhi2-2.7B and its variants
- Phi4: Phi-4-mini-instruct and its variants
- Smollm: 🤗 SmolLM2-135M and its variants
- Smollm3: 🤗 SmolLM3-3B and its variants
- Starcoder2: starcoder2-3b and its variants
Encoder-decoder (Seq2Seq)
- T5: Google's T5 and its variants
NLU (Natural Language Understanding)
- Albert: albert-base-v2 and its variants
- Bert: Google's bert-base-uncased and its variants
- Distilbert: distilbert-base-uncased and its variants
- Eurobert: EuroBERT-210m and its variants
- Roberta: FacebookAI's xlm-roberta-base and its variants
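These encoder models can be exported and loaded much like the causal-LM example above. A minimal sketch, assuming the ExecuTorchModelForMaskedLM class follows the same from_pretrained pattern (the class name and model id here are assumptions, not taken from this README):

from optimum.executorch import ExecuTorchModelForMaskedLM

# Assumed analogue of ExecuTorchModelForCausalLM for fill-mask models;
# exports on the fly with the XNNPACK recipe shown earlier
model = ExecuTorchModelForMaskedLM.from_pretrained(
    "google-bert/bert-base-uncased",
    recipe="xnnpack",
)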
Vision Models
- Cvt: Convolutional Vision Transformer
- Deit: Distilled Data-efficient Image Transformer (base-sized)
- Dit: Document Image Transformer (base-sized)
- EfficientNet: EfficientNet (b0-b7 sized)
- Focalnet: FocalNet (tiny-sized)
- Mobilevit: Apple's MobileViT xx-small
- Mobilevit2: Apple's MobileViTv2
- Pvt: Pyramid Vision Transformer (tiny-sized)
- Swin: Swin Transformer (tiny-sized)
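Vision models export through the same CLI as text models. A hedged sketch (the model id and task name are illustrative assumptions, not taken from this README):

optimum-cli export executorch \
  --model "apple/mobilevit-xx-small" \
  --task "image-classification" \
  --recipe "xnnpack" \
  --output_dir="mobilevit"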
Audio Models
ASR (Automatic Speech Recognition)
- Whisper: OpenAI's Whisper and its variants
Speech-to-Text
- 💡[NEW] Granite Speech: granite-speech-3.3-2b and its variants
- 💡[NEW] Voxtral: Mistral's newest speech/text-to-text model
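Audio models follow the same export flow; a hedged sketch (the model id and task name are illustrative assumptions):

optimum-cli export executorch \
  --model "openai/whisper-tiny" \
  --task "automatic-speech-recognition" \
  --recipe "xnnpack" \
  --output_dir="whisper"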
📌 Note: This list is continuously expanding; more models will be added as support grows.
🚀 Benchmarks on Mobile Devices
The following benchmarks show example decode performance (tokens/sec) across Android and iOS devices for popular edge LLMs.
| Model | Samsung Galaxy S22 5G (Android 13) | Samsung Galaxy S22 Ultra 5G (Android 14) | iPhone 15 (iOS 18.0) | iPhone 15 Plus (iOS 17.4.1) | iPhone 15 Pro (iOS 18.4.1) |
|---|---|---|---|---|---|
| SmolLM2-135M | 202.28 | 202.61 | 7.47 | 6.43 | 29.64 |
| Qwen3-0.6B | 59.16 | 56.49 | 7.05 | 5.48 | 17.99 |
| google/gemma-3-1b-it | 25.07 | 23.89 | 21.51 | 21.33 | 17.8 |
| Llama-3.2-1B | 44.91 | 37.39 | 11.04 | 8.93 | 25.78 |
| OLMo-1B | 44.98 | 38.22 | 14.49 | 8.72 | 20.24 |
📊 View Live Benchmarks: Explore comprehensive performance data, compare models across devices, and track performance trends over time on the ExecuTorch Benchmark Dashboard.
Performance measured with custom SDPA, KV-cache optimization, and 8da4w quantization. Results may vary based on device conditions and prompt characteristics.
🛠️ Advanced Usage
Check our ExecuTorch GitHub repo directly for:
- More backends and performance optimization options
- Deployment guides for Android, iOS, and embedded devices
- Additional examples and benchmarks
🤝 Contributing
We love your input! We want to make contributing to Optimum ExecuTorch as easy and transparent as possible. Check out the project's GitHub repository to get started.
📝 License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
📫 Get in Touch
- Report bugs through GitHub Issues