Popular Machine Learning models optimized for Qualcomm chipsets.
Qualcomm® AI Hub Models
The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for deployment on Qualcomm® devices.
See the Model Support Data section below for supported on-device runtimes, device hardware & precision, chipsets, and devices.
Setup
1. Install Python Package
The package is available via pip:
# NOTE for Snapdragon X Elite users:
# Only AMD64 (x86-64) Python is supported on Windows.
# Installation will fail when using Windows ARM64 Python.
pip install qai_hub_models
Some models (e.g. YOLOv7) require additional dependencies. See the model's README (at qai_hub_models/models/<model_id>) for installation instructions.
2. Configure AI Hub Workbench Access
Many features of AI Hub Models (such as model compilation, on-device profiling, etc.) require access to Qualcomm® AI Hub Workbench:
- Create a Qualcomm® ID, and use it to log in to Qualcomm® AI Hub Workbench.
- Configure your API token:
qai-hub configure --api_token API_TOKEN
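On success, the configure command stores your token in a local client config file. As an orientation aid only: in recent client versions this is typically ~/.qai_hub/client.ini (the exact path and keys may differ between versions), with contents along these lines:

```ini
[api]
api_token = API_TOKEN
```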
Getting Started
Export and Run a Model on a Physical Device
All models in our directory can be compiled and profiled on a hosted Qualcomm® device:
pip install "qai_hub_models[yolov7]"
python -m qai_hub_models.models.yolov7.export [--target-runtime ...] [--device ...] [--help]
Using Qualcomm® AI Hub Workbench, the export script will:
- Compile the model for the chosen device and target runtime (see: Compiling Models on AI Hub Workbench).
- If applicable, quantize the model (see: Quantization on AI Hub Workbench).
- Profile the compiled model on a real device in the cloud (see: Profiling Models on AI Hub Workbench).
- Run inference with sample input data on a real device in the cloud, and compare on-device model output with PyTorch output (see: Running Inference on AI Hub Workbench).
- Download the compiled model to disk.
End-To-End Model Demos
Most models in our directory contain CLI demos that run the model end-to-end:
pip install "qai_hub_models[yolov7]"
# Predict and draw bounding boxes on the provided image
python -m qai_hub_models.models.yolov7.demo [--image ...] [--eval-mode {fp,on-device}] [--help]
End-to-end demos:
- Preprocess human-readable input into model input
- Run model inference
- Postprocess model output to a human-readable format
Many end-to-end demos use AI Hub Workbench to run inference on a real cloud-hosted device (with --eval-mode on-device). All end-to-end demos can also run locally via PyTorch (with --eval-mode fp).
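To make the three demo stages concrete, here is a minimal numpy-only sketch of a typical postprocessing step for a detector like YOLOv7: scaling normalized bounding boxes back to pixel coordinates. The helper name is hypothetical and not part of the qai_hub_models package.

```python
import numpy as np

def boxes_to_pixels(boxes_norm: np.ndarray, img_w: int, img_h: int) -> np.ndarray:
    """Scale normalized (x1, y1, x2, y2) boxes in [0, 1] to pixel coordinates."""
    scale = np.array([img_w, img_h, img_w, img_h], dtype=np.float32)
    return boxes_norm * scale

# One detection covering the center of a 640x480 image
boxes = np.array([[0.25, 0.25, 0.75, 0.75]], dtype=np.float32)
print(boxes_to_pixels(boxes, 640, 480))  # [[160. 120. 480. 360.]]
```

The real demos perform model-specific postprocessing (confidence thresholding, non-maximum suppression, drawing), but the shape of the step is the same: raw tensor in, human-interpretable result out.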
Sample Applications
Native applications that can run our models (with pre- and post-processing) on physical devices are published in the AI Hub Apps repository.
Python applications are defined for all models (from qai_hub_models.models.<model_name> import App). These apps wrap model inference with pre- and post-processing steps written using torch & numpy. They are optimized to be easy-to-follow examples rather than to minimize prediction time.
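The wrapping pattern can be illustrated with a self-contained sketch. This is not the package's App API; the class, the preprocessing convention, and the dummy model below are all hypothetical stand-ins showing how pre- and post-processing bracket the model call.

```python
import numpy as np

class DemoApp:
    """Illustrative stand-in for a per-model App: wraps a model callable
    with pre- and post-processing. Names and conventions are hypothetical."""

    def __init__(self, model):
        self.model = model

    def preprocess(self, image: np.ndarray) -> np.ndarray:
        # HWC uint8 image -> NCHW float32 batch in [0, 1]
        x = image.astype(np.float32) / 255.0
        return np.transpose(x, (2, 0, 1))[None, ...]

    def postprocess(self, scores: np.ndarray) -> int:
        # Return the highest-scoring class index
        return int(np.argmax(scores))

    def predict(self, image: np.ndarray) -> int:
        return self.postprocess(self.model(self.preprocess(image)))

# Dummy "model": mean-pools each channel and treats the means as class scores
dummy_model = lambda x: x.mean(axis=(2, 3))
app = DemoApp(dummy_model)
image = np.zeros((8, 8, 3), dtype=np.uint8)
image[..., 1] = 255  # green channel dominates -> class index 1
print(app.predict(image))  # 1
```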
Model Support Data
On-Device Runtimes
| Runtime | Supported OS |
|---|---|
| Qualcomm AI Engine Direct | Android, Linux, Windows |
| LiteRT (TensorFlow Lite) | Android, Linux |
| ONNX | Android, Linux, Windows |
Device Hardware & Precision
| Device Compute Unit | Supported Precision |
|---|---|
| CPU | FP32, INT16, INT8 |
| GPU | FP32, FP16 |
| NPU (includes Hexagon DSP, HTP) | FP16*, INT16, INT8 |
*Some older chipsets do not support FP16 inference on their NPU.
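As background on the INT8 precision listed above, the sketch below shows textbook affine (scale/zero-point) quantization of an FP32 tensor to 8 bits. This is only an illustration of the numeric format; the actual quantization used for these models is performed by AI Hub Workbench during compilation and may differ in scheme and calibration.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine quantization: x ≈ scale * (q - zero_point), q in [0, 255]."""
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # representable range must include 0
    scale = (hi - lo) / 255.0 or 1.0     # avoid zero scale for constant input
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return scale * (q.astype(np.float32) - zero_point)

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q, s, zp = quantize_int8(x)
x_hat = dequantize(q, s, zp)  # each value recovered to within one scale step
```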
Chipsets
- Snapdragon 8 Elite, 8 Gen 3, 8 Gen 2, and 8 Gen 1 Mobile Platforms
- Snapdragon X Elite Compute Platform
- SA8255P, SA8295P, SA8650P, and SA8775P Automotive Platforms
- QCS6490, QCS8250, and QCS8550 IoT Platforms
- QCS8450 XR Platform
and many more.
Devices
- Samsung Galaxy S21, S22, S23, and S24 Series
- Xiaomi 12 and 13
- Snapdragon X Elite CRD (Compute Reference Device)
- Qualcomm RB3 Gen 2, RB5
and many more.
Model Directory
Computer Vision
Multimodal
Audio
Generative AI
Need help?
Slack: https://aihub.qualcomm.com/community/slack
GitHub Issues: https://github.com/quic/ai-hub-models/issues
Email: ai-hub-support@qti.qualcomm.com
LICENSE
Qualcomm® AI Hub Models is licensed under BSD-3. See the LICENSE file.
File details
Details for the file qai_hub_models-0.47.0-py3-none-any.whl.
File metadata
- Download URL: qai_hub_models-0.47.0-py3-none-any.whl
- Size: 3.6 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ee15a13c5b6cd154da2182d33b6b96bc663a187726e88749ea1d71e63008d0d5 |
| MD5 | aa2973520cd59127cdc91a93cfb49dae |
| BLAKE2b-256 | c5e841dab1820de45918a7d8c89e11c611a490daf98bf210544779625d2304c1 |