The official Embedl Hub Python client library.
Embedl Hub Python library
Optimize and deploy your model on any edge device with the Embedl Hub Python library:
- Quantize your model for lower latency and memory usage.
- Compile your model for execution on the CPU, GPU, NPU, or other AI accelerators on your target devices.
- Benchmark your model's latency and memory usage on real edge devices in the cloud.
The library logs your metrics, parameters, and benchmarks on the Embedl Hub website, allowing you to inspect, compare, and reproduce your results.
Create a free Embedl Hub account to get started with the embedl-hub library.
Installation
The simplest way to install embedl-hub is through pip:
pip install embedl-hub
Quickstart
We recommend using our end-to-end workflow CLI to quickly get started building your edge AI application:
Usage: embedl-hub [OPTIONS] COMMAND [ARGS]...
embedl-hub end-to-end Edge-AI workflow CLI
╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --version -V Print embedl-hub version and exit. │
│ --verbose -v INTEGER Increase verbosity (-v, -vv, -vvv). │
│ --install-completion Install completion for the current shell. │
│ --show-completion Show completion for the current shell, to copy it or customize the installation. │
│ --help Show this message and exit. │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ auth Store the API key for embedl-hub CLI. │
│ init Configure persistent CLI context. │
│ show Print active project/experiment IDs and names. │
│ compile Compile a model into a device-ready binary using Qualcomm AI Hub. │
│ Qualcomm AI Hub may return a zip file containing multiple files. │
│ quantize Quantize an ONNX model using Qualcomm AI Hub. │
│ Qualcomm AI Hub may return a zip file containing multiple files. │
│ benchmark Benchmark a compiled model on device and measure its performance. │
│ list-devices List all available target devices. │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
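The commands above compose into an end-to-end workflow. The sketch below is a plausible session using only the commands listed in the help output; the positional arguments shown are illustrative placeholders, not documented options, so consult `embedl-hub <command> --help` for the real signatures:

```shell
# Authenticate and set up a persistent project context
embedl-hub auth                          # store your API key
embedl-hub init                          # configure persistent CLI context
embedl-hub show                          # confirm active project/experiment IDs

# Pick a target device, then quantize, compile, and benchmark
embedl-hub list-devices                  # list all available target devices
embedl-hub quantize model.onnx           # placeholder input; see --help
embedl-hub compile model_quantized.onnx  # placeholder input; see --help
embedl-hub benchmark compiled_model.bin  # placeholder input; see --help
```

Results from the quantize, compile, and benchmark steps are logged to the Embedl Hub website, where they can be inspected and compared across runs.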
License
Copyright (C) 2025 Embedl AB
This software is subject to the Embedl Hub Software License Agreement.
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file embedl_hub-2025.12.0.dev0.tar.gz.
File metadata
- Download URL: embedl_hub-2025.12.0.dev0.tar.gz
- Upload date:
- Size: 80.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0869c8f9a104d830d43aa1ff6d42ae482a2fc7b592e5dcf630e813745030eb47 |
| MD5 | 421cec99962e3aace3dc96455c538c4f |
| BLAKE2b-256 | 66de50206f17adc8f9006cc02b4978649117eca0f937ed0fea8da1a5b61554c9 |
File details
Details for the file embedl_hub-2025.12.0.dev0-py3-none-any.whl.
File metadata
- Download URL: embedl_hub-2025.12.0.dev0-py3-none-any.whl
- Upload date:
- Size: 78.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | dae7fa41c7fcc0077edb30b41e6be8fd759e18d6df9d5e44a42b1bca241d31df |
| MD5 | 3c13de083ba5c02a76708cfde6608bf4 |
| BLAKE2b-256 | 11d7ab961ea503fda1f7f6eabaf456180bb2b4c13d82067ff6168bd6f40751b9 |
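To verify that a downloaded distribution matches the published digests above, you can compute its SHA-256 hash with Python's standard-library `hashlib`. This is a generic sketch, not part of the embedl-hub library itself; the filename in the usage comment is the wheel name from this page:

```python
import hashlib


def sha256_digest(path, chunk_size=8192):
    """Compute the hex SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Compare against the digest published above, e.g. for the wheel:
# expected = "dae7fa41c7fcc0077edb30b41e6be8fd759e18d6df9d5e44a42b1bca241d31df"
# assert sha256_digest("embedl_hub-2025.12.0.dev0-py3-none-any.whl") == expected
```

Streaming in fixed-size chunks keeps memory usage constant regardless of file size, which matters for larger wheels and source tarballs.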