
embedl-deploy

Python package to make AI models deployment-ready for any hardware.

Why embedl-deploy

PyTorch models are flexible, but edge hardware is not. Hardware toolchains may fail on unsupported operators, and they apply implicit transformations and fusions during compilation and quantization, which leads to deployment issues.

embedl-deploy eliminates these surprises by enforcing hardware and compiler constraints directly in PyTorch, so what you build, train, and debug is what actually runs on the device. It converts your models to be compatible with the hardware target, ensuring correct quantization and compilation.

Features

  • Hardware-accurate PyTorch Intermediate Representation (IR): Build models using a hardware-aware PyTorch intermediate representation that mirrors the behavior of the compiled artifact, e.g., fused convolutions. Unsupported operators and compatibility issues are surfaced early and resolved explicitly before compilation, within PyTorch.

  • Quantization: Supports post-training quantization (PTQ) and quantization-aware training (QAT). Fake quantization is applied in PyTorch with explicit quantization operator placement. PTQ methods are included, and QAT can be applied directly to the transformed and quantized models with no additional dependencies required.

  • Guaranteed deployable artifacts: Produce optimized compilation artifacts ready for deployment on the target device with predictable performance and accuracy.
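For intuition, "fake quantization" simulates integer quantization in floating point: values are rounded to the quantization grid and clamped to the integer range, then dequantized back to float, so the model trains and runs in float while seeing quantized values. A minimal pure-Python sketch of symmetric int8 fake quantization (a toy illustration, not embedl-deploy's implementation):

```python
def fake_quantize(x, scale, qmin=-128, qmax=127):
    """Simulate symmetric int8 quantization: round to the integer grid,
    clamp to the representable range, then dequantize back to float."""
    q = round(x / scale)
    q = max(qmin, min(qmax, q))  # clamp to the int8 range
    return q * scale             # dequantize

# With scale 0.1, values snap to multiples of 0.1 and saturate
# at qmin * scale / qmax * scale.
print(fake_quantize(0.34, 0.1))   # snaps to the nearest multiple of 0.1
print(fake_quantize(100.0, 0.1))  # saturates at qmax * scale
```

During QAT, the rounding error introduced here is exactly what the network learns to compensate for.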

Supported Backends

Backend          Status
NVIDIA TensorRT  Supported

Contact us for other backends.

Installation

pip install embedl-deploy

Note: if you use ONNX as an intermediate format, you may also need to install onnx and onnx-simplifier to export the model and compile it with TensorRT.


Quick Start

import torch
from embedl_deploy import transform
from embedl_deploy.quantize import quantize
from embedl_deploy.tensorrt import TENSORRT_PATTERNS
from torchvision.models import resnet18 as Model

# 1. Load a standard PyTorch model
model = Model().eval()
example_input = torch.randn(1, 3, 224, 224)

# 2. Transform — fuse and optimize for TensorRT in one call
res = transform(model, patterns=TENSORRT_PATTERNS)
print("Model:")
res.model.print_readable()  # prints the transformed graph and returns it as a string
print("Matches:\n" + "\n".join(str(match) for match in res.matches))


# 3. Quantize (PTQ)
def calibration_loop(model: torch.fx.GraphModule):
    model.eval()
    for _ in range(100):
        model(torch.randn(1, 3, 224, 224))


quantized_model = quantize(
    res.model, (example_input,), forward_loop=calibration_loop
)
quantized_model.eval()

# 4. Export as usual (dynamo exported models may have compilation issues)
torch.onnx.export(
    quantized_model, (example_input,), "model.onnx", dynamo=False
)

# 5. Quantization-aware training with a training loop
qat_model = quantized_model.train()
# Freeze BatchNorm, or apply other QAT utilities as needed
# train(qat_model)

# Compile
# -------
# Compilation can be done with TensorRT's trtexec tool, which can take the ONNX
# model and compile it for inference. The exported layer info and profile can
# be used for debugging, optimization and visualization.
#
# Note: the ONNX model might need to be simplified with onnx-simplifier for
# trtexec to compile it. Dynamo-exported models may have compilation issues,
# so it's recommended to export with dynamo=False.
#
# We are working on an ATen-based export path that should be more robust and
# support more models in the future.

# >> onnxsim model.onnx model.onnx
# >> trtexec \
#       --onnx=model.onnx \
#       --exportLayerInfo=layer_info.json \
#       --exportProfile=profile.json \
#       --profilingVerbosity=detailed

# More benchmarking scripts can be found in the examples/ directory
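As one example of using the exported profile for debugging, the JSON written by --exportProfile can be summarized with a short script. The per-entry field names below ("name", "averageMs") are assumptions about trtexec's output schema and may vary across TensorRT versions; the demo therefore runs on a synthetic profile rather than a real one:

```python
import json
import tempfile

def top_layers(profile_path, n=3):
    """Return the n layers with the highest average latency.

    Assumes each per-layer entry is a dict with "name" and "averageMs"
    keys (records without them, e.g. header entries, are skipped).
    """
    with open(profile_path) as f:
        entries = json.load(f)
    layers = [e for e in entries if isinstance(e, dict) and "averageMs" in e]
    return sorted(layers, key=lambda e: e["averageMs"], reverse=True)[:n]

# Demo on a synthetic profile (a stand-in for trtexec's profile.json).
synthetic = [
    {"count": 100},  # header-style record without per-layer timings
    {"name": "conv1 + relu1", "averageMs": 0.42},
    {"name": "fc", "averageMs": 0.05},
    {"name": "conv2 + relu2", "averageMs": 0.31},
]
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(synthetic, f)

for layer in top_layers(f.name):
    print(f'{layer["name"]}: {layer["averageMs"]:.2f} ms')
```

Sorting layers by average latency like this is a quick way to spot which fusions mattered and where the remaining hotspots are.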

Design Principles

  1. Patterns are the only abstraction. Every graph transformation — fusion, conversion, quantization — is a Pattern subclass. Adding a new backend (TIDL, QNN, …) means defining a new set of Pattern subclasses and fused modules with quantization information. The core plan/apply machinery stays the same.

  2. Plans are editable. get_transformation_plan() returns a plan the user can inspect and edit before applying. Set match.apply = False to skip specific matches. transform() is a convenience for the common case where you want everything applied.

  3. FX-graph-based. All graph analysis and surgery uses torch.fx. Models are traced once and manipulated as fx.GraphModule objects. Support for Aten graphs produced by torch.export.export is planned for the future.
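The plan/apply split behind principles 1 and 2 can be sketched with a self-contained toy. This is plain Python mimicking the shape of the API described above, not the embedl-deploy implementation: patterns propose matches over a "graph" (here just a list of op names), each match carries an apply flag the user may toggle, and applying the plan runs only the enabled matches:

```python
from dataclasses import dataclass, field

@dataclass
class Match:
    pattern: str
    target: str
    apply: bool = True  # user can set False to skip this match

@dataclass
class Plan:
    matches: list = field(default_factory=list)

class Pattern:
    """Toy base class: subclasses find matches and rewrite them."""
    def find(self, graph):
        raise NotImplementedError
    def rewrite(self, graph, match):
        raise NotImplementedError

class FuseConvRelu(Pattern):
    """Toy fusion: replace an adjacent ("conv", "relu") pair with one node."""
    def find(self, graph):
        return [Match("FuseConvRelu", f"nodes {i},{i + 1}")
                for i in range(len(graph) - 1)
                if graph[i] == "conv" and graph[i + 1] == "relu"]
    def rewrite(self, graph, match):
        i = int(match.target.split()[1].split(",")[0])
        return graph[:i] + ["conv_relu"] + graph[i + 2:]

def get_transformation_plan(graph, patterns):
    return Plan([m for p in patterns for m in p.find(graph)])

def apply_plan(graph, plan, patterns):
    by_name = {type(p).__name__: p for p in patterns}
    # Apply enabled matches back-to-front so earlier indices stay valid.
    for match in sorted(plan.matches, key=lambda m: m.target, reverse=True):
        if match.apply:
            graph = by_name[match.pattern].rewrite(graph, match)
    return graph

graph = ["conv", "relu", "fc", "conv", "relu"]
patterns = [FuseConvRelu()]
plan = get_transformation_plan(graph, patterns)
plan.matches[1].apply = False          # keep the second conv/relu pair unfused
print(apply_plan(graph, plan, patterns))
```

The point of the toy: the pattern set fully determines the backend, and the plan is ordinary data the user can edit before anything touches the graph.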

Support

License

Free for non-commercial use under the Embedl Community License (v1.0).

Please Contact us for commercial licensing.

Copyright (C) 2026 Embedl AB

Download files

Download the file for your platform.

Source Distribution

embedl_deploy-0.4.0.tar.gz (43.5 kB)

Uploaded Source

Built Distribution


embedl_deploy-0.4.0-py3-none-any.whl (49.6 kB)

Uploaded Python 3

File details

Details for the file embedl_deploy-0.4.0.tar.gz.

File metadata

  • Download URL: embedl_deploy-0.4.0.tar.gz
  • Upload date:
  • Size: 43.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

Hashes for embedl_deploy-0.4.0.tar.gz
  • SHA256: c9115eccfc22858f618888e9f6e0314f7b36e1b256dd56d3dbcb4f5c1ccd5bac
  • MD5: 34ccf36fdbd0a17d76b4c4ffe06cabca
  • BLAKE2b-256: 1b54d784c55fa5765a8b124667d42adcfd7c605fd23752142f1f4a9123411972


File details

Details for the file embedl_deploy-0.4.0-py3-none-any.whl.

File metadata

  • Download URL: embedl_deploy-0.4.0-py3-none-any.whl
  • Upload date:
  • Size: 49.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

Hashes for embedl_deploy-0.4.0-py3-none-any.whl
  • SHA256: 1a48f3889e2de21e3445f9ce07680490d93c5a3e280c4a3be3f40003cfe740b3
  • MD5: 47548ea7c834ed628ee516601bc940a1
  • BLAKE2b-256: 84055779f134bcc0627d3dbb287f1c6bd21c70c8cfaf9f8766f7498a0496687d

