edgefirst-tflite

Python API for TFLite inference with EdgeFirst extensions

Drop-in replacement Python API for TensorFlow Lite inference with EdgeFirst extensions for DMA-BUF zero-copy, NPU-accelerated camera preprocessing, and model metadata extraction.

The core inference API (Interpreter, get_input_details, get_output_details, set_tensor, invoke, get_output_tensor) is compatible with the standard tflite_runtime.interpreter.Interpreter, so existing TFLite Python code works with minimal changes. On top of the standard API, edgefirst-tflite exposes NXP i.MX platform extensions for DMA-BUF zero-copy and CameraAdaptor NPU preprocessing that require only a few extra lines of code.

Built on the edgefirst-tflite Rust crate with native performance via PyO3.

Installation

pip install edgefirst-tflite

Requires Python 3.9+ and NumPy 1.24+. The package ships as a native wheel with the TFLite runtime loaded dynamically at startup — no separate TFLite installation is needed as long as libtensorflowlite_c.so is available on the system library path.

To specify a custom library path:

interp = Interpreter(model_path="model.tflite", library_path="/usr/lib/libtensorflowlite_c.so")

Quick Start

import numpy as np
from edgefirst_tflite import Interpreter

# Load model and inspect tensors
interp = Interpreter(model_path="model.tflite", num_threads=4)
print(interp.get_input_details())
print(interp.get_output_details())

# Run inference
input_data = np.array([[1.0, 2.0, 3.0, 4.0]], dtype=np.float32)
interp.set_tensor(0, input_data)
interp.invoke()
output = interp.get_output_tensor(0)
print(output)

TFLite API Compatibility

The Interpreter class is designed as a drop-in replacement for tflite_runtime.interpreter.Interpreter. The core inference path is compatible:

Method Description
Interpreter(model_path=, model_content=, num_threads=, experimental_delegates=) Load a model
allocate_tensors() Allocate or re-allocate tensor buffers (required after resize_tensor_input)
resize_tensor_input(input_index, tensor_size) Resize an input tensor
invoke() Run inference
get_input_details() / get_output_details() Tensor metadata dicts
get_input_tensor(index) / get_output_tensor(index) Copy tensor data to NumPy
set_tensor(input_index, value) Copy NumPy data into an input tensor
tensor(index) Zero-copy NumPy view (callable returning array)

Note on tensor indices: get_input_tensor, get_output_tensor, and set_tensor use 0-based indices relative to the input or output tensor lists. The "index" field returned by get_input_details() / get_output_details() matches these relative indices.
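As a sketch of this convention (the model path, shapes, and the resize target below are illustrative, not part of the package), the "index" field from the details dicts can be passed straight back into the accessor methods:

```python
import numpy as np
from edgefirst_tflite import Interpreter

interp = Interpreter(model_path="model.tflite")

# Feed each input using the relative "index" from get_input_details()
for detail in interp.get_input_details():
    data = np.zeros(detail["shape"], dtype=detail["dtype"])
    interp.set_tensor(detail["index"], data)
interp.invoke()

# Read each output the same way
for detail in interp.get_output_details():
    print(detail["index"], interp.get_output_tensor(detail["index"]).shape)

# After resizing an input, re-allocate before the next invoke()
interp.resize_tensor_input(0, [1, 8])
interp.allocate_tensors()
```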

Hardware Acceleration with Delegates

Delegates provide hardware acceleration (e.g., NPU offload via VxDelegate on NXP i.MX platforms):

from edgefirst_tflite import Interpreter, load_delegate

delegate = load_delegate("libvx_delegate.so", options={
    "cache_file_path": "/tmp/vx_cache",
})

interp = Interpreter(
    model_path="model.tflite",
    experimental_delegates=[delegate],
)
interp.invoke()
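On hosts without the NPU library, load_delegate fails; assuming it raises the package's documented LibraryError or DelegateError (an assumption — check your version's behavior), a graceful CPU fallback can be sketched as:

```python
from edgefirst_tflite import Interpreter, load_delegate, DelegateError, LibraryError

delegates = []
try:
    # Path is platform-specific, e.g. /usr/lib/libvx_delegate.so on i.MX 8M Plus
    delegates.append(load_delegate("libvx_delegate.so"))
except (LibraryError, DelegateError) as e:
    print(f"NPU delegate unavailable, falling back to CPU: {e}")

# With an empty delegate list the interpreter runs on the CPU
interp = Interpreter(model_path="model.tflite", experimental_delegates=delegates)
```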

EdgeFirst Extensions

DMA-BUF Zero-Copy Inference

DMA-BUF enables zero-copy data transfer between camera, CPU, and NPU by binding DMA-BUF file descriptors directly to TFLite tensors. This eliminates memory copies in the inference pipeline.

Import mode — register an externally-allocated DMA-BUF (e.g., from V4L2 camera capture):

from edgefirst_tflite import Interpreter, load_delegate

delegate = load_delegate("libvx_delegate.so")
interp = Interpreter(model_path="model.tflite", experimental_delegates=[delegate])

dmabuf = interp.dmabuf()
if dmabuf and dmabuf.is_supported():
    # Register a DMA-BUF fd from the camera driver
    handle = dmabuf.register(camera_fd, buffer_size, sync_mode="none")
    dmabuf.bind_to_tensor(handle, tensor_index=0)

    # Run inference — data flows camera → NPU with zero CPU copies
    interp.invoke()
    output = interp.get_output_tensor(0)

    # Cleanup
    dmabuf.unregister(handle)

Export mode — let the delegate allocate DMA-BUF buffers:

dmabuf = interp.dmabuf()
handle, desc = dmabuf.request(tensor_index=0, ownership="delegate")
print(f"Allocated buffer: fd={desc['fd']}, size={desc['size']}")

dmabuf.bind_to_tensor(handle, tensor_index=0)
interp.invoke()

dmabuf.release(handle)

Buffer cycling for multi-buffer pipelines (e.g., triple-buffering with V4L2):

handles = [dmabuf.register(fd, size) for fd, size in camera_buffers]

for frame in camera_stream:
    dmabuf.set_active(tensor_index=0, handle=handles[frame.index])
    interp.invoke()
    result = interp.get_output_tensor(0)

Cache synchronization for coherent CPU access:

dmabuf.begin_cpu_access(handle, mode="read")
# ... read tensor data on CPU ...
dmabuf.end_cpu_access(handle, mode="read")

dmabuf.sync_for_device(handle)  # Before NPU access
dmabuf.sync_for_cpu(handle)     # Before CPU access

CameraAdaptor — NPU-Accelerated Preprocessing

CameraAdaptor offloads camera format conversion (e.g., RGBA → RGB, YUV → RGB) to the NPU, eliminating CPU-side preprocessing. The conversion is injected directly into the TIM-VX inference graph.

delegate = load_delegate("libvx_delegate.so")

# Configure BEFORE building the interpreter — CameraAdaptor modifies
# the delegate's graph compilation
adaptor = delegate.camera_adaptor
if adaptor:
    # Simple format conversion: camera sends RGBA, model expects RGB
    adaptor.set_format(tensor_index=0, format="rgba")

Format conversion with resize and letterboxing:

adaptor.set_format_ex(
    tensor_index=0,
    format="rgba",
    width=1920,
    height=1080,
    letterbox=True,
    letterbox_color=0,
)

Explicit camera and model format specification:

adaptor.set_formats(
    tensor_index=0,
    camera_format="rgba",
    model_format="rgb",
)

Query format capabilities:

adaptor.is_supported("rgba")          # True
adaptor.input_channels("rgba")        # 4
adaptor.output_channels("rgba")       # 3
adaptor.fourcc("rgba")                # "RGBP" (V4L2 FourCC)
adaptor.from_fourcc("NV12")           # "nv12"

Model Metadata

Extract metadata embedded in TFLite model files:

interp = Interpreter(model_path="model.tflite")
meta = interp.get_metadata()
if meta:
    print(f"Model: {meta.name}")
    print(f"Version: {meta.version}")
    print(f"Author: {meta.author}")
    print(f"License: {meta.license}")
    print(f"Description: {meta.description}")

Zero-Copy Tensor Views

The tensor() method returns a callable that produces a NumPy array sharing memory with the TFLite C-allocated buffer:

# Get a zero-copy accessor for output tensor 1
# (index = input_count + output_offset)
accessor = interp.tensor(interp.input_count + 0)

interp.invoke()
view = accessor()  # Zero-copy NumPy view of the output
print(view)        # Reflects the latest inference results

interp.invoke()
view = accessor()  # Updated in-place — no copy needed

The accessor is invalidated by allocate_tensors() or resize_tensor_input(). Call tensor() again to get a fresh one.

YOLOv8 Example

A complete YOLOv8 detection and segmentation example is included at examples/yolov8/python/yolov8.py, demonstrating the full pipeline with edgefirst-tflite + edgefirst-hal:

# Detection on i.MX8MP with VxDelegate
python yolov8.py yolov8n-int8.tflite zidane.jpg \
    --delegate /usr/lib/libvx_delegate.so --warmup 3 --iters 10 --save

# Segmentation on i.MX95 with Neutron
python yolov8.py yolov8n-seg-int8.imx95.tflite zidane.jpg \
    --delegate /usr/lib/libneutron_delegate.so --warmup 3 --iters 10 --save

The example supports --warmup N and --iters N for benchmarking with min/max/avg/p95/p99 statistics. Image loading and preprocessing run once; only inference, decoding, and rendering are timed per iteration.
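The summary statistics can be reproduced with a few lines of Python. The helper below is a sketch (the bundled example's exact implementation may differ, and percentile definitions vary slightly between tools; this uses nearest-rank):

```python
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = round(pct / 100 * len(ordered)) - 1
    return ordered[max(0, min(len(ordered) - 1, rank))]

def summarize(timings_ms):
    """min/max/avg/p95/p99 summary over per-iteration timings in ms."""
    return {
        "min": min(timings_ms),
        "max": max(timings_ms),
        "avg": statistics.mean(timings_ms),
        "p95": percentile(timings_ms, 95),
        "p99": percentile(timings_ms, 99),
    }

# Timing loop sketch (interp would be an edgefirst_tflite Interpreter):
#   for _ in range(warmup):
#       interp.invoke()
#   timings = []
#   for _ in range(iters):
#       t0 = time.perf_counter()
#       interp.invoke()
#       timings.append((time.perf_counter() - t0) * 1000.0)
#   print(summarize(timings))
```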

Performance

Benchmarked with YOLOv8n int8 models on zidane.jpg (1280x720). Both Rust and Python use the same underlying TFLite C API and NPU delegates — the Python overhead from PyO3 FFI is negligible.

i.MX 8M Plus (VxDelegate NPU)

Test Rust Python Overhead
Detection (infer avg) 69.9ms 69.6ms ~0%
Segmentation (infer avg) 84.2ms 83.8ms ~0%
Detection CPU-only 482.5ms 484.9ms ~0.5%

VxDelegate NPU speedup: ~7x over CPU. DMA-BUF zero-copy and CameraAdaptor RGBA→RGB conversion active.

i.MX 95 (Neutron NPU)

Test Rust Python Overhead
Detection (infer avg) 46.2ms 46.4ms ~0.4%
Segmentation (infer avg) 49.9ms 49.3ms ~0%
Detection CPU-only 266.6ms 266.1ms ~0%

Neutron NPU speedup: ~5.8x over CPU. No first-run compilation overhead (Neutron models are pre-compiled).

Key Observations

  • Python overhead is negligible — inference time is dominated by TFLite/NPU execution, not the Python↔Rust FFI boundary
  • Same detections — both Rust and Python produce identical results (2-3 objects: persons + tie in the reference image)
  • 10-iteration benchmarks with 3 warmup iterations, --save enabled (includes overlay rendering)

Error Handling

from edgefirst_tflite import (
    TfLiteError,           # Base exception
    LibraryError,          # TFLite library not found
    DelegateError,         # Delegate error status
    InvalidArgumentError,  # Bad arguments (index out of range, etc.)
)

try:
    interp = Interpreter(model_path="missing.tflite")
except InvalidArgumentError as e:
    print(f"Bad argument: {e}")
except LibraryError as e:
    print(f"Library not found: {e}")
except TfLiteError as e:
    print(f"TFLite error: {e}")

Platform Support

Platform Architecture Delegate DMA-BUF CameraAdaptor
i.MX 8M Plus aarch64 VxDelegate Yes Yes
i.MX 95 aarch64 Neutron No No
Linux x86_64 CPU only No No
macOS arm64 CPU only No No
Windows x86_64 CPU only No No

License

Apache-2.0. See LICENSE.
