

yowo

Production YOLO inference and export — hardware-aware, multi-backend, edge-ready.

yowo implements native YOLO11 and YOLO26 architectures for inference and export, adding what production deployments need: automatic hardware detection, transparent backend selection, graceful degradation, and stream resilience.


Install

# Core (PyTorch backend, CPU inference)
pip install yowo

# ONNX Runtime — CPU inference (ARM, x86)
pip install yowo[onnx]

# ONNX Runtime — CUDA inference (NVIDIA GPU)
pip install yowo[onnx-gpu]

# OpenVINO — Intel CPU/iGPU
pip install yowo[openvino]

# Everything (ONNX GPU + OpenVINO)
pip install yowo[all]

# TensorRT — requires Linux + NVIDIA GPU (manual step)
pip install "tensorrt>=10.0" --extra-index-url https://pypi.nvidia.com

Requirements: Python >=3.11, Linux (production) / macOS (development)


Quick Start

CLI

# Auto-detect hardware and run inference
yowo detect image.jpg

# Use a specific model
yowo detect video.mp4 --model yolo26n

# Use a local weights file (skips download)
yowo detect image.jpg --model yolo26n --weights /path/to/YOLO26.pt

# Auto-tune config for your device and source type
yowo detect video.mp4 --model yolo26n --preset

# RTSP stream
yowo detect rtsp://camera-ip:554/stream --model yolo26n --confidence 0.4

# Save detections to JSON
yowo detect ./images/ --model yolo11s --output detections.json

# Show hardware and installed backends
yowo info

# List all registered model variants
yowo models

Python API

from yowo import InferenceEngine, open_source

# Minimal: auto-select everything (defaults to YOLO26 Nano)
with InferenceEngine(confidence_threshold=0.35) as engine:
    for detection in engine.stream(open_source("image.jpg")):
        for box in detection.boxes:
            print(f"{box.class_name}: {box.confidence:.2f} @ {box.as_xyxy()}")

Example

yowo detect "input.jpg" \
  --model yolo26n \
  --weights "yolo26n.pt" \
  --backend pytorch \
  --confidence 0.25 \
  --output detections.json

Each processed frame produces a JSON record like this:

{
  "frame_index": 0,
  "source_id": "input.jpg",
  "inference_time_ms": 582.2,
  "backend": "pytorch",
  "model": "yolo26n",
  "boxes": [
    {
      "x1": 387.0, "y1": 422.0, "x2": 622.0, "y2": 537.0,
      "confidence": 0.888,
      "class_id": 2,
      "class_name": "car"
    },
    ...
  ]
}
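Records in this shape are easy to post-process with the standard library. A minimal sketch (the `summarize` helper, and the assumption that `detections.json` holds a list of such records, are illustrative rather than part of yowo's API):

```python
from collections import Counter

def summarize(records, min_confidence=0.5):
    """Count detected classes above a confidence threshold.

    Assumes each record looks like the per-frame example above.
    """
    counts = Counter()
    for record in records:
        for box in record["boxes"]:
            if box["confidence"] >= min_confidence:
                counts[box["class_name"]] += 1
    return counts

# In practice: records = json.loads(Path("detections.json").read_text())
records = [{
    "frame_index": 0,
    "source_id": "input.jpg",
    "boxes": [
        {"x1": 387.0, "y1": 422.0, "x2": 622.0, "y2": 537.0,
         "confidence": 0.888, "class_id": 2, "class_name": "car"},
        {"x1": 10.0, "y1": 20.0, "x2": 50.0, "y2": 60.0,
         "confidence": 0.31, "class_id": 0, "class_name": "person"},
    ],
}]
print(summarize(records))  # Counter({'car': 1})
```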

Models

| Name | Alias | Notes |
|------|-------|-------|
| yolo11n/s/m/l/x | YOLO11 | Stable, best production baseline |
| yolo26n/s/m/l/x | YOLO26 | NMS-free, best CPU and INT8 speed |

Weights are downloaded automatically to ~/.cache/yowo/weights/ on first use.


Backends

yowo selects the best available backend automatically; you can override it with the --backend flag (CLI) or the backend= parameter (Python API).

| Backend | Format | When used |
|---------|--------|-----------|
| TensorRT | .engine | NVIDIA GPU + TensorRT installed |
| ONNX Runtime (CUDA) | .onnx | NVIDIA GPU + onnxruntime-gpu |
| ONNX Runtime (CoreML) | .onnx | macOS + Apple Silicon (auto-detected) |
| OpenVINO | _openvino_model/ | Intel CPU/iGPU + openvino |
| ONNX Runtime (CPU) | .onnx | Any CPU + onnxruntime |
| PyTorch | .pt | Universal fallback |

Priority chain: TensorRT → ONNX (CUDA) → CoreML → OpenVINO → ONNX (CPU) → PyTorch

Apple Silicon: CoreML EP is auto-detected and offloads inference to the Neural Engine — 4-5x faster than PyTorch CPU. No configuration needed.

If a backend fails to load, yowo falls back to the next in chain and logs a warning — it never crashes.


Detect

Single image

from yowo import InferenceEngine, ModelFamily, ModelSize, open_source

with InferenceEngine(
    model_family=ModelFamily.YOLO11,
    model_size=ModelSize.SMALL,
    confidence_threshold=0.3,
) as engine:
    src = open_source("photo.jpg")
    for detection in engine.stream(src):
        print(f"{detection.num_boxes} objects in {detection.inference_time_ms:.1f}ms")
        for box in detection.boxes:
            print(f"  {box.class_name}: {box.confidence:.2f}")

Video file

# prefetch=True (default) overlaps decode and inference for offline sources
with InferenceEngine(prefetch=True, batch_size=4) as engine:
    src = open_source("recording.mp4")
    for detection in engine.stream(src):
        # detection.frame.frame_index is the video frame number
        pass

RTSP stream (auto-reconnect, live pipeline)

from yowo.types import FrameDropPolicy

# Live source → _stream_live path is selected automatically
# ThreadedFrameReader decouples network I/O from inference
with InferenceEngine(
    frame_drop_policy=FrameDropPolicy.LATEST,  # always-current frame
    max_queue_size=2,
) as engine:
    src = open_source("rtsp://192.168.1.10:554/live")
    for detection in engine.stream(src):
        # Reconnects automatically on disconnect
        print(f"lag: {detection.frame.frame_index}")

FrameDropPolicy controls what happens when inference is slower than frame delivery:

| Policy | Behaviour | Use case |
|--------|-----------|----------|
| NONE | Block until queue has space | Offline analysis — no frame skipping |
| LATEST | Evict oldest, insert newest | Real-time display — always-current view |
| SKIP_OLDEST | Pop back of queue | Ordered processing with bounded latency |
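The difference between NONE and LATEST can be modelled with a bounded deque. A toy sketch of the trade-off, not yowo's internals (`offer` and its signature are invented):

```python
from collections import deque

def offer(queue, frame, policy, maxsize=2):
    """Toy model of the NONE vs LATEST policies above."""
    if len(queue) < maxsize:
        queue.append(frame)
        return True
    if policy == "LATEST":
        queue.popleft()      # evict the oldest queued frame
        queue.append(frame)  # the consumer always sees a current view
        return True
    return False  # NONE: a real pipeline blocks here until space frees up

live = deque()
for i in range(5):
    offer(live, i, policy="LATEST")
print(list(live))  # [3, 4] -- only the newest frames survive

offline = deque()
accepted = [offer(offline, i, policy="NONE") for i in range(5)]
print(accepted)  # [True, True, False, False, False]
```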

Batch of frames

from yowo import InferenceEngine
from yowo.types import Frame
import cv2

engine = InferenceEngine(batch_size=8)
engine.load()

frames = [
    Frame(pixels=cv2.imread(f"frame_{i:04d}.jpg"), source_id="batch", frame_index=i)
    for i in range(8)
]
detections = engine.detect(frames)
engine.close()

Free-threaded Python (GIL=OFF)

On Python 3.13+ with the free-threaded build (python3.13t), inference workers run truly in parallel. pipeline_workers is auto-detected:

from yowo.types import is_free_threaded

print(is_free_threaded())  # True on python3.13t

# pipeline_workers=2 is set automatically on GIL=OFF builds
with InferenceEngine(prefetch=True) as engine:
    for detection in engine.stream(open_source("video.mp4")):
        ...
# ~1.5x throughput vs GIL Python on CPU inference (YOLO26n: 39 → 58 FPS)
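A probe like `is_free_threaded` can be sketched on top of the interpreter's own introspection; this is an assumption about the check, not yowo's verified source. Python 3.13+ exposes `sys._is_gil_enabled()` on builds that support free threading:

```python
import sys

def is_free_threaded() -> bool:
    """Return True when running on a free-threaded (GIL=OFF) build.

    Sketch only: on pre-3.13 interpreters the attribute is absent,
    so we conservatively report False.
    """
    gil_check = getattr(sys, "_is_gil_enabled", None)
    if gil_check is None:
        return False           # GIL build, or Python < 3.13
    return not gil_check()     # GIL disabled at runtime => free-threaded

print(is_free_threaded())
```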

Using InferenceConfig

from yowo import InferenceConfig, InferenceEngine, ModelFamily, ModelSize

config = InferenceConfig(
    model_family=ModelFamily.YOLO26,
    model_size=ModelSize.NANO,
    confidence_threshold=0.35,
    batch_size=4,
)
with InferenceEngine(config) as engine:
    ...

Preset config (auto-tuning)

Auto-select pipeline knobs (batch size, caching, prefetch, frame drop policy) based on detected hardware and source type:

from yowo import classify_source, preset_config
from yowo.hardware import get_hardware_profile

hw = get_hardware_profile()
source_cat = classify_source("rtsp://192.168.1.10/stream")
config = preset_config(hw, source_cat)

with InferenceEngine(config) as engine:
    ...

CLI equivalent — --preset auto-tunes, explicit flags override preset values:

yowo detect video.mp4 --preset                    # fully automatic
yowo detect video.mp4 --preset --batch 8           # override batch size
yowo detect rtsp://cam/stream --preset --confidence 0.4

Override backend and precision

from yowo import BackendType, Precision

with InferenceEngine(backend=BackendType.ONNX, precision=Precision.FP16) as engine:
    ...

Feature map cache (sequential video inference)

Skip backbone + neck on similar consecutive frames — 60–85% compute savings for slow-moving scenes.

# In-memory cache (default)
with InferenceEngine(cache=True) as engine:
    for detection in engine.stream(open_source("video.mp4")):
        ...

# mmap-backed cache (OS manages memory pressure)
from pathlib import Path
with InferenceEngine(cache_dir=Path("/tmp/yowo-cache")) as engine:
    for detection in engine.stream(open_source("rtsp://camera/stream")):
        ...

KV cache (attention state across frames)

Reuse Attention K,V tensors and skip C2PSA/C3k2PSA blocks on similar frames. Best for PyTorch CPU/MPS; no benefit on ONNX runtimes.

with InferenceEngine(kv_cache=True) as engine:
    for detection in engine.stream(open_source("video.mp4")):
        ...

Export a KV-cache-enabled ONNX model (K,V as explicit I/O for stateless runtimes):

from pathlib import Path
from yowo import export_model, ExportFormat, ModelSpec, ModelFamily, ModelSize, Precision

spec = ModelSpec(ModelFamily.YOLO26, ModelSize.NANO)
meta = export_model(
    spec, ExportFormat.ONNX, output_dir=Path("./exported/"),
    kv_cache=True,
)

Export

Export .pt weights to an optimized format for your target hardware.

CLI

# Export to ONNX (FP16) — downloads weights automatically
yowo export yolo11n --format onnx --precision fp16

# Export using a local weights file (skips download)
yowo export yolo26n --weights /path/to/YOLO26.pt --format onnx --precision fp32

# Export to TensorRT engine (FP16)
yowo export yolo26s --format tensorrt --precision fp16 --output-dir ./engines/

# Export to ONNX with INT8 quantization (requires calibration images)
yowo export yolo11m --format onnx --precision int8 --calibration-data ./cal_images/

# Export with dynamic batch support
yowo export yolo11n --format onnx --dynamic-batch --imgsz 1280

Python API

from yowo import export_model, ModelSpec, ModelFamily, ModelSize, ExportFormat, Precision
from pathlib import Path

meta = export_model(
    ModelSpec(ModelFamily.YOLO26, ModelSize.NANO),
    ExportFormat.ONNX,
    output_dir=Path("./exported/"),
    precision=Precision.FP16,
)

print(meta.file_path)          # Path to exported model file
print(meta.file_size_bytes)    # Size in bytes
print(meta.export_duration_sec)  # How long it took

Each export produces a .yowo.json sidecar file recording the model family, precision, export date, and hardware used.
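A sidecar writer can be sketched in a few lines; the field names below are a guess at the schema, since the text above only lists what the sidecar records (family, precision, export date, hardware):

```python
import json
import platform
from datetime import datetime, timezone
from pathlib import Path

def write_sidecar(model_path, family, precision):
    """Write a .yowo.json sidecar next to an exported model file.

    Field names are illustrative; only the recorded categories
    come from the documentation above.
    """
    sidecar = Path(str(model_path) + ".yowo.json")
    payload = {
        "model_family": family,
        "precision": precision,
        "export_date": datetime.now(timezone.utc).isoformat(),
        "hardware": platform.machine(),
    }
    sidecar.write_text(json.dumps(payload, indent=2))
    return sidecar
```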

INT8 quantization

INT8 requires a calibration dataset of at least 300 representative images.

yowo export yolo26n --format tensorrt --precision int8 \
    --calibration-data /datasets/coco_val/images/

Or in Python:

from pathlib import Path
from yowo import export_model, ExportFormat, ModelSpec, ModelFamily, ModelSize, Precision

spec = ModelSpec(ModelFamily.YOLO26, ModelSize.NANO)
meta = export_model(
    spec, ExportFormat.TENSORRT, Path("./engines/"),
    precision=Precision.INT8,
    calibration_data="/datasets/coco_val/images/",
)
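Because an undersized calibration set silently hurts INT8 accuracy, it is worth validating the dataset before starting a long export. A small standalone helper (the function name and suffix list are illustrative; the 300-image floor comes from the note above):

```python
from pathlib import Path

IMAGE_SUFFIXES = {".jpg", ".jpeg", ".png", ".bmp"}
MIN_CALIBRATION_IMAGES = 300  # floor stated in the INT8 note above

def check_calibration_dir(path):
    """Collect calibration images and fail fast when too few exist."""
    root = Path(path)
    images = [p for p in root.rglob("*")
              if p.suffix.lower() in IMAGE_SUFFIXES]
    if len(images) < MIN_CALIBRATION_IMAGES:
        raise ValueError(
            f"INT8 calibration needs >= {MIN_CALIBRATION_IMAGES} images, "
            f"found {len(images)} in {root}"
        )
    return images
```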

Hardware Info

yowo info

Output example:

=== Hardware ===
CPU: Device(type=cpu, name=AMD EPYC 7763, cpu_arch=x86_64)
GPU 0: Device(type=cuda, index=0, name=NVIDIA A100, arch=ampere)
CPU features: avx2

=== Libraries ===
torch:        2.3.0+cu121
cuda:         12.1
tensorrt:     10.0.1
onnxruntime:  1.18.0 (CUDA)
openvino:     not installed

Configuration

Via Python

from yowo import InferenceConfig, InferenceEngine

# Option A: Pass config object
config = InferenceConfig(
    confidence_threshold=0.35,
    iou_threshold=0.5,
    batch_size=4,
)
with InferenceEngine(config) as engine:
    ...

# Option B: Pass kwargs directly
with InferenceEngine(confidence_threshold=0.35, batch_size=4) as engine:
    ...

Via YAML file

# yowo.yaml
confidence_threshold: 0.35
iou_threshold: 0.50
batch_size: 4

from yowo import load_config, InferenceEngine

config = load_config("yowo.yaml")
with InferenceEngine(config) as engine:
    ...

Via environment variables

export YOWO_CONFIDENCE=0.35
export YOWO_BATCH_SIZE=4
export YOWO_IOU=0.5

Precedence: environment variables > YAML file > defaults.
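That precedence can be sketched as a three-layer merge. The environment variable names come from the examples above; the merge function itself and the default values are assumptions, not yowo's loader:

```python
import os

DEFAULTS = {"confidence_threshold": 0.25, "iou_threshold": 0.45, "batch_size": 1}

# Env var -> (config key, parser); variable names taken from the examples above.
ENV_MAP = {
    "YOWO_CONFIDENCE": ("confidence_threshold", float),
    "YOWO_IOU": ("iou_threshold", float),
    "YOWO_BATCH_SIZE": ("batch_size", int),
}

def resolve_config(yaml_values=None, environ=None):
    """Merge settings with the precedence above: env > YAML > defaults."""
    environ = os.environ if environ is None else environ
    config = dict(DEFAULTS)            # lowest precedence
    config.update(yaml_values or {})   # YAML overrides defaults
    for var, (key, parse) in ENV_MAP.items():
        if var in environ:             # env vars override everything
            config[key] = parse(environ[var])
    return config

cfg = resolve_config(
    yaml_values={"batch_size": 4},
    environ={"YOWO_CONFIDENCE": "0.35"},
)
print(cfg)  # {'confidence_threshold': 0.35, 'iou_threshold': 0.45, 'batch_size': 4}
```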


Error Handling

All exceptions inherit from yowo.YowoError.

from yowo import (
    YowoError,
    DependencyError,   # SDK not installed
    BackendLoadError,  # Model file corrupt / wrong format
    InferenceError,    # Runtime inference failure
    SourceError,       # Input stream unreachable
    ConfigError,       # Invalid configuration values
)

try:
    with InferenceEngine() as engine:
        ...
except DependencyError as e:
    print(f"Missing package: {e.package}")
    print(f"Install with: {e.install_cmd}")
except BackendLoadError as e:
    print(f"Backend failed: {e}")
    # Engine already tried all fallback backends before raising
except YowoError as e:
    print(f"yowo error: {e}")

Platform Notes

| Platform | Backend | Notes |
|----------|---------|-------|
| NVIDIA GPU (server) | TensorRT or ONNX (CUDA) | Install yowo[onnx-gpu]; TensorRT is manual |
| NVIDIA Jetson | TensorRT | JetPack >= 5.0; CUDA and TensorRT pre-installed |
| Apple Silicon (M1–M4) | ONNX (CoreML) | Install yowo[onnx]; auto-detects Neural Engine, 4-5x vs CPU |
| Apple Silicon (MPS) | PyTorch MPS | GPU via --device mps; 1.3x vs ultralytics |
| Intel CPU/iGPU | OpenVINO | Install yowo[openvino] |
| x86 CPU (Linux) | ONNX | Install yowo[onnx]; AVX2 gives ~2x speedup |
| ARM CPU (Raspberry Pi, Graviton) | ONNX | Install yowo[onnx] |

Architecture

| Module | Path | Responsibility |
|--------|------|----------------|
| core | src/yowo/ | InferenceEngine, public API surface — engine.py, config.py, types.py, errors.py |
| arch | src/yowo/arch/ | Native YOLO11 and YOLO26 PyTorch — backbone, FPN-PAN neck, detection head, scaling, weight loading |
| backends | src/yowo/backends/ | Inference backend implementations (TensorRT, ONNX, OpenVINO, PyTorch) and automatic priority-chain selection |
| cli | src/yowo/cli/ | Click-based CLI — detect, export, info, models commands |
| export | src/yowo/export/ | Export .pt weights to ONNX / TensorRT / OpenVINO with calibration, metadata sidecar, and output validation |
| hardware | src/yowo/hardware/ | One-time hardware detection (GPU, CPU arch, installed libs), cached for session lifetime |
| io | src/yowo/io/ | Frame sources (image, video, RTSP, directory), batch preprocessing, output sinks |
| models | src/yowo/models/ | Model family / size registry, weight download, and ~/.cache/yowo/weights/ cache management |
| postprocess | src/yowo/postprocess/ | Decode raw backend tensors into Detection objects; NMS for backends that return raw proposals |

Development

# Clone and install with dev deps
git clone https://github.com/your-org/yowo
cd yowo
uv sync --group dev

# Quality gates (run before every commit)
uv run ruff check src/ tests/
uv run pyright src/yowo/
uv run pytest tests/unit/ --cov=yowo --cov-report=term-missing

# CLI from source
uv run yowo info

Architecture and module contracts are documented in the repository.

Experiments

| Report | Summary |
|--------|---------|
| Vehicle Detection Benchmark | YOLO11s vs YOLO26m: PyTorch FP32 vs ONNX FP32/FP16/INT8 on Apple M4 Pro. YOLO11s ONNX FP16 achieves 18.1 FPS (2.62x PyTorch). YOLO26m ONNX FP32 achieves 6.9 FPS. |
| Native Architecture Inference Optimization | DFL buffer, in-place sigmoid, stride flag, and anchor cache applied to arch/ across all 10 variants. YOLO26 family 10-17% faster than the ultralytics baseline; YOLO11 family 1-4% faster. Box IoU vs ultralytics: 0.967-0.995. 9/10 variants faster, avg 1.07x. |
| ONNX + CoreML EP + MPS Optimization | CoreML EP auto-detection for Apple Neural Engine: 4.36x avg faster than PyTorch across all 10 variants (nano 140-188 FPS, XL 27-29 FPS). MPS (Metal GPU): 1.32x avg faster than ultralytics. KV cache analysis: +12% on CPU PyTorch (block cache), negligible on GPU/CoreML. |
| Phase 3 Source-Aware Pipeline | Source-aware dispatch (_stream_live / _stream_pipeline), ThreadedFrameReader, FrameDropPolicy, pre-allocated I/O buffers. CoreML 3.5-3.75x faster than PyTorch on offline video. Free-threaded Python 3.13t: pipeline_workers=2 auto-selected → 58.4 FPS vs 39.3 FPS (1.49x) on YOLO26n. |

License

Apache-2.0 — see LICENSE.
