
🚀 acmenra-cv

Production-ready computer vision utilities for ADAS, multi-object tracking, and embedded vision systems


pip install acmenra-cv

📦 Overview

acmenra-cv is a high-performance, type-safe computer vision library engineered for real-time applications on resource-constrained embedded systems (Raspberry Pi 5, Jetson, etc.). Built following Clean Architecture principles, it provides three cohesive modules:

| Module | Purpose | Key Features |
|--------|---------|--------------|
| YOLOUtils | Spatial primitives for YOLO outputs | Normalized coordinates, strict validation, immutable transformations |
| tracker | Multi-object tracking with trajectories | Persistent IDs, configurable history, BoT-SORT integration |
| render | Type-safe visualization layer | Alpha-blended overlays, embedded optimizations, graceful degradation |

All components operate in normalized coordinate space [0.0, 1.0] by default, ensuring resolution independence across varying camera inputs and inference resolutions.


✨ Key Features

🔹 Unified Architecture

  • Strict type & range validation ([0.0, 1.0] with safe() clamping factory)
  • Seamless YOLO integration (boxes, masks.xyn, obb.xyxyxyxyn)
  • Immutable geometric transformations (scale, translate, smooth)
  • Full type safety with IDE autocomplete and consistent API across all primitives

🔹 Embedded-Ready Performance

  • Zero-crash OpenCV integration with @validate_frame decorator
  • Global show=False toggle to bypass all rendering for headless/embedded deployments
  • Memory-efficient trajectory queues with O(1) average calculation
  • Alpha-blended compositing (cv2.addWeighted) and anti-aliased geometry (LINE_AA)
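The alpha-blended compositing mentioned above reduces to a per-pixel weighted sum. A minimal numpy-only sketch of the idea (equivalent in spirit to `cv2.addWeighted(frame, 1 - alpha, overlay, alpha, 0)`; the function name here is ours, not the library's):

```python
import numpy as np

def alpha_blend(frame: np.ndarray, overlay: np.ndarray, alpha: float) -> np.ndarray:
    """Composite overlay onto frame: out = (1 - alpha) * frame + alpha * overlay."""
    out = (1.0 - alpha) * frame.astype(np.float32) + alpha * overlay.astype(np.float32)
    # Clip and convert back to the 8-bit range OpenCV expects.
    return np.clip(out, 0, 255).astype(np.uint8)
```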

🔹 ADAS & Safety-Critical Design

  • Trajectory history management for zone crossing and collision detection
  • Temporal metadata (TimedPoint) for velocity/direction estimation
  • Configurable thresholds (conf, iou, max_length) for dynamic adaptation
  • Graceful degradation on invalid inputs — no exceptions, just safe fallbacks

🧩 Module Documentation

🔷 utils — Spatial Primitives

Validated geometric containers for YOLO detection outputs

Classes

Point — Validated 3D normalized coordinates

  • __init__(): Initializes with X, Y, Z. Validates float type and [0.0, 1.0] range.
  • X, Y, Z: Properties with strict type and range validation.
  • get_distance(): Euclidean distance to another point (includes Z).
  • scale(), translate(): Immutable transformations returning new instances.
  • safe(): Class method factory with coordinate clamping — no exceptions.
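A minimal sketch of the validate-or-clamp pattern these bullets describe. Names mirror the documented API, but the implementation is our illustration, not the library source:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class NormPoint:
    """Immutable normalized point: every coordinate must be a float in [0.0, 1.0]."""
    x: float
    y: float
    z: float = 0.0

    def __post_init__(self):
        for name in ("x", "y", "z"):
            v = getattr(self, name)
            if not isinstance(v, float) or not 0.0 <= v <= 1.0:
                raise ValueError(f"{name}={v!r} must be a float in [0.0, 1.0]")

    @classmethod
    def safe(cls, x, y, z=0.0):
        """Clamping factory: never raises, out-of-range inputs are clipped."""
        clamp = lambda v: min(1.0, max(0.0, float(v)))
        return cls(clamp(x), clamp(y), clamp(z))

    def get_distance(self, other):
        """Euclidean distance, including the Z component."""
        return math.dist((self.x, self.y, self.z), (other.x, other.y, other.z))

    def scale(self, factor):
        """Immutable transform: returns a new, clamped instance."""
        return NormPoint.safe(self.x * factor, self.y * factor, self.z * factor)

    def translate(self, dx, dy, dz=0.0):
        return NormPoint.safe(self.x + dx, self.y + dy, self.z + dz)
```

The design choice worth noting: the constructor raises on bad input (fail fast in development), while `safe()` clamps (never crash in production pipelines).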

Box — Axis-aligned 3D bounding box

  • __init__(): Six boundaries (left, right, top, bottom, front, back).
  • center, bottom_center: Computed properties for tracking.
  • width, height, depth: Dimension properties.
  • get_area(), get_volume(): Geometric calculations.
  • to_absolute_array(): Converts to pixel corners for OpenCV.
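Because boxes live in normalized space, drawing requires a conversion to pixel coordinates. A hedged sketch of what a `to_absolute_array`-style conversion involves (helper name and signature are ours):

```python
def to_absolute_corners(left, top, right, bottom, frame_w, frame_h):
    """Convert normalized box edges into integer pixel corners for OpenCV calls
    such as cv2.rectangle, which expect (x1, y1), (x2, y2) in pixels."""
    x1, y1 = int(round(left * frame_w)), int(round(top * frame_h))
    x2, y2 = int(round(right * frame_w)), int(round(bottom * frame_h))
    return (x1, y1), (x2, y2)
```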

Polygon — Segmentation mask container

  • from_yolo_xyn(): Static factory from YOLO masks.xyn.
  • smooth(): Vertex smoothing via moving average.
  • get_area(): Shoelace formula for normalized area.
  • __getitem__(): Supports slicing — returns new Polygon.
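The shoelace formula referenced by `get_area()` is a standard result; a standalone sketch for (x, y) vertices in normalized space:

```python
def shoelace_area(vertices):
    """Signed-area shoelace formula, returned as an absolute value.
    vertices: sequence of (x, y) pairs, in order around the polygon."""
    n = len(vertices)
    acc = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0
```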

Obb — Oriented (rotated) bounding box

  • from_yolo_obb(): Static factory from YOLO obb.xyxyxyxyn.
  • angle: Cached rotation angle (-90° to +90°).
  • width, height: Average edge lengths for rotated rectangles.
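One way an OBB rotation angle in the documented (-90°, +90°] range can be derived from its corner order. This is our illustration of the geometry, not the library's implementation:

```python
import math

def obb_angle_deg(corners):
    """Rotation angle of an oriented box from its first edge, folded into
    (-90, 90]: a rectangle's orientation is periodic with period 180 degrees."""
    (x1, y1), (x2, y2) = corners[0], corners[1]
    ang = math.degrees(math.atan2(y2 - y1, x2 - x1))
    while ang > 90.0:
        ang -= 180.0
    while ang <= -90.0:
        ang += 180.0
    return ang
```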

YOLOInstance — Unified detection container

  • Combines id, class_id, category, conf, box, polygon, obb.
  • Strict validation on all properties.
  • Designed for safe pipeline integration.

🔷 tracker — Multi-Object Tracking

Persistent IDs, trajectory management, ADAS integration

Classes

Tracker — Main tracking engine

  • __init__(): Configurable with YOLO model, device, thresholds, max_length.
  • track(): Main entry point — returns List[TrackedObject] with persistent IDs.
  • predict(), double_predict(): Detection modes (single / panoramic).
  • Properties: model, device, conf, iou, max_length — all validated.

TrackedObject — Single tracked entity

  • id: Tracking identifier (Optional[int]).
  • instance: Latest YOLOInstance detection data.
  • trajectory: TimedPointQueue with historical positions.

TimedPointQueue — Fixed-length trajectory history

  • enqueue(), dequeue(): FIFO with auto-eviction.
  • average_x/y/z: O(1) incremental centroid calculation.
  • get_values(): Deep-copy snapshot for safe external access.
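The O(1) average works by maintaining a running sum that is updated on every enqueue/eviction instead of re-summing the queue. A minimal sketch of that technique (class name and single-value payload are simplifications of ours):

```python
from collections import deque

class RollingQueue:
    """Fixed-length FIFO with an O(1) running average via an incremental sum."""

    def __init__(self, max_length: int):
        self._items = deque()
        self._max = max_length
        self._sum = 0.0

    def enqueue(self, value: float) -> None:
        self._items.append(value)
        self._sum += value
        if len(self._items) > self._max:
            # Auto-evict the oldest entry and subtract it from the running sum.
            self._sum -= self._items.popleft()

    @property
    def average(self) -> float:
        return self._sum / len(self._items) if self._items else 0.0
```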

TimedPoint — Time-stamped spatial point

  • Extends Point with timestamp: Optional[datetime].
  • to_point(): Discards temporal metadata for geometry-only ops.
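Time-stamped points are what make velocity estimation possible. A hedged sketch of the idea using plain (x, y, timestamp) tuples rather than the library's `TimedPoint` type:

```python
from datetime import datetime, timedelta

def estimate_velocity(p_old, p_new):
    """Velocity in normalized units per second between two (x, y, timestamp)
    samples; returns (0.0, 0.0) when the time delta is zero or negative."""
    (x1, y1, t1), (x2, y2, t2) = p_old, p_new
    dt = (t2 - t1).total_seconds()
    if dt <= 0:
        return (0.0, 0.0)
    return ((x2 - x1) / dt, (y2 - y1) / dt)
```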

🔷 render — Visualization Layer

Type-safe drawing operations for embedded systems

Classes

Drawer — Main rendering engine

  • draw_instances(): Renders multiple objects with alpha-blended overlays 🔥
  • draw_box(), draw_obb(), draw_polygon(), draw_trajectory(): Individual element rendering.
  • draw_text(): Absolute pixel coordinate text rendering.
  • @validate_frame decorator on all public methods — zero-crash guarantee.
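A minimal sketch of what a `@validate_frame`-style zero-crash guard can look like: reject anything that is not a non-empty 3-channel image and return the input unchanged instead of raising. This is our illustration of the pattern, not the library's decorator:

```python
import functools
import numpy as np

def validate_frame(method):
    """Guard a drawing method: on a missing or malformed frame, skip drawing
    and return the frame untouched — graceful degradation, never an exception."""
    @functools.wraps(method)
    def wrapper(self, frame, *args, **kwargs):
        if not isinstance(frame, np.ndarray) or frame.ndim != 3 or frame.size == 0:
            return frame
        return method(self, frame, *args, **kwargs)
    return wrapper

class DemoDrawer:
    @validate_frame
    def draw_box(self, frame, *corners):
        frame[0, 0] = (0, 255, 0)  # stand-in for a real OpenCV drawing call
        return frame
```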

Style — Centralized visualization config

  • palette: Unique RGB tuples with [0, 255] validation.
  • thickness, rounding, segment, smooth, alpha: All range-validated.
  • show: Global toggle — False bypasses all rendering for performance.

Font — OpenCV typography config

  • color: 3-element RGB tuple validation.
  • font: OpenCV font identifier [0, 7].
  • font_scale, thickness: Positive integer validation.

💡 Quick Start

from acmenra_cv.utils import Point, Box, YOLOInstance
from acmenra_cv.tracker import Tracker, TimedPointQueue
from acmenra_cv.render import Drawer, Style, Font
from ultralytics import YOLO

# 1. Initialize components
model = YOLO("yolov8n.pt")
style = Style(
    palette=[(255, 0, 0), (0, 255, 0), (0, 0, 255)],
    font=Font(color=(255, 255, 255)),
    show=True
)
tracker = Tracker(model=model, category=YourEnum, max_length=50)  # YourEnum: your application's category Enum
drawer = Drawer(style=style)

# 2. Process a frame
frame = ...  # Your BGR frame (numpy array)
tracked_objects = tracker.track(frame, enable_tracking=True)

# 3. Render results
output = drawer.draw_instances(
    frame=frame,
    tracked_objects=tracked_objects,
    is_box=True,
    is_trajectory=True
)

# 4. Use spatial data for business logic
for obj in tracked_objects:
    if obj.trajectory.count >= 5:
        speed = estimate_speed(obj.trajectory)  # Your logic
        if obj.instance.category == YourEnum.car and speed > threshold:
            trigger_alert(obj)

📋 Requirements

numpy>=1.21.0
opencv-python>=4.5.0
ultralytics>=8.0.0

Optional for development:

pytest>=7.0.0
black>=23.0.0
mypy>=1.0.0

🔐 License

© 2026 acmenra.studio. All rights reserved.

This software is proprietary and confidential. Unauthorized copying, distribution, or use is strictly prohibited.

For commercial licensing inquiries: contact@acmenra.studio


🌐 Links


acmenra.studio — Building reliable vision systems for the edge.
Every millisecond and frame buffer counts. 🚀

Download files

Download the file for your platform.

Source Distribution

acmenra_cv-0.1.4.tar.gz (46.2 kB)

Uploaded Source

Built Distribution


acmenra_cv-0.1.4-py3-none-any.whl (53.7 kB)

Uploaded Python 3

File details

Details for the file acmenra_cv-0.1.4.tar.gz.

File metadata

  • Download URL: acmenra_cv-0.1.4.tar.gz
  • Upload date:
  • Size: 46.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.13

File hashes

Hashes for acmenra_cv-0.1.4.tar.gz

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | c5448b4f684d3409759aba5e10cf8829d873c8205ecd727083e4084d06683226 |
| MD5 | 4f7105a5016bb58f20d154837257ab85 |
| BLAKE2b-256 | e06dbd8527ddeefd3c312cf6474493af9b1e0c1a1f4a5dc426704b3607df27f5 |


File details

Details for the file acmenra_cv-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: acmenra_cv-0.1.4-py3-none-any.whl
  • Upload date:
  • Size: 53.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.13

File hashes

Hashes for acmenra_cv-0.1.4-py3-none-any.whl

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | cd3f349373ab408a08d0e190c2e0b6e21fda804fe0ab423885b72021ba389515 |
| MD5 | 3c2a8f8daeedaa44a2aad90a42819df9 |
| BLAKE2b-256 | 9360987a47dc551924f8c7cfe1a504fa5585c1d956d5f8196ffb993b11381a58 |

