🚀 acmenra-cv
Production-ready computer vision utilities for ADAS, multi-object tracking, and embedded vision systems
pip install acmenra-cv
📦 Overview
acmenra-cv is a high-performance, type-safe computer vision library engineered for real-time applications on resource-constrained embedded systems (Raspberry Pi 5, Jetson, etc.). Built following Clean Architecture principles, it provides three cohesive modules:
| Module | Purpose | Key Features |
|---|---|---|
| `YOLOUtils` | Spatial primitives for YOLO outputs | Normalized coordinates, strict validation, immutable transformations |
| `tracker` | Multi-object tracking with trajectories | Persistent IDs, configurable history, BoT-SORT integration |
| `render` | Type-safe visualization layer | Alpha-blended overlays, embedded optimizations, graceful degradation |
All components operate in normalized coordinate space [0.0, 1.0] by default, ensuring resolution independence across varying camera inputs and inference resolutions.
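Since everything lives in normalized space, converting to and from pixel coordinates is a single multiply or divide. A minimal sketch of that convention (these helpers are illustrative, not part of the library API):

```python
def to_pixels(x_norm: float, y_norm: float, width: int, height: int) -> tuple:
    """Map normalized [0.0, 1.0] coordinates onto a concrete frame size."""
    return int(x_norm * width), int(y_norm * height)

def to_normalized(x_px: int, y_px: int, width: int, height: int) -> tuple:
    """Map pixel coordinates back to resolution-independent normalized space."""
    return x_px / width, y_px / height

# The same normalized point lands correctly on any resolution:
to_pixels(0.5, 0.25, 1920, 1080)   # center-x, quarter-y on Full HD
to_pixels(0.5, 0.25, 640, 480)     # same point on a VGA stream
```

Because detections stay normalized until render time, the tracker never needs to know the camera or inference resolution.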
✨ Key Features
🔹 Unified Architecture
- Strict type & range validation (`[0.0, 1.0]` with `safe()` clamping factory)
- Seamless YOLO integration (`boxes`, `masks.xyn`, `obb.xyxyxyxyn`)
- Immutable geometric transformations (`scale`, `translate`, `smooth`)
- Full type safety with IDE autocomplete and consistent API across all primitives
🔹 Embedded-Ready Performance
- Zero-crash OpenCV integration with the `@validate_frame` decorator
- Global `show=False` toggle to bypass all rendering for headless/embedded deployments
- Memory-efficient trajectory queues with O(1) average calculation
- Alpha-blended compositing (`cv2.addWeighted`) and anti-aliased geometry (`LINE_AA`)
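`cv2.addWeighted` computes a per-pixel weighted sum, `dst = src1 * alpha + src2 * beta + gamma`, clipped to the valid intensity range. A pure-Python sketch of that formula on flat intensity lists, for intuition only (the library delegates the real work to OpenCV):

```python
def add_weighted(src1, alpha, src2, beta, gamma=0.0):
    """Per-pixel equivalent of cv2.addWeighted on flat intensity lists:
    dst = clip(src1 * alpha + src2 * beta + gamma, 0, 255)."""
    return [
        max(0, min(255, round(p1 * alpha + p2 * beta + gamma)))
        for p1, p2 in zip(src1, src2)
    ]

# A 70/30 blend of a frame row with an overlay row:
frame   = [100, 200, 50]
overlay = [255, 0, 255]
blended = add_weighted(frame, 0.7, overlay, 0.3)
```

Keeping `alpha + beta = 1.0` preserves overall brightness, which is why semi-transparent overlays use complementary weights.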
🔹 ADAS & Safety-Critical Design
- Trajectory history management for zone crossing and collision detection
- Temporal metadata (`TimedPoint`) for velocity/direction estimation
- Configurable thresholds (`conf`, `iou`, `max_length`) for dynamic adaptation
- Graceful degradation on invalid inputs: no exceptions, just safe fallbacks
🧩 Module Documentation
🔷 utils — Spatial Primitives
Validated geometric containers for YOLO detection outputs
Classes
Point — Validated 3D normalized coordinates
- `__init__()`: Initializes with X, Y, Z. Validates `float` type and `[0.0, 1.0]` range.
- `X`, `Y`, `Z`: Properties with strict type and range validation.
- `get_distance()`: Euclidean distance to another point (includes Z).
- `scale()`, `translate()`: Immutable transformations returning new instances.
- `safe()`: Class-method factory with coordinate clamping; raises no exceptions.
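As a rough illustration of the validate-or-clamp split described above, here is a minimal stand-in (assumed behavior sketched from this documentation, not the actual `Point` implementation):

```python
class Point:
    """Minimal stand-in: normalized 3D point, strict by default."""
    def __init__(self, x: float, y: float, z: float = 0.0):
        for name, value in (("x", x), ("y", y), ("z", z)):
            if not isinstance(value, float):
                raise TypeError(f"{name} must be float, got {type(value).__name__}")
            if not 0.0 <= value <= 1.0:
                raise ValueError(f"{name}={value} outside [0.0, 1.0]")
        self.x, self.y, self.z = x, y, z

    @classmethod
    def safe(cls, x: float, y: float, z: float = 0.0) -> "Point":
        """Clamp out-of-range inputs instead of raising."""
        clamp = lambda v: min(1.0, max(0.0, float(v)))
        return cls(clamp(x), clamp(y), clamp(z))

p = Point.safe(1.3, -0.2)   # clamped to (1.0, 0.0, 0.0), no exception
```

The strict constructor catches pipeline bugs early; the `safe()` factory is for untrusted inputs (e.g. slightly out-of-range model outputs) where a crash is worse than a clamp.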
Box — Axis-aligned 3D bounding box
- `__init__()`: Six boundaries (`left`, `right`, `top`, `bottom`, `front`, `back`).
- `center`, `bottom_center`: Computed properties for tracking.
- `width`, `height`, `depth`: Dimension properties.
- `get_area()`, `get_volume()`: Geometric calculations.
- `to_absolute_array()`: Converts to pixel corners for OpenCV.
Polygon — Segmentation mask container
- `from_yolo_xyn()`: Static factory from YOLO `masks.xyn`.
- `smooth()`: Vertex smoothing via moving average.
- `get_area()`: Shoelace formula for normalized area.
- `__getitem__()`: Supports slicing; returns a new `Polygon`.
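The shoelace formula behind `get_area()` can be sketched on a plain list of normalized `(x, y)` vertices (`shoelace_area` is a hypothetical helper, not the library's code):

```python
def shoelace_area(vertices):
    """Polygon area via the shoelace formula:
    A = 0.5 * |sum(x_i * y_{i+1} - x_{i+1} * y_i)|."""
    n = len(vertices)
    acc = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]  # wrap around to close the polygon
        acc += x0 * y1 - x1 * y0
    return abs(acc) / 2.0

# A 0.5 x 0.5 axis-aligned square covers a quarter of the unit frame:
square = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]
```

In normalized space the result is a fraction of the frame area, so it stays comparable across resolutions.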
Obb — Oriented (rotated) bounding box
- `from_yolo_obb()`: Static factory from YOLO `obb.xyxyxyxyn`.
- `angle`: Cached rotation angle (−90° to +90°).
- `width`, `height`: Average edge lengths for rotated rectangles.
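For intuition, the rotation angle and averaged edge lengths of a rotated rectangle can be derived from its four corners. A sketch assuming the `xyxyxyxyn` layout is four `(x, y)` corners in sequence (`obb_angle_and_size` is an illustrative helper, not the library API):

```python
import math

def obb_angle_and_size(corners):
    """Given four rectangle corners in order, return (angle_deg, width, height).
    width/height are averaged over opposite edges; angle from the first edge."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    width = (math.hypot(x1 - x0, y1 - y0) + math.hypot(x2 - x3, y2 - y3)) / 2
    height = (math.hypot(x2 - x1, y2 - y1) + math.hypot(x3 - x0, y3 - y0)) / 2
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return angle, width, height

# An axis-aligned 0.4 x 0.2 rectangle: angle 0, width 0.4, height 0.2
corners = [(0.1, 0.1), (0.5, 0.1), (0.5, 0.3), (0.1, 0.3)]
```

Averaging opposite edges makes the dimensions robust to slightly non-rectangular corner predictions.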
YOLOInstance — Unified detection container
- Combines `id`, `class_id`, `category`, `conf`, `box`, `polygon`, `obb`.
- Strict validation on all properties.
- Designed for safe pipeline integration.
🔷 tracker — Multi-Object Tracking
Persistent IDs, trajectory management, ADAS integration
Classes
Tracker — Main tracking engine
- `__init__()`: Configurable with YOLO model, device, thresholds, `max_length`.
- `track()`: Main entry point; returns `List[TrackedObject]` with persistent IDs.
- `predict()`, `double_predict()`: Detection modes (single / panoramic).
- Properties: `model`, `device`, `conf`, `iou`, `max_length`, all validated.
TrackedObject — Single tracked entity
- `id`: Tracking identifier (`Optional[int]`).
- `instance`: Latest `YOLOInstance` detection data.
- `trajectory`: `TimedPointQueue` with historical positions.
TimedPointQueue — Fixed-length trajectory history
- `enqueue()`, `dequeue()`: FIFO with auto-eviction.
- `average_x/y/z`: O(1) incremental centroid calculation.
- `get_values()`: Deep-copy snapshot for safe external access.
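The O(1) average works by updating a running sum on every enqueue and eviction instead of re-scanning the queue. A minimal sketch of that technique (not the actual `TimedPointQueue` implementation, and tracking a single scalar instead of x/y/z):

```python
from collections import deque

class RunningAverageQueue:
    """Fixed-length FIFO that maintains its mean in O(1) per operation:
    the sum is adjusted incrementally rather than recomputed."""
    def __init__(self, max_length: int):
        self._items = deque()
        self._max_length = max_length
        self._sum = 0.0

    def enqueue(self, value: float) -> None:
        if len(self._items) == self._max_length:   # auto-evict oldest
            self._sum -= self._items.popleft()
        self._items.append(value)
        self._sum += value

    @property
    def average(self) -> float:
        return self._sum / len(self._items) if self._items else 0.0

q = RunningAverageQueue(max_length=3)
for v in (1.0, 2.0, 3.0, 4.0):   # 1.0 is evicted on the last enqueue
    q.enqueue(v)
# q.average is now (2 + 3 + 4) / 3 = 3.0
```

On an embedded target this keeps per-frame cost constant regardless of `max_length`.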
TimedPoint — Time-stamped spatial point
- Extends `Point` with `timestamp: Optional[datetime]`.
- `to_point()`: Discards temporal metadata for geometry-only ops.
🔷 render — Visualization Layer
Type-safe drawing operations for embedded systems
Classes
Drawer — Main rendering engine
- `draw_instances()`: Renders multiple objects with alpha-blended overlays 🔥
- `draw_box()`, `draw_obb()`, `draw_polygon()`, `draw_trajectory()`: Individual element rendering.
- `draw_text()`: Text rendering at absolute pixel coordinates.
- `@validate_frame` decorator on all public methods: zero-crash guarantee.
Style — Centralized visualization config
- `palette`: Unique RGB tuples with `[0, 255]` validation.
- `thickness`, `rounding`, `segment`, `smooth`, `alpha`: All range-validated.
- `show`: Global toggle; `False` bypasses all rendering for performance.
Font — OpenCV typography config
- `color`: 3-element RGB tuple validation.
- `font`: OpenCV font identifier `[0, 7]`.
- `font_scale`, `thickness`: Positive integer validation.
💡 Quick Start
```python
from acmenra_cv.model import Point, Box, Instance
from acmenra_cv.tracker import Tracker, TimedPointQueue
from acmenra_cv.render import Drawer, Style, Font
from ultralytics import YOLO

# 1. Initialize components
model = YOLO("yolov8n.pt")
style = Style(
    palette=[(255, 0, 0), (0, 255, 0), (0, 0, 255)],
    font=Font(color=(255, 255, 255)),
    show=True
)
tracker = Tracker(model=model, category=YourEnum, max_length=50)  # YourEnum: your own category Enum
drawer = Drawer(style=style)

# 2. Process a frame
frame = ...  # Your BGR frame (numpy array)
tracked_objects = tracker.track(frame, enable_tracking=True)

# 3. Render results
output = drawer.draw_instances(
    frame=frame,
    tracked_objects=tracked_objects,
    is_box=True,
    is_trajectory=True
)

# 4. Use spatial data for business logic
for obj in tracked_objects:
    if obj.trajectory.count >= 5:
        speed = estimate_speed(obj.trajectory)  # Your logic
        if obj.instance.category == YourEnum.car and speed > threshold:
            trigger_alert(obj)
```
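The trajectory history is also what makes zone-crossing checks, as mentioned under the ADAS design goals, straightforward. A hedged sketch using plain `(x, y)` tuples and an axis-aligned zone (`crossed_into_zone` is a hypothetical helper, not a library function):

```python
def crossed_into_zone(trajectory, zone):
    """Return True if any consecutive pair of samples moves from outside
    to inside an axis-aligned zone given as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = zone
    inside = lambda p: x_min <= p[0] <= x_max and y_min <= p[1] <= y_max
    return any(
        not inside(prev) and inside(curr)
        for prev, curr in zip(trajectory, trajectory[1:])
    )

# Normalized path that enters the zone on its last step:
path = [(0.10, 0.50), (0.30, 0.50), (0.55, 0.50)]
danger_zone = (0.50, 0.40, 0.90, 0.60)
```

Because trajectories are normalized, the same zone definition works across camera resolutions without recalculation.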
📋 Requirements
numpy>=1.21.0
opencv-python>=4.5.0
ultralytics>=8.0.0
Optional for development:
pytest>=7.0.0
black>=23.0.0
mypy>=1.0.0
🔐 License
© 2026 acmenra.studio. All rights reserved.
This software is proprietary and confidential. Unauthorized copying, distribution, or use is strictly prohibited.
For commercial licensing inquiries: contact@acmenra.studio
🌐 Links
- PyPI: https://pypi.org/project/acmenra-cv/
- Source: https://github.com/acmenra/acmenra-cv
- Documentation: https://github.com/acmenra/acmenra-cv#readme
- Issues: https://github.com/acmenra/acmenra-cv/issues
acmenra.studio — Building reliable vision systems for the edge.
Every millisecond and frame buffer counts. 🚀
File details
Details for the file acmenra_cv-0.1.5.5.tar.gz.
File metadata
- Download URL: acmenra_cv-0.1.5.5.tar.gz
- Upload date:
- Size: 47.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `34714e9a283c5f7a47740c5989563e16228e47c845f693b801e96ceacd27ddb0` |
| MD5 | `61d5d947618d656a520ef1ef456b0ed1` |
| BLAKE2b-256 | `6a28a8a2874d918c8e07e862f0f0e7d072a89b0a360ef2f31b233e26c4fc9d42` |
File details
Details for the file acmenra_cv-0.1.5.5-py3-none-any.whl.
File metadata
- Download URL: acmenra_cv-0.1.5.5-py3-none-any.whl
- Upload date:
- Size: 55.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `7e82b30c51e9a630ef5651ad18978f6371c07fc3722e68b1d861221be57a11ee` |
| MD5 | `96e4f750b9d5c8f92f4660b87ecd59f7` |
| BLAKE2b-256 | `a6fed0038922ca8e58a4d90515a6aff254c3f7b7e5d41df91a74e747d4724630` |