
AS-One v2 : A Modular Library for YOLO Object Detection, Segmentation, Tracking & Pose

👋 Hello

UPDATE: ASOne v2 is now out! We've added YOLOv9 and SAM support.

AS-One is a Python wrapper that brings multiple detection and tracking algorithms together in one place. Trackers such as ByteTrack, DeepSORT, or NorFair can be paired with different versions of YOLO in a few lines of code. The wrapper provides YOLO models in ONNX, PyTorch, and CoreML flavors, and we plan to support future versions of YOLO as they are released.

This is one library for most of your computer vision needs.

If you would like to dive deeper into YOLO object detection and tracking, check out our courses and projects.

Watch the step-by-step tutorial 🤝

💻 Install

👉 Install via pip

pip install asone
👉 Install from Source

💾 Clone the Repository

Navigate to an empty folder of your choice.

git clone https://github.com/augmentedstartups/AS-One.git

Change directory into AS-One:

cd AS-One

👉 For Linux
python3 -m venv .env
source .env/bin/activate

pip install -r requirements.txt

# for CPU
pip install torch torchvision
# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
👉 For Windows 10/11
python -m venv .env
.env\Scripts\activate
pip install numpy Cython
pip install lap
pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox

pip install asone onnxruntime-gpu==1.12.1
pip install typing_extensions==4.7.1
pip install super-gradients==3.1.3
# for CPU
pip install torch torchvision

# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
or
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
👉 For macOS
python3 -m venv .env
source .env/bin/activate


pip install -r requirements.txt

# for CPU
pip install torch torchvision
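
Whichever platform you are on, you can verify the PyTorch install before moving on. A quick check (a CPU-only build will simply report False):

# Verify that PyTorch is installed and whether CUDA is visible
import torch

print('torch version:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())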

Quick Start 🏃‍♂️

Use a tracker on the sample video.

import asone
from asone import ASOne

model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True) # Set use_cuda to False for cpu
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])

for model_output in tracks:
    annotations = ASOne.draw(model_output, display=False) # Draw bounding boxes on each frame
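
To keep the annotated output instead of discarding it, the sketch below writes each drawn frame to an MP4 with OpenCV. It assumes ASOne.draw returns the annotated frame as a NumPy BGR image and hardcodes 30 FPS; check both against your version before relying on it.

# Sketch: save annotated frames to a video file
# Assumes ASOne.draw returns the annotated frame as a NumPy BGR image
import cv2
import asone
from asone import ASOne

model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])

writer = None
for model_output in tracks:
    frame = ASOne.draw(model_output, display=False)
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 30, (w, h))  # 30 FPS assumed
    writer.write(frame)

if writer is not None:
    writer.release()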

Run in Google Colab 💻


Sample Code Snippets 📃

6.1 👉 Object Detection
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, use_cuda=True) # Set use_cuda to False for cpu
vid = model.read_video('data/sample_videos/test.mp4')

for img in vid:
    detection = model.detecter(img)
    annotations = ASOne.draw(detection, img=img, display=True)

Run asone/demo_detector.py to test the detector.

# run on gpu
python -m asone.demo_detector data/sample_videos/test.mp4

# run on cpu
python -m asone.demo_detector data/sample_videos/test.mp4 --cpu
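
The detecter and draw calls shown above are not tied to video files. Here is a minimal webcam sketch, assuming detecter accepts a plain BGR NumPy frame just like the frames yielded by read_video:

# Sketch: webcam object detection using the detecter/draw API from the snippet above
import cv2
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, use_cuda=True) # Set use_cuda to False for cpu
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    detection = model.detecter(frame)
    ASOne.draw(detection, img=frame, display=True)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()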
6.1.1 👉 Use Custom Trained Weights for Detector

Use custom weights for a detector model trained on your own data by simply providing the path to the weights file.

import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, weights='data/custom_weights/yolov7_custom.pt', use_cuda=True) # Set use_cuda to False for cpu
vid = model.read_video('data/sample_videos/license_video.mp4')

for img in vid:
    detection = model.detecter(img)
    annotations = ASOne.draw(detection, img=img, display=True, class_names=['license_plate'])
6.1.2 👉 Changing Detector Models

Change the detector by simply changing the detector flag. The flags are provided in the benchmark tables.

  • Our library now supports YOLOv5, YOLOv7, and YOLOv8 on macOS.
# Change detector
model = ASOne(detector=asone.YOLOX_S_PYTORCH, use_cuda=True)

# For macOS
# YOLOv5
model = ASOne(detector=asone.YOLOV5X_MLMODEL)
# YOLOv7
model = ASOne(detector=asone.YOLOV7_MLMODEL)
# YOLOv8
model = ASOne(detector=asone.YOLOV8L_MLMODEL)
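
Because each detector is just a flag, comparing them on your own footage takes only a few lines. A rough timing sketch using the read_video/detecter calls shown above (flags and video path reused from this README):

# Sketch: rough end-to-end FPS comparison between two detector flags
import time
import asone
from asone import ASOne

for flag in (asone.YOLOV9_C, asone.YOLOX_S_PYTORCH):
    model = ASOne(detector=flag, use_cuda=True) # Set use_cuda to False for cpu
    vid = model.read_video('data/sample_videos/test.mp4')
    start, frames = time.time(), 0
    for img in vid:
        model.detecter(img)
        frames += 1
    print(flag, 'average FPS:', frames / (time.time() - start))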
6.2 👉 Object Tracking

Use a tracker on the sample video.

import asone
from asone import ASOne

# Instantiate ASOne object
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])

# Loop over track to retrieve outputs of each frame
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)
    # Do anything with bboxes here

[Note] You can use custom weights for the detector model by simply providing the path to the weights file in the ASOne class, as sketched below.
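
Putting that note into code, the weights argument from section 6.1.1 combines directly with a tracker (paths and class names reused from the examples above):

# Custom detector weights combined with a tracker
import asone
from asone import ASOne

model = ASOne(tracker=asone.BYTETRACK,
              detector=asone.YOLOV9_C,
              weights='data/custom_weights/yolov7_custom.pt',
              use_cuda=True) # Set use_cuda to False for cpu
tracks = model.video_tracker('data/sample_videos/license_video.mp4')

for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True, class_names=['license_plate'])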

6.2.1 👉 Changing Detector and Tracking Models

Change the tracker by simply changing the tracker flag. The flags are provided in the benchmark tables.

model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
# Change tracker
model = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOV9_C, use_cuda=True)
# Change Detector
model = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOX_S_PYTORCH, use_cuda=True)

Run asone/demo_tracker.py to test the tracker.

# run on gpu
python -m asone.demo_tracker data/sample_videos/test.mp4

# run on cpu
python -m asone.demo_tracker data/sample_videos/test.mp4 --cpu
6.3 👉 Segmentation
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, segmentor=asone.SAM, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_detecter('data/sample_videos/test.mp4', filter_classes=['car'])

for model_output in tracks:
    annotations = ASOne.draw_masks(model_output, display=True) # Draw masks
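
To keep the segmented frames rather than only displaying them, here is a small sketch that writes every 30th frame to disk, assuming draw_masks mirrors ASOne.draw and returns the rendered frame as a NumPy image:

# Sketch: save every 30th mask-annotated frame
# Assumes draw_masks returns the rendered frame as a NumPy image
import cv2
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, segmentor=asone.SAM, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_detecter('data/sample_videos/test.mp4', filter_classes=['car'])

for i, model_output in enumerate(tracks):
    frame = ASOne.draw_masks(model_output, display=False)
    if i % 30 == 0:
        cv2.imwrite(f'masks_{i:05d}.jpg', frame)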
6.4 👉 Text Detection

Sample code to detect text on an image

# Detect and recognize text
import asone
from asone import ASOne, utils
import cv2

model = ASOne(detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) # Set use_cuda to False for cpu
img = cv2.imread('data/sample_imgs/sample_text.jpeg')
results = model.detect_text(img)
annotations = utils.draw_text(img, results, display=True)
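
Since detect_text takes any image you load yourself, batching over a folder is straightforward. A sketch, assuming utils.draw_text mirrors the other draw helpers and returns the annotated image (the glob pattern and output naming are illustrative):

# Sketch: run text detection and recognition over every JPEG in a folder
import glob
import cv2
import asone
from asone import ASOne, utils

model = ASOne(detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) # Set use_cuda to False for cpu

for path in glob.glob('data/sample_imgs/*.jpeg'):
    img = cv2.imread(path)
    results = model.detect_text(img)
    annotated = utils.draw_text(img, results, display=False)
    cv2.imwrite(path.replace('.jpeg', '_ocr.jpeg'), annotated)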

Use Tracker on Text

import asone
from asone import ASOne

# Instantiate ASOne object
model = ASOne(tracker=asone.DEEPSORT, detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_tracker('data/sample_videos/GTA_5-Unique_License_Plate.mp4')

# Loop over track to retrieve outputs of each frame
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)

    # Do anything with bboxes here

Run asone/demo_ocr.py to test OCR.

# run on gpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4

# run on cpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4 --cpu
6.5 👉 Pose Estimation

Sample code to estimate pose on an image

# Pose Estimation
import asone
from asone import PoseEstimator, utils
import cv2

model = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True) #set use_cuda=False to use cpu
img = cv2.imread('data/sample_imgs/test2.jpg')
kpts = model.estimate_image(img)
annotations = utils.draw_kpts(kpts, image=img, display=True)
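
estimate_image takes a plain BGR NumPy frame, so the same two calls also work on a live stream. A webcam sketch, assuming draw_kpts can run with display=True on each frame as in the image example:

# Sketch: live pose estimation from a webcam using estimate_image/draw_kpts
import cv2
import asone
from asone import PoseEstimator, utils

model = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True) #set use_cuda=False to use cpu
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    kpts = model.estimate_image(frame)
    utils.draw_kpts(kpts, image=frame, display=True)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()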
  • You can now use YOLOv8 and YOLOv7-w6 for pose estimation. The flags are provided in the benchmark tables.
# Pose Estimation on video
import asone
from asone import PoseEstimator, utils

model = PoseEstimator(estimator_flag=asone.YOLOV7_W6_POSE, use_cuda=True) #set use_cuda=False to use cpu
estimator = model.video_estimator('data/sample_videos/football1.mp4')
for model_output in estimator:
    annotations = utils.draw_kpts(model_output)
    # Do anything with kpts here

Run asone/demo_pose_estimator.py to test pose estimation.

# run on gpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4

# run on cpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4 --cpu

To set up ASOne using Docker, follow the instructions given in the Docker setup 🐳.

ToDo 📝

  • First Release
  • Import trained models
  • Simplify code even further
  • Updated for YOLOv8
  • OCR and Counting
  • OCSORT, StrongSORT, MoTPy
  • M1/2 Apple Silicon Compatibility
  • Pose Estimation YOLOv7/v8
  • YOLO-NAS
  • Updated for YOLOv8.1
  • YOLOV9
  • SAM Integration
Offered By 💼: AugmentedStartups
Maintained By 👨‍💻: AxcelerateAI

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release.

Built Distributions

asone-2.0.0-py3-none-any.whl (761.7 kB)

Uploaded: Python 3

asone-2.0.0-1-py3-none-any.whl (762.2 kB)

Uploaded: Python 3

File details

Details for the file asone-2.0.0-py3-none-any.whl.

File metadata

  • Download URL: asone-2.0.0-py3-none-any.whl
  • Upload date:
  • Size: 761.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.4.2 requests/2.31.0 setuptools/45.2.0 requests-toolbelt/1.0.0 tqdm/4.66.2 CPython/3.8.10

File hashes

Hashes for asone-2.0.0-py3-none-any.whl

  • SHA256: b77438460b554429ccd184d64db5ba8665c7591faa29a7ec77fb7a88638631bf
  • MD5: d07ebc527a037a01c8b6460094a9f230
  • BLAKE2b-256: 1492d47d45108bef6b8c0c515fd2cbc534638f093ca31c0915fb9c1f17525a59


File details

Details for the file asone-2.0.0-1-py3-none-any.whl.

File metadata

  • Download URL: asone-2.0.0-1-py3-none-any.whl
  • Upload date:
  • Size: 762.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.4.2 requests/2.31.0 setuptools/45.2.0 requests-toolbelt/1.0.0 tqdm/4.66.2 CPython/3.8.10

File hashes

Hashes for asone-2.0.0-1-py3-none-any.whl

  • SHA256: b5bd8c9a9f1449e52babb64c68bc6bbc1f4c3030eb6c03e9963eff1645bc7a34
  • MD5: f16e496f36d69a49e4048d5b0c7bac65
  • BLAKE2b-256: 386cb406fb83a9378589b8d935fea1b2f735246487adef1d840375a5a5b5979a

