SAHI: Slicing Aided Hyper Inference
A lightweight vision library for performing large scale object detection & instance segmentation
Overview
Object detection and instance segmentation are among the most important application areas in computer vision. However, detecting small objects and running inference on large images remain major challenges in practical usage. SAHI helps developers overcome these real-world problems with a collection of vision utilities.
Command | Description |
---|---|
predict | perform sliced/standard prediction using any YOLOv5/MMDetection model |
predict-fiftyone | perform sliced/standard prediction using any YOLOv5/MMDetection model and explore the results in the FiftyOne app |
coco slice | automatically slice COCO annotation and image files |
coco fiftyone | explore multiple prediction results on your COCO dataset with the FiftyOne UI, ordered by number of misdetections |
coco evaluate | evaluate classwise COCO AP and AR for given predictions and ground truth |
coco analyse | calculate and export many detection and segmentation error margin plots |
coco yolov5 | automatically convert any COCO dataset to YOLOv5 format |
Getting Started
Blogpost
Check the official SAHI blog post.
Installation
- Install sahi using pip:

pip install sahi

- On Windows, Shapely needs to be installed via Conda:

conda install -c conda-forge shapely

- Install your desired version of PyTorch and torchvision:

pip install torch torchvision

- Install your desired detection framework (such as mmdet or yolov5):

pip install mmdet mmcv-full
pip install yolov5
Usage
From Python:
- Sliced inference (constructing the detection_model argument is sketched right after this snippet):

from sahi import get_sliced_prediction

# perform sliced inference with 256x256 slices and 20% overlap between adjacent slices
result = get_sliced_prediction(
    image,
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
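The snippet above assumes an already-initialized detection_model. As a minimal sketch, assuming the Yolov5DetectionModel wrapper and its model_path/confidence_threshold/device parameters (the exact wrapper API may differ across sahi versions), it could be constructed like this:

from sahi.model import Yolov5DetectionModel

# assumed wrapper and parameters; check your sahi version for the exact API
detection_model = Yolov5DetectionModel(
    model_path="path/to/yolov5/weights.pt",  # trained YOLOv5 weights
    confidence_threshold=0.25,               # discard low-score predictions
    device="cuda:0",                         # or "cpu"
)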
Check the MMDetection + SAHI demo.
- Slice an image:
from sahi.slicing import slice_image
slice_image_result = slice_image(
    image=image_path,
    output_file_name=output_file_name,
    output_dir=output_dir,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
- Slice a COCO formatted dataset:
from sahi.slicing import slice_coco
coco_dict, coco_path = slice_coco(
    coco_annotation_file_path=coco_annotation_file_path,
    image_dir=image_dir,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
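slice_coco returns the sliced dataset as a dictionary along with its export path. If you want to write the dictionary out yourself, the save_json helper used later in this document works directly on it; a minimal sketch:

from sahi.utils.file import save_json

# export the sliced COCO annotation dict as a json file
save_json(coco_dict, "sliced_coco.json")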
Refer to the slicing notebook for detailed usage.
From CLI:
sahi predict --source image/file/or/folder --model_path path/to/model --model_config_path path/to/config
will perform sliced inference with default parameters and export the prediction visuals to the runs/predict/exp folder.
You can specify sliced inference parameters as:
sahi predict --slice_width 256 --slice_height 256 --overlap_height_ratio 0.1 --overlap_width_ratio 0.1 --model_confidence_threshold 0.25 --source image/file/or/folder --model_path path/to/model --model_config_path path/to/config
- Specify postprocess type as --postprocess_type GREEDYNMM or --postprocess_type NMS to be applied over sliced predictions.
- Specify postprocess match metric as --postprocess_match_metric IOS for intersection over smaller area or --postprocess_match_metric IOU for intersection over union.
- Specify postprocess match threshold as --postprocess_match_threshold 0.5.
- Add the --class_agnostic argument to ignore the category ids of the predictions during postprocess (merging/NMS).
- If you want to export prediction pickles and cropped predictions, add the --export_pickle and --export_crop arguments. If you want to change the crop extension type, set it as --visual_export_format JPG.
- If you want to export prediction visuals, add the --export_visual argument.
- By default, the scripts apply both standard and sliced prediction (multi-stage inference). If you don't want to perform sliced prediction, add the --no_sliced_prediction argument. If you don't want to perform standard prediction, add the --no_standard_prediction argument.
- If you want to perform prediction using a COCO annotation file, provide the COCO json path as --dataset_json_path dataset.json and the COCO image folder as --source path/to/coco/image/folder; predictions will be exported as a COCO json file to runs/predict/exp/results.json. Then you can use the coco evaluate command to calculate COCO evaluation results or the coco analyse command to calculate detailed COCO error plots. (A combined invocation using several of these flags is shown below.)
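For example, a hypothetical invocation combining several of the flags described above (all paths are placeholders):

sahi predict --source path/to/images --model_path path/to/model --model_config_path path/to/config --slice_height 512 --slice_width 512 --postprocess_type NMS --postprocess_match_metric IOU --postprocess_match_threshold 0.5 --export_pickle --export_visual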
Find detailed info on CLI command usage (coco fiftyone, coco yolov5, coco evaluate, coco analyse) at CLI.md.
FiftyOne Utilities
Explore COCO dataset via FiftyOne app:
For the supported version: pip install "fiftyone>=0.14.2,<0.15.0"
from sahi.utils.fiftyone import launch_fiftyone_app
# launch fiftyone app:
session = launch_fiftyone_app(coco_image_dir, coco_json_path)
# close fiftyone app:
session.close()
Convert predictions to FiftyOne detection:
from sahi import get_sliced_prediction
# perform sliced prediction
result = get_sliced_prediction(
    image,
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
# convert detections into fiftyone detection format
fiftyone_detections = result.to_fiftyone_detections()
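to_fiftyone_detections returns a list of FiftyOne Detection objects. As a minimal sketch (assuming you already have a FiftyOne sample to attach them to), they can be stored on a sample like this:

import fiftyone as fo

# attach the converted detections to an existing sample and persist the change
sample["predictions"] = fo.Detections(detections=fiftyone_detections)
sample.save()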
Explore detection results in the FiftyOne UI:

sahi coco fiftyone --image_dir dir/to/images --dataset_json_path dataset.json cocoresult1.json cocoresult2.json

will open a FiftyOne app that visualizes the given dataset and the two detection results.

Specify the IOU threshold for FP/TP evaluation with the --iou_threshold 0.5 argument.
COCO Utilities
COCO dataset creation:
- import required classes:
from sahi.utils.coco import Coco, CocoCategory, CocoImage, CocoAnnotation
- init Coco object:
coco = Coco()
- add categories starting from id 0:
coco.add_category(CocoCategory(id=0, name='human'))
coco.add_category(CocoCategory(id=1, name='vehicle'))
- create a coco image:
coco_image = CocoImage(file_name="image1.jpg", height=1080, width=1920)
- add annotations to coco image:
coco_image.add_annotation(
    CocoAnnotation(
        bbox=[x_min, y_min, width, height],
        category_id=0,
        category_name='human'
    )
)
coco_image.add_annotation(
    CocoAnnotation(
        bbox=[x_min, y_min, width, height],
        category_id=1,
        category_name='vehicle'
    )
)
- add coco image to Coco object:
coco.add_image(coco_image)
- after adding all images, convert coco object to coco json:
coco_json = coco.json
- you can export it as a json file (a complete end-to-end sketch follows this list):
from sahi.utils.file import save_json
save_json(coco_json, "coco_dataset.json")
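Putting the steps above together, a complete sketch (with placeholder bbox values) looks like this:

from sahi.utils.coco import Coco, CocoCategory, CocoImage, CocoAnnotation
from sahi.utils.file import save_json

# build the dataset object and its categories
coco = Coco()
coco.add_category(CocoCategory(id=0, name='human'))
coco.add_category(CocoCategory(id=1, name='vehicle'))

# register an image and its annotations (bbox values are placeholders)
coco_image = CocoImage(file_name="image1.jpg", height=1080, width=1920)
coco_image.add_annotation(
    CocoAnnotation(bbox=[100, 100, 50, 120], category_id=0, category_name='human')
)
coco_image.add_annotation(
    CocoAnnotation(bbox=[300, 200, 200, 100], category_id=1, category_name='vehicle')
)
coco.add_image(coco_image)

# export as a COCO formatted json file
save_json(coco.json, "coco_dataset.json")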
Convert COCO dataset to ultralytics/yolov5 format:
from sahi.utils.coco import Coco
# init Coco object
coco = Coco.from_coco_dict_or_path("coco.json", image_dir="coco_images/")
# export the converted YOLOv5 formatted dataset into the given output_dir with an 85% train / 15% val split
coco.export_as_yolov5(
    output_dir="output/folder/dir",
    train_split_rate=0.85
)
Get dataset stats:
from sahi.utils.coco import Coco
# init Coco object
coco = Coco.from_coco_dict_or_path("coco.json")
# get dataset stats
coco.stats
{
'num_images': 6471,
'num_annotations': 343204,
'num_categories': 2,
'num_negative_images': 0,
'num_images_per_category': {'human': 5684, 'vehicle': 6323},
'num_annotations_per_category': {'human': 106396, 'vehicle': 236808},
'min_num_annotations_in_image': 1,
'max_num_annotations_in_image': 902,
'avg_num_annotations_in_image': 53.037243084530985,
'min_annotation_area': 3,
'max_annotation_area': 328640,
'avg_annotation_area': 2448.405738278109,
'min_annotation_area_per_category': {'human': 3, 'vehicle': 3},
'max_annotation_area_per_category': {'human': 72670, 'vehicle': 328640},
}
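Since coco.stats is a plain dictionary (as shown above), it can be exported like any other; for example, with the save_json helper shown earlier:

from sahi.utils.file import save_json

# persist the computed dataset statistics next to the dataset
save_json(coco.stats, "coco_stats.json")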
Find detailed info on COCO utilities (yolov5 conversion, slicing, subsampling, filtering, merging, splitting) at COCO.md.
MOT Challenge Utilities
MOT Challenge formatted ground truth dataset creation:
- import required classes:
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
- init video:
mot_video = MotVideo(name="sequence_name")
- init first frame:
mot_frame = MotFrame()
- add annotations to frame:
mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height])
)
mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height])
)
- add frame to video:
mot_video.add_frame(mot_frame)
- export in MOT challenge format:
mot_video.export(export_dir="mot_gt", type="gt")
- your MOT challenge formatted ground truth files are now ready under the mot_gt/sequence_name/ folder (a complete end-to-end sketch follows this list).
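Combining the steps above, a complete sketch (with placeholder bbox values) looks like this:

from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo

# create the video container and a single frame
mot_video = MotVideo(name="sequence_name")
mot_frame = MotFrame()

# add two annotations; boxes are placeholders in [x_min, y_min, width, height] format
mot_frame.add_annotation(MotAnnotation(bbox=[10, 20, 30, 40]))
mot_frame.add_annotation(MotAnnotation(bbox=[50, 60, 70, 80]))

# attach the frame and export in MOT challenge ground truth format
mot_video.add_frame(mot_frame)
mot_video.export(export_dir="mot_gt", type="gt")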
Find detailed info on MOT utilities (ground truth dataset creation, exporting tracker metrics in mot challenge format) at MOT.md.
Citation
If you use this package in your work, please cite it as:
@software{akyon2021sahi,
author = {Akyon, Fatih Cagatay and Cengiz, Cemil and Altinuc, Sinan Onur and Cavusoglu, Devrim and Sahin, Kadir and Eryuksel, Ogulcan},
title = {{SAHI: A lightweight vision library for performing large scale object detection and instance segmentation}},
month = nov,
year = 2021,
publisher = {Zenodo},
doi = {10.5281/zenodo.5718950},
url = {https://doi.org/10.5281/zenodo.5718950}
}
Contributing
The sahi library currently supports all YOLOv5 models and MMDetection models, and it is easy to add new frameworks: all you need to do is create a new class in model.py that implements the DetectionModel class. You can take the MMDetection wrapper or the YOLOv5 wrapper as a reference (a hypothetical skeleton follows).
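As a rough, hypothetical skeleton (the method names below are illustrative; check the DetectionModel base class in model.py for the exact interface your sahi version expects):

from sahi.model import DetectionModel

class MyFrameworkDetectionModel(DetectionModel):
    # illustrative skeleton; the real abstract interface may differ
    def load_model(self):
        # load your framework's model from self.model_path onto self.device
        ...

    def perform_inference(self, image):
        # run your framework's forward pass on the image and keep the raw
        # output so it can later be converted into sahi prediction objects
        ...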
Before opening a PR:
- Install required development packages:
pip install -U -e .[dev]
- Reformat with black and isort:
black . --config pyproject.toml
isort .
Contributors