SAHI: Slicing Aided Hyper Inference
A vision library for performing sliced inference on large images/small objects.
Overview
Getting Started
Blogpost
Check the official SAHI blog post.
Installation
- Install `sahi` using pip:
```
pip install sahi
```
- On Windows, `Shapely` needs to be installed via Conda:
```
conda install -c conda-forge shapely
```
- Install your desired version of PyTorch and torchvision:
```
pip install torch torchvision
```
- Install your desired detection framework (such as mmdet or yolov5):
```
pip install mmdet mmcv
pip install yolov5
```
Usage
From Python:
- Sliced inference:
```python
from sahi import get_sliced_prediction

result = get_sliced_prediction(
    image,
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
```
Check the MMDetection + SAHI demo:
- Slice an image:
```python
from sahi.slicing import slice_image

slice_image_result = slice_image(
    image=image_path,
    output_file_name=output_file_name,
    output_dir=output_dir,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
```
- Slice a coco formatted dataset:
```python
from sahi.slicing import slice_coco

coco_dict, coco_path = slice_coco(
    coco_annotation_file_path=coco_annotation_file_path,
    image_dir=image_dir,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
```
Refer to the slicing notebook for detailed usage.
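The slice grid implied by `slice_height`/`slice_width` and the overlap ratios can be sketched in plain Python. This is a simplified illustration of overlapped-window slicing (the function name and details are my own), not SAHI's actual implementation:

```python
def compute_slice_boxes(image_height, image_width,
                        slice_height=256, slice_width=256,
                        overlap_height_ratio=0.2, overlap_width_ratio=0.2):
    """Return [x_min, y_min, x_max, y_max] windows covering the image,
    with neighboring windows overlapping by the given ratios."""
    y_step = int(slice_height * (1 - overlap_height_ratio))
    x_step = int(slice_width * (1 - overlap_width_ratio))
    boxes = []
    y = 0
    while y < image_height:
        y_max = min(y + slice_height, image_height)
        x = 0
        while x < image_width:
            x_max = min(x + slice_width, image_width)
            boxes.append([x, y, x_max, y_max])
            if x_max == image_width:
                break
            x += x_step
        if y_max == image_height:
            break
        y += y_step
    return boxes

boxes = compute_slice_boxes(512, 512)
```

With a 512x512 image, 256-pixel slices, and 0.2 overlap, this produces a 3x3 grid of nine overlapping windows.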
From CLI:
```
python scripts/predict.py --source image/file/or/folder --model_path path/to/model --config_path path/to/config
```
will perform sliced inference with default parameters and export the prediction visuals to the runs/predict/exp folder.
You can specify sliced inference parameters as:
```
python scripts/predict.py --slice_width 256 --slice_height 256 --overlap_height_ratio 0.1 --overlap_width_ratio 0.1 --iou_thresh 0.25 --source image/file/or/folder --model_path path/to/model --config_path path/to/config
```
- Specify the postprocess type as `--postprocess_type UNIONMERGE` or `--postprocess_type NMS` to be applied over sliced predictions.
- Specify the postprocess match metric as `--match_metric IOS` for intersection over smaller area or `--match_metric IOU` for intersection over union.
- Specify the postprocess match threshold as `--match_thresh 0.5`.
- Add the `--class_agnostic` argument to ignore category ids of the predictions during postprocess (merging/nms).
- If you want to export prediction pickles and cropped predictions, add the `--pickle` and `--crop` arguments. If you want to change the crop extension type, set it as `--visual_export_format JPG`.
- If you don't want to export prediction visuals, add the `--novisual` argument.
- If you want to perform standard prediction instead of sliced prediction, add the `--standard_pred` argument.
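The two match metrics differ only in the denominator. A minimal sketch of IOU versus IOS for axis-aligned [x_min, y_min, x_max, y_max] boxes (an illustration of the metrics themselves, not sahi's postprocess code):

```python
def intersection_area(box1, box2):
    # boxes are [x_min, y_min, x_max, y_max]
    w = min(box1[2], box2[2]) - max(box1[0], box2[0])
    h = min(box1[3], box2[3]) - max(box1[1], box2[1])
    return max(w, 0) * max(h, 0)

def box_area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def iou(box1, box2):
    # intersection over union
    inter = intersection_area(box1, box2)
    return inter / (box_area(box1) + box_area(box2) - inter)

def ios(box1, box2):
    # intersection over smaller area
    inter = intersection_area(box1, box2)
    return inter / min(box_area(box1), box_area(box2))

a, b = [0, 0, 100, 100], [50, 50, 100, 100]
# b lies fully inside a: IOS is 1.0 while IOU is only 0.25
```

IOS is the more forgiving choice when a partial detection from one slice is fully contained in a larger detection from a neighboring slice.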
```
python scripts/predict.py --coco_file path/to/coco/file --source coco/images/directory --model_path path/to/model --config_path path/to/config
```
will perform inference using the provided COCO file, then export the results as a COCO-formatted json file to runs/predict/exp/results.json.
Find detailed info on script usage (predict, coco2yolov5, coco_error_analysis) at SCRIPTS.md.
FiftyOne Utilities
Explore a COCO dataset via the FiftyOne app. For the supported version:
```
pip install "fiftyone>=0.11.1"
```
```python
from sahi.utils.fiftyone import launch_fiftyone_app

# launch fiftyone app:
session = launch_fiftyone_app(coco_image_dir, coco_json_path)

# close fiftyone app:
session.close()
```
Convert predictions to FiftyOne detections:
```python
from sahi import get_sliced_prediction

# perform sliced prediction
result = get_sliced_prediction(
    image,
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# convert detections into fiftyone detection format
fiftyone_detections = result.to_fiftyone_detections()
```
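FiftyOne represents bounding boxes as [x, y, width, height] in relative (0-1) coordinates, so a conversion like the one above essentially normalizes each absolute COCO-style bbox by the image size. A hand-rolled sketch (the helper name is hypothetical):

```python
def coco_bbox_to_fiftyone(bbox, image_width, image_height):
    """Convert an absolute [x_min, y_min, width, height] COCO bbox
    to FiftyOne's relative [x, y, width, height] format."""
    x_min, y_min, w, h = bbox
    return [x_min / image_width, y_min / image_height,
            w / image_width, h / image_height]

rel = coco_bbox_to_fiftyone([96, 54, 192, 108], image_width=1920, image_height=1080)
```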
COCO Utilities
COCO dataset creation:
- import required classes:
```python
from sahi.utils.coco import Coco, CocoCategory, CocoImage, CocoAnnotation
```
- init a Coco object:
```python
coco = Coco()
```
- add categories starting from id 0:
```python
coco.add_category(CocoCategory(id=0, name='human'))
coco.add_category(CocoCategory(id=1, name='vehicle'))
```
- create a coco image:
```python
coco_image = CocoImage(file_name="image1.jpg", height=1080, width=1920)
```
- add annotations to the coco image:
```python
coco_image.add_annotation(
    CocoAnnotation(
        bbox=[x_min, y_min, width, height],
        category_id=0,
        category_name='human'
    )
)
coco_image.add_annotation(
    CocoAnnotation(
        bbox=[x_min, y_min, width, height],
        category_id=1,
        category_name='vehicle'
    )
)
```
- add the coco image to the Coco object:
```python
coco.add_image(coco_image)
```
- after adding all images, convert the coco object to coco json:
```python
coco_json = coco.json
```
- you can export it as a json file:
```python
from sahi.utils.file import save_json

save_json(coco_json, "coco_dataset.json")
```
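For reference, the coco_json produced above is a plain COCO-format dict. A minimal hand-built equivalent, with made-up bbox values, looks like:

```python
coco_dict = {
    "images": [
        {"id": 1, "file_name": "image1.jpg", "height": 1080, "width": 1920},
    ],
    "categories": [
        {"id": 0, "name": "human"},
        {"id": 1, "name": "vehicle"},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 0,
            "bbox": [100, 200, 50, 120],  # [x_min, y_min, width, height]
            "area": 50 * 120,
            "iscrowd": 0,
        },
    ],
}
```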
Convert COCO dataset to ultralytics/yolov5 format:
```python
from sahi.utils.coco import Coco

# init Coco object
coco = Coco.from_coco_dict_or_path("coco.json", image_dir="coco_images/")

# export converted YoloV5 formatted dataset into given output_dir with an 85% train / 15% val split
coco.export_as_yolov5(
    output_dir="output/folder/dir",
    train_split_rate=0.85
)
```
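The train/val split can be sketched generically as a shuffle followed by a cut at the split rate. This is my own illustration of the idea, not sahi's implementation (names and seed are arbitrary):

```python
import random

def split_images(image_names, train_split_rate=0.85, seed=13):
    """Shuffle a list of image names and split it into train/val subsets."""
    names = list(image_names)
    random.Random(seed).shuffle(names)
    num_train = int(len(names) * train_split_rate)
    return names[:num_train], names[num_train:]

train, val = split_images([f"img_{i}.jpg" for i in range(100)])
# 85 train images, 15 val images
```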
Get dataset stats:
```python
from sahi.utils.coco import Coco

# init Coco object
coco = Coco.from_coco_dict_or_path("coco.json")

# get dataset stats
coco.stats
```
```
{
  'num_images': 6471,
  'num_annotations': 343204,
  'num_categories': 2,
  'num_negative_images': 0,
  'num_images_per_category': {'human': 5684, 'vehicle': 6323},
  'num_annotations_per_category': {'human': 106396, 'vehicle': 236808},
  'min_num_annotations_in_image': 1,
  'max_num_annotations_in_image': 902,
  'avg_num_annotations_in_image': 53.037243084530985,
  'min_annotation_area': 3,
  'max_annotation_area': 328640,
  'avg_annotation_area': 2448.405738278109,
  'min_annotation_area_per_category': {'human': 3, 'vehicle': 3},
  'max_annotation_area_per_category': {'human': 72670, 'vehicle': 328640},
}
```
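Most of these statistics can be derived directly from a raw COCO dict. A simplified sketch covering a few of the fields, assuming standard COCO keys (the function name is my own):

```python
def coco_stats(coco_dict):
    """Compute a few summary statistics from a plain COCO-format dict."""
    anns = coco_dict["annotations"]
    # count annotations per image
    per_image = {}
    for ann in anns:
        per_image[ann["image_id"]] = per_image.get(ann["image_id"], 0) + 1
    # bbox is [x_min, y_min, width, height], so area = width * height
    areas = [ann["bbox"][2] * ann["bbox"][3] for ann in anns]
    return {
        "num_images": len(coco_dict["images"]),
        "num_annotations": len(anns),
        "num_categories": len(coco_dict["categories"]),
        "max_num_annotations_in_image": max(per_image.values(), default=0),
        "min_annotation_area": min(areas, default=0),
        "max_annotation_area": max(areas, default=0),
    }
```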
Find detailed info on COCO utilities (yolov5 conversion, slicing, subsampling, filtering, merging, splitting) at COCO.md.
MOT Challenge Utilities
MOT Challenge formatted ground truth dataset creation:
- import required classes:
```python
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
```
- init video:
```python
mot_video = MotVideo(name="sequence_name")
```
- init first frame:
```python
mot_frame = MotFrame()
```
- add annotations to the frame:
```python
mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height])
)
mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height])
)
```
- add the frame to the video:
```python
mot_video.add_frame(mot_frame)
```
- export in MOT challenge format:
```python
mot_video.export(export_dir="mot_gt", type="gt")
```
- your MOT challenge formatted ground truth files are ready under the mot_gt/sequence_name/ folder.
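MOT challenge ground truth files are plain CSV text, one object per line, conventionally: frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility. Formatting such a row by hand looks roughly like this (a sketch of the file format, not sahi's exporter):

```python
def mot_gt_line(frame_id, track_id, bbox, conf=1, class_id=1, visibility=1):
    """Format one MOTChallenge ground-truth row:
    frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility."""
    x_min, y_min, width, height = bbox
    return f"{frame_id},{track_id},{x_min},{y_min},{width},{height},{conf},{class_id},{visibility}"

line = mot_gt_line(1, 7, [100, 200, 50, 120])
# → "1,7,100,200,50,120,1,1,1"
```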
Find detailed info on MOT utilities (ground truth dataset creation, exporting tracker metrics in mot challenge format) at MOT.md.
Contributing
The `sahi` library currently supports all YOLOv5 models and MMDetection models, and it is easy to add new frameworks.
All you need to do is create a new class in model.py that implements the DetectionModel class. You can take the MMDetection wrapper or the YOLOv5 wrapper as a reference.
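A new framework wrapper would broadly follow the shape below. Everything here is a hypothetical sketch of the wrapper pattern; the real DetectionModel interface in model.py may differ in names and signatures:

```python
from abc import ABC, abstractmethod

class DetectionModel(ABC):
    """Hypothetical sketch of a detection-model wrapper base class."""
    def __init__(self, model_path, config_path=None, device="cpu"):
        self.model_path = model_path
        self.config_path = config_path
        self.device = device
        self.model = None

    @abstractmethod
    def load_model(self):
        """Load framework-specific weights into self.model."""

    @abstractmethod
    def perform_inference(self, image):
        """Run the underlying model on an image and return raw predictions."""

class MyFrameworkDetectionModel(DetectionModel):
    def load_model(self):
        # placeholder for framework-specific model loading
        self.model = object()

    def perform_inference(self, image):
        # placeholder for the framework-specific forward pass
        return []
```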
Contributors
File details
Details for the file sahi-0.5.1.tar.gz.
File metadata
- Download URL: sahi-0.5.1.tar.gz
- Size: 54.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.1 importlib_metadata/4.6.0 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.1 CPython/3.9.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | 89f625a6ddc71054dc22a15ef85596fb261978c45cdac7ab33a8461c40fa6437
MD5 | 8ea5148e97c6f2079721342f692eb19b
BLAKE2b-256 | 49663b9664219773ed73d19774001fc49c2e61bd29ac542f0638caacc3d95b5f
File details
Details for the file sahi-0.5.1-py3-none-any.whl.
File metadata
- Download URL: sahi-0.5.1-py3-none-any.whl
- Size: 57.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.1 importlib_metadata/4.6.0 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.1 CPython/3.9.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | 38c315122c92f640bafc9a1950e2a743c18d466b7f4c41f43ada697215a1c1e6
MD5 | 6939986cfe2d0cdbbaf2666a3c57caed
BLAKE2b-256 | 60c12644d6c5c18541ca7cadb02479d6582405d83e05e580946aeb439983fb9e