
SAHI: Slicing Aided Hyper Inference

A vision library for performing sliced inference on large images/small objects.


Overview

Object detection and instance segmentation are among the most important application areas in computer vision. However, detecting small objects and running inference on large images remain major challenges in practical usage. SAHI helps developers overcome these real-world problems.

Getting Started

Blogpost

Check the official SAHI blog post.

Installation
  • Install sahi using pip:
pip install sahi
  • On Windows, Shapely needs to be installed via Conda:
conda install -c conda-forge shapely
  • Install your desired version of pytorch and torchvision:
pip install torch torchvision
  • Install your desired detection framework (such as mmdet or yolov5):
pip install mmdet mmcv
pip install yolov5
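
To confirm the installation, you can print the package version (assuming sahi exposes __version__, as most pip packages do):

python -c "import sahi; print(sahi.__version__)"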

Usage

From Python:
  • Sliced inference:
from sahi.predict import get_sliced_prediction

# detection_model is a sahi detection model wrapper (see the sketch below)
result = get_sliced_prediction(
    image,
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
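
Here detection_model is an instance of one of sahi's detection model wrappers. A minimal sketch, assuming the Yolov5DetectionModel wrapper defined in model.py (the class name and constructor arguments are assumptions; check your installed version):

from sahi.model import Yolov5DetectionModel  # wrapper class assumed from model.py

detection_model = Yolov5DetectionModel(
    model_path="path/to/yolov5/weights.pt",  # path to your trained weights
    confidence_threshold=0.4,                # filter out low-confidence detections
    device="cuda:0",                         # or "cpu"
)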

Check YOLOv5 + SAHI demo: Open In Colab

Check MMDetection + SAHI demo: Open In Colab

  • Slice an image:
from sahi.slicing import slice_image

slice_image_result, num_total_invalid_segmentation = slice_image(
    image=image_path,
    output_file_name=output_file_name,
    output_dir=output_dir,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
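
Judging by its name, the second return value counts segmentations that were dropped as invalid during slicing; it can be inspected directly:

# report how many segmentations were filtered out as invalid during slicing
print(f"{num_total_invalid_segmentation} invalid segmentations were filtered out")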
  • Slice a coco formatted dataset:
from sahi.slicing import slice_coco

coco_dict, coco_path = slice_coco(
    coco_annotation_file_path=coco_annotation_file_path,
    image_dir=image_dir,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
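
If needed, the returned coco_dict can be written to disk with the same save_json helper used later in this document (the file name below is illustrative):

from sahi.utils.file import save_json

# persist the sliced dataset annotations as a COCO json file
save_json(coco_dict, "sliced_coco.json")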

Refer to slicing notebook for detailed usage.

From CLI:
python scripts/predict.py --source image/file/or/folder --model_path path/to/model --config_path path/to/config

will perform sliced inference with default parameters and export the prediction visuals to the runs/predict/exp folder.

You can specify sliced inference parameters as:

python scripts/predict.py --slice_width 256 --slice_height 256 --overlap_height_ratio 0.1 --overlap_width_ratio 0.1 --iou_thresh 0.25 --source image/file/or/folder --model_path path/to/model --config_path path/to/config
  • Specify the postprocess type to be applied over sliced predictions as --postprocess_type UNIONMERGE or --postprocess_type NMS

  • Specify the postprocess match metric as --match_metric IOS for intersection over the smaller area or --match_metric IOU for intersection over union

  • Specify the postprocess match threshold as --match_thresh 0.5

  • Add the --class_agnostic argument to ignore the category ids of the predictions during postprocess (merging/NMS)

  • To export prediction pickles and cropped predictions, add the --pickle and --crop arguments. To change the crop image format, set --visual_export_format JPG.

  • If you don't want to export prediction visuals, add the --novisual argument.

  • If you want to perform standard prediction instead of sliced prediction, add the --standard_pred argument.

python scripts/predict.py --coco_file path/to/coco/file --source coco/images/directory --model_path path/to/model --config_path path/to/config

will perform inference using the provided COCO file, then export the results as a COCO json file to runs/predict/exp/results.json.

Find detailed info on script usage (predict, coco2yolov5, coco_error_analysis) at SCRIPTS.md.

COCO Utilities

COCO dataset creation:
  • import required classes:
from sahi.utils.coco import Coco, CocoCategory, CocoImage, CocoAnnotation
  • init Coco object:
coco = Coco()
  • add categories starting from id 0:
coco.add_category(CocoCategory(id=0, name='human'))
coco.add_category(CocoCategory(id=1, name='vehicle'))
  • create a coco image:
coco_image = CocoImage(file_name="image1.jpg", height=1080, width=1920)
  • add annotations to coco image:
coco_image.add_annotation(
  CocoAnnotation(
    bbox=[x_min, y_min, width, height],
    category_id=0,
    category_name='human'
  )
)
coco_image.add_annotation(
  CocoAnnotation(
    bbox=[x_min, y_min, width, height],
    category_id=1,
    category_name='vehicle'
  )
)
  • add coco image to Coco object:
coco.add_image(coco_image)
  • after adding all images, convert coco object to coco json:
coco_json = coco.json
  • you can export it as json file:
from sahi.utils.file import save_json

save_json(coco_json, "coco_dataset.json")
Convert COCO dataset to ultralytics/yolov5 format:
from sahi.utils.coco import Coco

# init Coco object
coco = Coco.from_coco_dict_or_path("coco.json", image_dir="coco_images/")

# export the converted YOLOv5 formatted dataset into the given output_dir with an 85% train / 15% val split
coco.export_as_yolov5(
  output_dir="output/folder/dir",
  train_split_rate=0.85
)
Get dataset stats:
from sahi.utils.coco import Coco

# init Coco object
coco = Coco.from_coco_dict_or_path("coco.json")

# get dataset stats
coco.stats
{
  'num_images': 6471,
  'num_annotations': 343204,
  'num_categories': 2,
  'num_negative_images': 0,
  'num_images_per_category': {'human': 5684, 'vehicle': 6323},
  'num_annotations_per_category': {'human': 106396, 'vehicle': 236808},
  'min_num_annotations_in_image': 1,
  'max_num_annotations_in_image': 902,
  'avg_num_annotations_in_image': 53.037243084530985,
  'min_annotation_area': 3,
  'max_annotation_area': 328640,
  'avg_annotation_area': 2448.405738278109,
  'min_annotation_area_per_category': {'human': 3, 'vehicle': 3},
  'max_annotation_area_per_category': {'human': 72670, 'vehicle': 328640},
}
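
Since the returned stats is a plain dict, individual fields can be read directly:

stats = coco.stats
# e.g. print the image and annotation counts
print(f"{stats['num_images']} images, {stats['num_annotations']} annotations")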

Find detailed info on COCO utilities (yolov5 conversion, slicing, subsampling, filtering, merging, splitting) at COCO.md.

MOT Challenge Utilities

MOT Challenge formatted ground truth dataset creation:
  • import required classes:
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
  • init video:
mot_video = MotVideo(name="sequence_name")
  • init first frame:
mot_frame = MotFrame()
  • add annotations to frame:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)
  • add frame to video:
mot_video.add_frame(mot_frame)
  • export in MOT challenge format:
mot_video.export(export_dir="mot_gt", type="gt")
  • your MOT challenge formatted ground truth files are ready under the mot_gt/sequence_name/ folder.
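
In practice, the frame steps above are repeated for every frame of the sequence. A minimal sketch, assuming detections_per_frame is a hypothetical nested list holding the [x_min, y_min, width, height] boxes of each frame in order:

from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo

mot_video = MotVideo(name="sequence_name")

# detections_per_frame: hypothetical list; each item holds the boxes of one frame
for frame_boxes in detections_per_frame:
    mot_frame = MotFrame()
    for bbox in frame_boxes:
        mot_frame.add_annotation(MotAnnotation(bbox=bbox))
    mot_video.add_frame(mot_frame)

mot_video.export(export_dir="mot_gt", type="gt")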

Find detailed info on MOT utilities (ground truth dataset creation, exporting tracker metrics in mot challenge format) at MOT.md.

Contributing

The sahi library currently supports all YOLOv5 models and MMDetection models. Moreover, it is easy to add support for a new detection framework.

All you need to do is create a new class in model.py that implements the DetectionModel class. You can take the MMDetection wrapper or the YOLOv5 wrapper as a reference.
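
A rough, hedged sketch of such a wrapper (the base class import path and the exact methods to override are assumptions; check the actual DetectionModel interface in model.py):

from sahi.model import DetectionModel  # base class location assumed

class MyFrameworkDetectionModel(DetectionModel):
    # Hypothetical wrapper for a new detection framework.

    def load_model(self):
        # Load the underlying model using self.model_path / self.config_path
        # and keep a reference to it for later inference calls.
        ...

    def perform_inference(self, image):
        # Run the underlying model on a numpy image and store the raw
        # outputs so they can later be converted to sahi object predictions.
        ...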

Contributors
