A set of easy-to-use utils that will come in handy in any Computer Vision project
👋 hello
We write your reusable computer vision tools. Whether you need to load your dataset from your hard drive, draw detections on an image or video, or count how many detections are in a zone, you can count on us! 🤝
💻 install
Pip install the supervision package in a Python>=3.8,<=3.11 environment.
pip install supervision[desktop]
Read more about desktop, headless, and local installation in our guide.
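Working on a server or in a notebook without a display? The base package can be installed without the desktop extra; the plain command below is a hedged suggestion, so check the installation guide above for the extras available in your version.

pip install supervision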
🔥 quickstart
detections processing
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> model = YOLO('yolov8s.pt')
>>> result = model(IMAGE)[0]
>>> detections = sv.Detections.from_ultralytics(result)
>>> len(detections)
5
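The same model can annotate an entire video, frame by frame. The sketch below is only an illustration of that workflow: sv.process_video, sv.BoxAnnotator, and the video paths are assumptions based on the current API rather than part of this quickstart.

>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> model = YOLO('yolov8s.pt')
>>> box_annotator = sv.BoxAnnotator()

>>> def callback(frame: np.ndarray, index: int) -> np.ndarray:
...     # run the detector on a single frame and draw the detections on a copy of it
...     result = model(frame)[0]
...     detections = sv.Detections.from_ultralytics(result)
...     return box_annotator.annotate(scene=frame.copy(), detections=detections)

>>> sv.process_video(
...     source_path='source_video.mp4',     # hypothetical input path
...     target_path='annotated_video.mp4',  # hypothetical output path
...     callback=callback
... )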
👉 more detections utils
- Easily switch inference pipeline between supported object detection/instance segmentation models

>>> import supervision as sv
>>> from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

>>> sam = sam_model_registry[MODEL_TYPE](checkpoint=CHECKPOINT_PATH).to(device=DEVICE)
>>> mask_generator = SamAutomaticMaskGenerator(sam)
>>> sam_result = mask_generator.generate(IMAGE)
>>> detections = sv.Detections.from_sam(sam_result=sam_result)
- Filter detections by class id, confidence, or area

>>> detections = detections[detections.class_id == 0]
>>> detections = detections[detections.confidence > 0.5]
>>> detections = detections[detections.area > 1000]
- Image annotation

>>> import supervision as sv

>>> box_annotator = sv.BoxAnnotator()
>>> annotated_frame = box_annotator.annotate(
...     scene=IMAGE,
...     detections=detections
... )
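And to count how many of those detections fall inside a zone, a polygon zone can be triggered with the detections from above. Treat this as a sketch: sv.PolygonZone, its frame_resolution_wh argument, trigger, and current_count reflect the API as we understand it, and the polygon coordinates and frame size are made-up placeholders.

>>> import numpy as np
>>> import supervision as sv

>>> # hypothetical zone and frame resolution - replace with values matching your image
>>> polygon = np.array([[100, 100], [100, 600], [600, 600], [600, 100]])
>>> zone = sv.PolygonZone(polygon=polygon, frame_resolution_wh=(1280, 720))

>>> in_zone = zone.trigger(detections=detections)  # boolean mask: which detections are inside the zone
>>> zone.current_count                             # number of detections currently inside the zone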
datasets processing
>>> import supervision as sv
>>> dataset = sv.DetectionDataset.from_yolo(
... images_directory_path='...',
... annotations_directory_path='...',
... data_yaml_path='...'
... )
>>> dataset.classes
['dog', 'person']
>>> len(dataset)
1000
👉 more dataset utils
- Load object detection/instance segmentation datasets in one of the supported formats

>>> dataset = sv.DetectionDataset.from_yolo(
...     images_directory_path='...',
...     annotations_directory_path='...',
...     data_yaml_path='...'
... )

>>> dataset = sv.DetectionDataset.from_pascal_voc(
...     images_directory_path='...',
...     annotations_directory_path='...'
... )

>>> dataset = sv.DetectionDataset.from_coco(
...     images_directory_path='...',
...     annotations_path='...'
... )
- Loop over dataset entries

>>> for name, image, labels in dataset:
...     print(labels.xyxy)
array([[404.      , 719.      , 538.      , 884.5     ],
       [155.      , 497.      , 404.      , 833.5     ],
       [ 20.154999, 347.825   , 416.125   , 915.895   ]], dtype=float32)
- Split dataset for training, testing, and validation

>>> train_dataset, test_dataset = dataset.split(split_ratio=0.7)
>>> test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)

>>> len(train_dataset), len(test_dataset), len(valid_dataset)
(700, 150, 150)
- Merge multiple datasets

>>> ds_1 = sv.DetectionDataset(...)
>>> len(ds_1)
100
>>> ds_1.classes
['dog', 'person']

>>> ds_2 = sv.DetectionDataset(...)
>>> len(ds_2)
200
>>> ds_2.classes
['cat']

>>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
>>> len(ds_merged)
300
>>> ds_merged.classes
['cat', 'dog', 'person']
- Save object detection/instance segmentation datasets in one of the supported formats

>>> dataset.as_yolo(
...     images_directory_path='...',
...     annotations_directory_path='...',
...     data_yaml_path='...'
... )

>>> dataset.as_pascal_voc(
...     images_directory_path='...',
...     annotations_directory_path='...'
... )

>>> dataset.as_coco(
...     images_directory_path='...',
...     annotations_path='...'
... )
- Convert labels between supported formats

>>> sv.DetectionDataset.from_yolo(
...     images_directory_path='...',
...     annotations_directory_path='...',
...     data_yaml_path='...'
... ).as_pascal_voc(
...     images_directory_path='...',
...     annotations_directory_path='...'
... )
- Load classification datasets in one of the supported formats

>>> cs = sv.ClassificationDataset.from_folder_structure(
...     root_directory_path='...'
... )
- Save classification datasets in one of the supported formats

>>> cs.as_folder_structure(
...     root_directory_path='...'
... )
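Classification datasets can most likely be split for training and testing too; the call below assumes ClassificationDataset.split mirrors the DetectionDataset.split shown above and is not taken from this release's docs.

>>> train_cs, test_cs = cs.split(split_ratio=0.7)
>>> len(train_cs), len(test_cs)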
model evaluation
>>> import numpy as np
>>> import supervision as sv
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... ...
>>> confusion_matrix = sv.ConfusionMatrix.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> confusion_matrix.matrix
array([
    [0., 0., 0., 0.],
    [0., 1., 0., 1.],
    [0., 1., 1., 0.],
    [1., 1., 0., 0.]
])
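For a quick visual sanity check, the matrix can also be plotted; ConfusionMatrix.plot() and its default arguments are assumed here rather than shown in the snippet above.

>>> confusion_matrix.plot()  # renders the confusion matrix as a matplotlib figure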
👉 more metrics
- Mean average precision (mAP) for object detection tasks

>>> import numpy as np
>>> import supervision as sv

>>> dataset = sv.DetectionDataset.from_yolo(...)

>>> def callback(image: np.ndarray) -> sv.Detections:
...     ...

>>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
...     dataset = dataset,
...     callback = callback
... )

>>> mean_average_precision.map50_95
0.433
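Besides map50_95, per-threshold scores should be available as well; the attribute names below (map50, map75) are assumptions based on the current MeanAveragePrecision API.

>>> mean_average_precision.map50  # mAP at IoU threshold 0.50
>>> mean_average_precision.map75  # mAP at IoU threshold 0.75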
🛠️ built with supervision
Did you build something cool using supervision? Let us know!
🎬 tutorials
Accelerate Image Annotation with SAM and Grounding DINO
Discover how to speed up your image annotation process using Grounding DINO and Segment Anything Model (SAM). Learn how to convert object detection datasets into instance segmentation datasets, and see the potential of using these models to automatically annotate your datasets for real-time detectors like YOLOv8...
SAM - Segment Anything Model by Meta AI: Complete Guide
Discover the incredible potential of Meta AI's Segment Anything Model (SAM)! We dive into SAM, an efficient and promptable model for image segmentation, which has revolutionized computer vision tasks. With over 1 billion masks on 11M licensed and privacy-respecting images, SAM's zero-shot performance is often competitive with or even superior to prior fully supervised results...
📚 documentation
Visit our documentation page to learn how supervision can help you build computer vision applications faster and more reliably.
🏆 contribution
We love your input! Please see our contributing guide to get started. Thank you 🙏 to all our contributors!