pbaa: Prompt-Based Automatic Annotation
An easy inference implementation of Grounded-SAM for prompt-based automatic annotation.
Table of Contents
Installation
Usage
Demo
Result data
License
Acknowledgements
Installation
pip install pbaa
Usage
Options
Usage: pbaa [OPTIONS]
Options:
--version Show the version and exit.
-s, --src TEXT Source image or directory path [required]
-p, --prompt <TEXT TEXT>... Space-separated pair of prompt and target
class (may be given multiple times) [required]
-b, --box_threshold FLOAT Threshold for Object Detection (default: 0.25)
-n, --nms_threshold FLOAT Threshold for NMS (default: 0.8)
-o, --output_dir TEXT Path to result data (default: '.')
-h, --help Show this message and exit.
CLI
# pbaa -s <Source> -p <prompt> <class> -p <prompt> <class> ...
pbaa -s source_image.jpg -p "black dog" dog
pbaa -s source_image.jpg -p "black dog" dog -p "white cat" cat
Python
from pbaa import model_init, inference
model_init()
inference("path/to/source_image.jpg", {"black dog": "dog", "white cat": "cat"})
Demo
## Source : assets/demo9.jpg
## prompts : {"plant" : "plant", "picture" : "picture", "dog": "dog", "lamp" : "lamp", "carpet" : "carpet", "sofa" : "sofa"}
pbaa -s assets/demo9.jpg -p plant plant -p picture picture -p dog dog -p lamp lamp -p carpet carpet -p sofa sofa
Origin | Detection | Segmentation (demo result images)
Result data
JSON structure (one entry per detected object, keyed by index):
index
├ cls : class name
├ conf : confidence score
├ box : bounding box coordinates
└ poly : polygon coordinates
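A hedged sketch of reading such a result file, assuming the structure above; the file name result.json is illustrative, while the field names cls, conf, box, and poly follow the description:
import json

# Illustrative file name; pbaa writes its results into the output directory (-o).
with open("result.json", encoding="utf-8") as f:
    results = json.load(f)

# Each top-level key is a detection index; the value holds the annotation fields.
for index, det in results.items():
    print(f"[{index}] class={det['cls']} confidence={det['conf']:.2f}")
    print(f"      box={det['box']}")                   # bounding box coordinates
    print(f"      polygon points={len(det['poly'])}")  # polygon coordinates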
License
pbaa is distributed under the terms of the Apache-2.0 license.
Acknowledgements
Grounded-Segment-Anything : https://github.com/IDEA-Research/Grounded-Segment-Anything
Grounding DINO : https://github.com/IDEA-Research/GroundingDINO
Segment-anything : https://github.com/facebookresearch/segment-anything