[CLIP ODS] CLIP Object Detection & Segmentation
This repo is a simple add-on over OpenAI's CLIP for unsupervised, zero-shot object detection. You can search for bounding boxes of objects using natural language understanding: no predefined classes, only text.
INSTALLING
pip install clip_ods
Minimal Example
import torch
import gdown
import matplotlib.pyplot as plt
from PIL import Image
from clip_ods import clip, CLIPDetectorV0
from clip_ods.utils import get_anchor_coords
COUNT_W, COUNT_H = 13, 13  # anchor grid size: 13x13 = 169 candidate regions
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model, preprocess = clip.load("RN50x4", device=device)
clip_detector = CLIPDetectorV0(model, preprocess, device)
img_path = 'example5.png'
# Fetch the example image (the URL had an f-string prefix but no placeholders;
# downloading straight to img_path guarantees the opened file matches).
gdown.download('https://drive.google.com/uc?id=1nMPyWquE7U7_fuh0Rk4ZGgeWAtCFEqi8', img_path, quiet=True)
img = Image.open(img_path)
coords = get_anchor_coords(img, count_w=COUNT_W, count_h=COUNT_H)  # grid of anchor boxes
anchor_features = clip_detector.get_anchor_features(img, coords)   # CLIP features per anchor region
res_img, res, _ = clip_detector.detect_by_text(
    texts=['green leaves', 'leaves', 'foliage', 'green foliage'],
    coords=coords,
    anchor_features=anchor_features,
    img=img,  # reuse the image opened above instead of reopening it
    proba_thr=0.8,
)
plt.figure(figsize=(8, 8), dpi=120, facecolor='w', edgecolor='k')
plt.imshow(res_img)
plt.show()
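Conceptually, this style of zero-shot detection boils down to comparing embeddings of image regions against a text embedding and keeping the best-scoring anchors. A toy numpy sketch of that idea (not the library's actual code; `make_anchor_boxes` stands in for `get_anchor_coords`, whose exact output format is assumed, and the 4-d "embeddings" are random placeholders for real CLIP features):

```python
import numpy as np

def make_anchor_boxes(width, height, count_w, count_h):
    """Split an image into a count_w x count_h grid of (x1, y1, x2, y2) boxes."""
    xs = np.linspace(0, width, count_w + 1)
    ys = np.linspace(0, height, count_h + 1)
    return [(xs[i], ys[j], xs[i + 1], ys[j + 1])
            for j in range(count_h) for i in range(count_w)]

def score_anchors(anchor_features, text_feature):
    """Cosine similarity between each anchor's embedding and the text embedding."""
    a = anchor_features / np.linalg.norm(anchor_features, axis=1, keepdims=True)
    t = text_feature / np.linalg.norm(text_feature)
    return a @ t

boxes = make_anchor_boxes(416, 416, 13, 13)            # 169 anchors, like the 13x13 grid above
feats = np.random.RandomState(0).randn(len(boxes), 4)  # toy stand-in for CLIP features
query = feats[42]                                      # pretend the text embedding matches anchor 42
scores = score_anchors(feats, query)
best = int(np.argmax(scores))                          # anchor 42 scores highest (cosine = 1.0)
```

The real `detect_by_text` additionally thresholds scores with `proba_thr` and draws the surviving boxes onto the image; the sketch only covers the scoring step.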
Download files
Source Distribution
clip_ods-0.0.1rc0.tar.gz (1.5 MB)
Built Distribution
clip_ods-0.0.1rc0-py3-none-any.whl

Hashes for clip_ods-0.0.1rc0-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 542a9a9a1f9174a923e24ab7cf90b67e563e8b63c7fc2f9c87dd6b35bfd80281
MD5 | 24735ee90366e9eb2fecabf7a698cb83
BLAKE2b-256 | c4a1a97a3bd0819dbcb91fc93f26399aaa94a85cccb490245758a338b4d9c68e
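To check a downloaded file against the digests above, a short stdlib sketch (the file name in the usage comment is the wheel listed above; run it from the directory containing the download):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Usage: compare against the SHA256 row of the table above.
# assert sha256_of('clip_ods-0.0.1rc0-py3-none-any.whl') == \
#     '542a9a9a1f9174a923e24ab7cf90b67e563e8b63c7fc2f9c87dd6b35bfd80281'
```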