[CLIP ODS] CLIP Object Detection & Segmentation
This repo is a simple add-on over OpenAI's CLIP for unsupervised, zero-shot object detection. You can search for bounding boxes of objects using natural language understanding - no classes, only text.
INSTALLING
pip install clip_ods
Minimal Example
import torch
import gdown
import matplotlib.pyplot as plt
from PIL import Image

from clip_ods import clip, CLIPDetectorV0
from clip_ods.utils import get_anchor_coords

# Size of the anchor grid (a 13x13 grid of anchor boxes over the image)
COUNT_W, COUNT_H = 13, 13

device = torch.device('cuda:0')
model, preprocess = clip.load("RN50x4", device=device)
clip_detector = CLIPDetectorV0(model, preprocess, device)

# Download an example image
img_path = 'example5.png'
gdown.download('https://drive.google.com/uc?id=1nMPyWquE7U7_fuh0Rk4ZGgeWAtCFEqi8', img_path, quiet=True)

img = Image.open(img_path)

# Build the anchor grid and precompute CLIP features for every anchor crop
coords = get_anchor_coords(img, count_w=COUNT_W, count_h=COUNT_H)
anchor_features = clip_detector.get_anchor_features(img, coords)

# Find regions matching any of the text queries
res_img, res, _ = clip_detector.detect_by_text(
    texts=['green leaves', 'leaves', 'foliage', 'green foliage'],
    coords=coords,
    anchor_features=anchor_features,
    img=img,
    proba_thr=0.8,
)

plt.figure(figsize=(8, 8), dpi=120, facecolor='w', edgecolor='k')
plt.imshow(res_img)
plt.show()
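The idea behind detect_by_text can be sketched without CLIP itself: each anchor crop has a precomputed image feature, the text query has a text feature, and anchors whose similarity to the text clears a threshold are kept. The helper below is a minimal illustrative sketch of that scoring step; the function name and the min-max normalisation are assumptions for illustration, not clip_ods internals.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def detect_by_text_sketch(anchor_features, text_feature, coords, proba_thr=0.8):
    # Illustrative sketch (assumed, not the library's actual code):
    # score every anchor crop against the text feature, min-max
    # normalise the scores, and keep anchors above proba_thr.
    scores = [cosine(f, text_feature) for f in anchor_features]
    lo, hi = min(scores), max(scores)
    probs = [(s - lo) / (hi - lo + 1e-9) for s in scores]
    return [(box, p) for box, p in zip(coords, probs) if p >= proba_thr]
```

With unit-length toy features, an anchor pointing in the same direction as the text feature scores near 1.0 and survives the threshold, while an orthogonal anchor scores near 0.0 and is dropped.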
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution: clip_ods-0.0.1rc1.tar.gz (1.4 MB)
Built Distribution: clip_ods-0.0.1rc1-py3-none-any.whl
Hashes for clip_ods-0.0.1rc1-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | ef9c08a199f36023ae235fc672166ef7bf96a5172e47dc6d7625bc63ad986b54
MD5 | 4a1c65db2a80472238fe327d590572c3
BLAKE2b-256 | bc612240b0ae0aff6e044e2232b73e5b5c2614aa524a05cf0d031083b3b00c39