
Extract sections from your image using OpenAI's CLIP and Facebook's Detr, implemented on HuggingFace Transformers

Project description

clipcrop

  • Extract sections of images from your image using OpenAI's CLIP and Facebook's Detr, implemented on HuggingFace Transformers
  • Added a new segmentation capability using CLIP and Detr segmentation models

Why Detr?

Facebook's Detr is one of the most effective object detection algorithms developed in recent years. It stands for Detection Transformer and is effectively a CNN backbone followed by Transformer encoders and decoders. It uses a bipartite matching loss to match the objects detected in an image against the ground truth, and reasons about each prediction in the context of the global image. Images are processed by the CNN and encoder layers to output hidden states of shape [number of images, seq_length, d_model]; object queries of shape [number of images, number of objects, d_model] are then sent through the decoders to produce the logits for classification and through an MLP for regression (bounding boxes). Below are reasons why you should prefer Detr over some popular algorithms:

  • It's a single-stage detector whose efficiency is on par with, or better than, two-stage detectors like RCNN and Fast RCNN.
  • Compared to YOLO and SSD, which are also single-stage detectors, Detr performs detection on the whole image rather than on a grid of image cells as in YOLO.
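The bipartite matching mentioned above can be sketched with the Hungarian algorithm. The cost values below are made up purely for illustration; in DETR the cost actually combines classification probability with L1 and generalized-IoU box terms.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy cost matrix: rows are predicted boxes, columns are ground-truth
# objects. Lower cost means a better match.
cost = np.array([
    [0.1, 0.9, 0.8],
    [0.7, 0.2, 0.9],
    [0.8, 0.6, 0.3],
])

# Hungarian algorithm: each prediction is matched to exactly one
# ground-truth object, minimizing the total cost.
pred_idx, gt_idx = linear_sum_assignment(cost)
matches = list(zip(pred_idx.tolist(), gt_idx.tolist()))
print(matches)  # [(0, 0), (1, 1), (2, 2)]
```

The one-to-one assignment is what lets DETR drop the non-maximum-suppression post-processing that grid-based detectors rely on.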

Installation

pip install clipcrop

Clip Crop

Extract sections of images from your image using OpenAI's CLIP and Facebook's Detr, implemented on HuggingFace Transformers (inspired by @vijishmadhavan).

Implementation

from clipcrop import clipcrop
clipc = clipcrop.ClipCrop("/content/nm.jpg", "woman in white frock")
DFE, DM, CLIPM, CLIPP = clipc.load_models()
result = clipc.extract_image(DFE, DM, CLIPM, CLIPP)
# returns a list of dictionaries with the top images and their relative similarity
# scores; pass num=5 when initiating the class to get the top 5, etc.
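Under the hood, CLIP embeds the text query and each detected crop into a shared vector space and ranks the crops by cosine similarity. The sketch below uses made-up 3-d embeddings (the names `crop_a`/`crop_b` are hypothetical) just to show the ranking step:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in embeddings: in practice these come from CLIP's text and
# image encoders.
text_emb = np.array([1.0, 0.0, 0.0])
crop_embs = {
    "crop_a": np.array([0.9, 0.1, 0.0]),
    "crop_b": np.array([0.1, 0.9, 0.0]),
}

# Rank crops by similarity to the query, best match first.
ranked = sorted(crop_embs, key=lambda k: cosine(crop_embs[k], text_emb),
                reverse=True)
print(ranked)  # ['crop_a', 'crop_b']
```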

Captcha

Solve captcha images using CLIP and object detection models.

from clipcrop import clipcrop
# second argument is the text prompt, e.g. "image of cars"
clipc = clipcrop.ClipCrop(image_path, "image of cars")
# load models, processors, and feature extractors
DFE, DM, CLIPM, CLIPP = clipc.load_models()
# generally keep a high threshold to avoid noise
result = clipc.captcha(CLIPM, CLIPP, 4)
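The reason a high threshold matters is simple score filtering: low-confidence boxes are usually noise on a cluttered captcha grid. A minimal sketch with made-up detections:

```python
# Toy detections: each dict holds a bounding box and a confidence score.
detections = [
    {"box": (0, 0, 50, 50), "score": 0.95},
    {"box": (60, 0, 110, 50), "score": 0.97},
    {"box": (0, 60, 50, 110), "score": 0.40},  # likely noise
]

# A strict cutoff keeps only confident boxes.
threshold = 0.9
kept = [d for d in detections if d["score"] >= threshold]
print(len(kept))  # 2
```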

Clip Segmentation

Segment images using the Detr panoptic segmentation pipeline, then leverage CLIP models to arrive at the most probable segment for your query.

Implementation

from clipcrop import clipcrop
clipseg = clipcrop.ClipSeg("/content/input.png", "black colored car")
segmentor, clipmodel, clipprocessor = clipseg.load_models()
result = clipseg.segment_image(segmentor, clipmodel, clipprocessor)
# returns a list of dictionaries with the top images and their relative similarity
# scores; pass num=5 to get the top 5, etc.
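A panoptic segmentation model produces an integer map that assigns each pixel a segment id; each segment is then cut out onto a blank canvas before CLIP scores it against the query. The helper below (`cut_segment` is a hypothetical name, not part of the clipcrop API) sketches that cut-out step:

```python
import numpy as np

def cut_segment(image, seg_map, seg_id):
    # Copy only the pixels belonging to one segment onto a black canvas.
    out = np.zeros_like(image)
    mask = seg_map == seg_id
    out[mask] = image[mask]
    return out

# Toy 4x4 RGB image and a panoptic map with three segment ids.
image = np.full((4, 4, 3), 200, dtype=np.uint8)
seg_map = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [2, 2, 2, 2],
                    [2, 2, 2, 2]])

crop = cut_segment(image, seg_map, 1)
print(crop[0, 0].tolist(), crop[0, 2].tolist())  # [0, 0, 0] [200, 200, 200]
```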

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

clipcrop-2.4.2.tar.gz (6.3 kB view details)

Uploaded Source

File details

Details for the file clipcrop-2.4.2.tar.gz.

File metadata

  • Download URL: clipcrop-2.4.2.tar.gz
  • Upload date:
  • Size: 6.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.3

File hashes

Hashes for clipcrop-2.4.2.tar.gz:

  • SHA256: d06a5d5666b8b01eb6f09d24684d3850e4b4849c2878072ffb82592dd523c9f0
  • MD5: 5630e6d6bbbd1a825847096ae3327d35
  • BLAKE2b-256: b6cec50f499bb314c83069f5b71c74aaa0a335c0efafda130d9670dbc9e129f6


