A package for CLIP-guided diffusion
SAM + CLIP + DIFFUSION
This pipeline was a bit hard to run and follow, so I wrote a simple package that lets people use it at a higher level of abstraction. It is based on the repositories listed in the Citation section.
Installation
pip install samclipdiffusion
Usage
Download the SAM checkpoint first (prefix the command with `!` if running in a notebook cell):

wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth

from samclipdiffusion import ImageSegmenter

segmenter_inp = ImageSegmenter()

image_path = 'image_path'        # path to the input image
search_text = "target object"    # text describing the object to segment
prompt = "how to modify"         # diffusion prompt describing the desired edit

inpainted_image = segmenter_inp.inpaint_image(image_path, search_text, prompt)
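Under the hood, the pipeline (per the notebooks cited below) segments the image with SAM, ranks the resulting mask crops against `search_text` using CLIP image/text embeddings, and inpaints the best-matching region with a diffusion model guided by `prompt`. The mask-ranking step can be sketched as follows; the embeddings here are random stand-ins for real CLIP features, and `rank_masks_by_similarity` is a hypothetical helper for illustration, not part of the package's API.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_masks_by_similarity(mask_embeddings, text_embedding):
    # Return mask indices sorted from most to least similar to the text query.
    scores = [cosine_similarity(e, text_embedding) for e in mask_embeddings]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Random stand-ins for CLIP embeddings of three SAM mask crops and one text query.
rng = np.random.default_rng(0)
mask_embeddings = [rng.normal(size=512) for _ in range(3)]
text_embedding = rng.normal(size=512)

# The top-ranked index is the mask region that gets inpainted.
best = rank_masks_by_similarity(mask_embeddings, text_embedding)[0]
print(best)
```

With real CLIP features, each mask crop would be embedded with the image encoder and `search_text` with the text encoder before ranking.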
Examples
Examples with and without installing the package.
Citation
This package is based on the following repositories; I merged them and made some changes to get them working together:

- https://github.com/IDEA-Research/Grounded-Segment-Anything/blob/main/grounded_sam.ipynb
- https://github.com/maxi-w/CLIP-SAM/blob/main/main.ipynb
TODO
- Add more examples
- Optimize and make it memory efficient; it's awful right now
- Add better images to the examples :D
Hashes for samclipdiffusion-0.0.3b0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 159fab398d3f38d7f825f91c0aaa692df3fa9b61755d5c163ff7b82b2e6d7a70
MD5 | 11792a527c4467e74ab16ce71cc9d518
BLAKE2b-256 | 074c940731f56398833af01028ee14c08c9dd382c6c0df076f67f63c59f2fc4f