A package for CLIP-guided diffusion
Project description
SAM + CLIP + DIFFUSION
The original pipeline was somewhat hard to run and follow, so I wrote a simple package that wraps it behind a higher-level interface. It is based on the repositories listed in the Citation section.
Installation
pip install samclipdiffusion
Usage
# Download the SAM ViT-H checkpoint (notebook syntax; use plain `wget` in a shell)
!wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth

from samclipdiffusion import ImageSegmenter

segmenter_inp = ImageSegmenter()

image_path = 'image_path'        # path to the input image
search_text = "target object"    # text query used to select the region to edit
prompt = "how to modify"         # diffusion prompt describing the desired edit

inpainted_image = segmenter_inp.inpaint_image(image_path, search_text, prompt)
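Conceptually, the pipeline segments the image with SAM, scores each candidate mask against `search_text` with CLIP, and inpaints the best-matching region with a diffusion model. The mask-selection step can be sketched as a cosine-similarity argmax over embeddings. This is a minimal illustration with made-up embeddings, not the package's actual code; in the real pipeline the vectors would come from CLIP's image and text encoders:

```python
import numpy as np

def pick_best_mask(mask_embeds: np.ndarray, text_embed: np.ndarray) -> int:
    """Return the index of the mask whose image embedding is most similar
    to the text embedding, by cosine similarity.

    mask_embeds: (n_masks, d) array of per-mask image embeddings.
    text_embed:  (d,) text embedding for the search phrase.
    """
    # Normalize rows and the text vector, then take the argmax of dot products.
    m = mask_embeds / np.linalg.norm(mask_embeds, axis=1, keepdims=True)
    t = text_embed / np.linalg.norm(text_embed)
    return int(np.argmax(m @ t))

# Toy example with three fake 4-D "embeddings":
embeds = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.7, 0.7, 0.0, 0.0]])
text = np.array([0.0, 0.9, 0.1, 0.0])
print(pick_best_mask(embeds, text))  # → 1
```

The selected mask is then handed to the diffusion inpainting model together with `prompt`.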
Examples
Examples are provided both with and without installing the package.
Citation
This package is based on the following repositories; I merged them and made some changes to get them working together.
https://github.com/IDEA-Research/Grounded-Segment-Anything/blob/main/grounded_sam.ipynb
https://github.com/maxi-w/CLIP-SAM/blob/main/main.ipynb
TODO
- Add more examples
- Optimize memory usage; it's very inefficient right now
- Add better images to the examples :D
Hashes for samclipdiffusion-0.0.1b2-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | c10f0dff36f538aa7ef33c0e69ed3d94deb12f9cdcf093784b62776ddead9106
MD5 | 64a33dfd1c6f1939bf035881eb563a61
BLAKE2b-256 | 67128f9de53e8f4101484d1644709ff900d82277cace6f9f76bfe962c709dcd3