ClipQ
An easy-to-use interface for experimenting with OpenAI's CLIP model by encoding image quadrants. By splitting images into quadrants and encoding each with CLIP, we can explore how the model perceives various parts of an image.
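The splitting idea can be sketched without the library itself: the code below divides an image array into an evenly sized grid of tiles, as a minimal illustration of the kind of sectioning ClipQ performs before encoding. The function name and shapes here are illustrative assumptions, not ClipQ's actual API.

```python
import numpy as np

def split_into_grid(image, h_splits, v_splits):
    """Split an (H, W, C) image array into v_splits x h_splits tiles.

    Assumes H is divisible by v_splits and W by h_splits; in ClipQ each
    tile would then be passed to CLIP's image encoder separately.
    """
    h, w = image.shape[:2]
    tile_h, tile_w = h // v_splits, w // h_splits
    tiles = []
    for row in range(v_splits):
        for col in range(h_splits):
            tile = image[row * tile_h:(row + 1) * tile_h,
                         col * tile_w:(col + 1) * tile_w]
            tiles.append(tile)
    return tiles

# A 2x2 split of a 100x100 RGB image yields four 50x50 tiles
image = np.zeros((100, 100, 3), dtype=np.uint8)
tiles = split_into_grid(image, h_splits=2, v_splits=2)
print(len(tiles), tiles[0].shape)  # 4 (50, 50, 3)
```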
Appreciation
- Christopher from LAION for the idea.
- Thanks to OpenAI for the CLIP model.
- Inspiration drawn from various CLIP-related projects in the community.
Installation
Install the package via pip:
pip install clipq
Quickstart
Here's a brief example to get you started:
from clipq.main import CLIPQ
# Initialize the model with a text query
test = CLIPQ(query_text="A photo of a cat")
# Fetch an image from a URL, split it into a 3x3 grid, and encode each section
vectors = test.run_from_url(url="https://picsum.photos/800", h_splits=3, v_splits=3)
# Print the resulting embedding vectors
print(vectors)
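Once you have per-section vectors, a natural next step is comparing them, e.g. to see which parts of the image CLIP considers similar. A minimal sketch using cosine similarity, with toy lists standing in for the real embeddings (I have not verified the exact return type of `run_from_url`):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for two quadrant embeddings
top_left = [0.1, 0.9, 0.0]
top_right = [0.2, 0.8, 0.1]
print(round(cosine_similarity(top_left, top_right), 3))
```

Values close to 1.0 indicate that CLIP encodes the two sections similarly.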
Contributing
- Fork the repository on GitHub.
- Clone the forked repository to your machine.
- Create a new branch with an appropriate name.
- Make your changes and commit with a meaningful commit message.
- Push your changes to your forked repository.
- Create a Pull Request against the original repository.
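The steps above can be sketched as git commands. The repository path and branch name below are placeholders, not the project's actual URLs; replace `<your-username>` with your GitHub handle:

```shell
# 1. Fork the repository on GitHub (via the web UI), then:

# 2. Clone your fork to your machine
git clone https://github.com/<your-username>/clipq.git
cd clipq

# 3. Create a descriptively named branch
git checkout -b fix/quadrant-encoding-docs

# 4. Commit your changes with a meaningful message
git add .
git commit -m "Clarify quadrant encoding in the quickstart"

# 5. Push the branch to your fork
git push origin fix/quadrant-encoding-docs

# 6. Open a Pull Request against the original repository on GitHub.
```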
License
This project is licensed under the MIT License - see the LICENSE file for details.
Todo
- Generate captions using any of the following: OpenCLIP-G, SigLIP-L, or EVA-G
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution: clipq-0.0.4.tar.gz (4.5 kB)
Built Distribution: clipq-0.0.4-py3-none-any.whl (4.5 kB)