
Generate a prompt from an image

Project description

clip-interrogator

Want to figure out what a good prompt might be to create new images like an existing one? The CLIP Interrogator is here to get you answers!

Run it!

🆕 Now available as a Stable Diffusion Web UI Extension! 🆕


Run Version 2 on Colab, HuggingFace, and Replicate!



Version 1 is still available on Colab for comparing different CLIP models


About

The CLIP Interrogator is a prompt engineering tool that combines OpenAI's CLIP and Salesforce's BLIP to optimize text prompts to match a given image. Use the resulting prompts with text-to-image models like Stable Diffusion on DreamStudio to create cool art!

Using as a library

Create and activate a Python virtual environment

python3 -m venv ci_env
(for linux  ) source ci_env/bin/activate
(for windows) .\ci_env\Scripts\activate

Install with PIP

# install torch with GPU support, for example:
pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu117

# install clip-interrogator
pip install clip-interrogator==0.6.0
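
If you want a quick sanity check that the GPU build of torch is active (optional, and not part of clip-interrogator itself):

import torch
print(torch.cuda.is_available())  # prints True if a CUDA-enabled torch build can see a GPU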

You can then use it in your script:

from PIL import Image
from clip_interrogator import Config, Interrogator

# image_path should point to the image you want a prompt for
image = Image.open(image_path).convert('RGB')
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
print(ci.interrogate(image))

CLIP Interrogator uses OpenCLIP, which supports many different pretrained CLIP models. For the best prompts for Stable Diffusion 1.X, use ViT-L-14/openai for clip_model_name. For Stable Diffusion 2.0, use ViT-H-14/laion2b_s32b_b79k.
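
For example, to generate prompts aimed at Stable Diffusion 2.0 you can swap the model name in the snippet above (a minimal sketch; image_path is again a placeholder for your image file):

from PIL import Image
from clip_interrogator import Config, Interrogator

image = Image.open(image_path).convert('RGB')
ci = Interrogator(Config(clip_model_name="ViT-H-14/laion2b_s32b_b79k"))
print(ci.interrogate(image))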

Configuration

The Config object lets you configure CLIP Interrogator's processing.

  • clip_model_name: which of the OpenCLIP pretrained CLIP models to use
  • cache_path: path where the precomputed text embeddings are saved
  • download_cache: when True, download the precomputed embeddings from huggingface
  • chunk_size: batch size for CLIP; use a smaller value for lower VRAM
  • quiet: when True, no progress bars or text output will be displayed

On systems with low VRAM you can call config.apply_low_vram_defaults() to reduce the amount of VRAM needed (at the cost of some speed and quality). The default settings use about 6.3GB of VRAM and the low VRAM settings use about 2.7GB.
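
As a rough sketch of how these options fit together (the cache path and chunk size below are just illustrative values, not defaults):

from clip_interrogator import Config, Interrogator

config = Config(clip_model_name="ViT-L-14/openai")
config.cache_path = "./ci_cache"   # illustrative path for the precomputed text embeddings
config.download_cache = True       # fetch precomputed embeddings from huggingface
config.chunk_size = 1024           # smaller batches need less VRAM
config.quiet = True                # disable progress bars and text output
config.apply_low_vram_defaults()   # optional: about 2.7GB VRAM at some cost in speed and quality
ci = Interrogator(config)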

See run_cli.py and run_gradio.py for more examples of using the Config and Interrogator classes.

Ranking against your own list of terms

from clip_interrogator import Config, Interrogator, LabelTable, load_list
from PIL import Image

# skip loading the caption model since only CLIP ranking is needed
ci = Interrogator(Config(blip_model_type=None))
image = Image.open(image_path).convert('RGB')

# build a label table from your own list of terms (one term per line in terms.txt)
table = LabelTable(load_list('terms.txt'), 'terms', ci)

# rank the terms against the image features and take the best match
best_match = table.rank(ci.image_to_features(image), top_count=1)[0]
print(best_match)
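
rank can also return several candidates at once; assuming the same ci, table, and image as above, something like this should give the closest few terms:

# the five best-matching terms from terms.txt
top_matches = table.rank(ci.image_to_features(image), top_count=5)
print(top_matches)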



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

clip-interrogator-0.6.0.tar.gz (788.9 kB, Source)

Built Distribution

clip_interrogator-0.6.0-py3-none-any.whl (787.8 kB, Python 3)

File details

Details for the file clip-interrogator-0.6.0.tar.gz.

File metadata

  • Download URL: clip-interrogator-0.6.0.tar.gz
  • Upload date:
  • Size: 788.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.16

File hashes

Hashes for clip-interrogator-0.6.0.tar.gz
  • SHA256: e7942372fe9b96181881f7083e3179de746e59b0e3c4199fb3e3e19bef421693
  • MD5: 0494078a1cf78911eb329004c1e9f838
  • BLAKE2b-256: 23d12f0f61c5cbaea3d1480f2eb2709f89d64d62976e9634e7eeaac2e2c03ba2


File details

Details for the file clip_interrogator-0.6.0-py3-none-any.whl.

File metadata

File hashes

Hashes for clip_interrogator-0.6.0-py3-none-any.whl
  • SHA256: cd7c6bf9db170f005b4179e943fc1658aa0f8eebcc75ab3428b0a992aaeabd1c
  • MD5: af6925a6ed62ab6ec82bab0d673f15f1
  • BLAKE2b-256: 3079a75e9129809368b3e3d9b9bc803230ac1cba7d690338f7b0c3ad46107fa3

