pytorch_clip_guided_loss: PyTorch implementation of the CLIP guided loss for Text-To-Image, Image-To-Image, or Image-To-Text generation.
A simple library that implements a CLIP guided loss in PyTorch.
Install package
pip install pytorch_clip_guided_loss
Install the latest version
pip install --upgrade git+https://github.com/bes-dev/pytorch_clip_guided_loss.git
Features
- The library supports multiple prompts (images or texts) as targets for optimization (see the sketch after this list).
- The library automatically detects the language of the input text and translates it via Google Translate when needed.
- The library supports both the original CLIP model by OpenAI and the ruCLIP model by SberAI.
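For example, several prompts can be registered back to back and combined into a single loss. This is a minimal sketch reusing the add_prompt calls from the usage example below; the prompt texts and the random tensor standing in for a real image are illustrative:
import torch
from pytorch_clip_guided_loss import get_clip_guided_loss

loss_fn = get_clip_guided_loss(clip_type="ruclip", input_range=(-1, 1)).eval().requires_grad_(False)
# register multiple targets: two text prompts and one image prompt
loss_fn.add_prompt(text="a watercolor painting of a fox")
loss_fn.add_prompt(text="autumn forest, soft light")
loss_fn.add_prompt(image=torch.randn(1, 3, 224, 224))  # random tensor as a stand-in for a real image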
Usage
Simple code
import torch
from pytorch_clip_guided_loss import get_clip_guided_loss
loss_fn = get_clip_guided_loss(clip_type="ruclip", input_range=(-1, 1)).eval().requires_grad_(False)
# text prompt
loss_fn.add_prompt(text="text description of what we would like to generate")
# image prompt
loss_fn.add_prompt(image=torch.randn(1, 3, 224, 224))
# image tensor to optimize
var = torch.randn(1, 3, 224, 224).requires_grad_(True)
loss = loss_fn(image=var)["loss"]
loss.backward()
print(var.grad)
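Building on the snippet above, here is a minimal sketch of a full pixel-space optimization loop. The optimizer choice, learning rate, and step count are illustrative assumptions, not part of the library's API:
# continue from the snippet above: optimize the image tensor directly
optimizer = torch.optim.Adam([var], lr=0.05)  # optimizer and lr are assumptions
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(image=var)["loss"]
    loss.backward()
    optimizer.step()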
VQGAN-CLIP
We provide a tiny implementation of the VQGAN-CLIP pipeline for image generation as an example of how to use the library. To get started with our VQGAN-CLIP implementation, please follow the documentation.
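At a high level, the pipeline optimizes VQGAN latents rather than raw pixels. The sketch below illustrates the idea only: DummyVQGANDecoder, the latent shape, and the hyperparameters are hypothetical stand-ins, not the actual pipeline interface (see the project documentation for that):
import torch
from pytorch_clip_guided_loss import get_clip_guided_loss

class DummyVQGANDecoder(torch.nn.Module):
    # hypothetical stand-in for a real VQGAN decoder that maps
    # latents to an RGB image in [-1, 1]
    def __init__(self):
        super().__init__()
        self.net = torch.nn.ConvTranspose2d(256, 3, kernel_size=14, stride=14)

    def forward(self, z):
        return torch.tanh(self.net(z))

loss_fn = get_clip_guided_loss(clip_type="ruclip", input_range=(-1, 1)).eval().requires_grad_(False)
loss_fn.add_prompt(text="a fantasy castle at sunset")

decoder = DummyVQGANDecoder().eval().requires_grad_(False)
z = torch.randn(1, 256, 16, 16).requires_grad_(True)  # hypothetical latent shape
optimizer = torch.optim.Adam([z], lr=0.1)
for step in range(200):
    optimizer.zero_grad()
    image = decoder(z)  # decode latents to a 224x224 image
    loss = loss_fn(image=image)["loss"]
    loss.backward()
    optimizer.step()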