VITVQGAN - VECTOR-QUANTIZED IMAGE MODELING WITH IMPROVED VQGAN
Project description
ViT-VQGAN
This is an unofficial implementation of both ViT-VQGAN and RQ-VAE in PyTorch. ViT-VQGAN is a simple ViT-based vector-quantized autoencoder, while RQ-VAE introduces a residual quantization scheme. Further details can be found in the respective papers.
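To make the two schemes concrete, below is a minimal sketch of plain vector quantization and residual quantization in PyTorch. It is illustrative only, written under the assumption of a (batch, tokens, dim) latent layout; the class names, codebook size, and depth are placeholders and do not mirror this package's internal API.

```python
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup, as in a ViT-VQGAN-style autoencoder."""

    def __init__(self, num_codes: int = 8192, dim: int = 32):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def quantize(self, z: torch.Tensor):
        # z: (batch, tokens, dim) -> closest codebook entry per token.
        flat = z.reshape(-1, z.size(-1))
        dists = torch.cdist(flat, self.codebook.weight)    # (B*T, num_codes)
        indices = dists.argmin(dim=-1).view(z.shape[:-1])  # (B, T)
        return self.codebook(indices), indices

    def forward(self, z: torch.Tensor):
        z_q, indices = self.quantize(z)
        # Straight-through estimator: gradients bypass the non-differentiable argmin.
        return z + (z_q - z).detach(), indices


class ResidualQuantizer(nn.Module):
    """RQ-VAE-style scheme: each stage quantizes the residual left by the previous one."""

    def __init__(self, depth: int = 4, num_codes: int = 8192, dim: int = 32):
        super().__init__()
        self.stages = nn.ModuleList(VectorQuantizer(num_codes, dim) for _ in range(depth))

    def forward(self, z: torch.Tensor):
        residual, z_q, codes = z, torch.zeros_like(z), []
        for stage in self.stages:
            q, idx = stage.quantize(residual)
            residual = residual - q  # leftover error passed to the next stage
            z_q = z_q + q            # running sum of stage codes approximates z
            codes.append(idx)
        # Single straight-through step on the summed approximation.
        return z + (z_q - z).detach(), codes


# Toy usage: 2 images, each represented by 16 latent tokens of dimension 32.
tokens = torch.randn(2, 16, 32)
z_q, codes = ResidualQuantizer()(tokens)
```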
Installation
pip install vitvqgan
Training
Train the model:
python -m vitvqgan.train_vim
You can also pass additional options:
python -m vitvqgan.train_vim -c imagenet_vitvq_small -lr 0.00001 -e 10
It uses Imagenette as the training dataset for demo purposes; to change it, modify the dataloader init file.
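As a rough illustration of what swapping the dataset involves, the sketch below builds a standard torchvision DataLoader over an image folder. The path, image size, and batch size are placeholders, and the actual wiring lives in the package's dataloader init file rather than in this snippet.

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Placeholder preprocessing: resize and crop to a square input.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])

# Any dataset laid out as <root>/<class>/<image> works with ImageFolder.
dataset = datasets.ImageFolder("path/to/your/dataset", transform=transform)
loader = DataLoader(dataset, batch_size=8, shuffle=True, num_workers=4)
```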
Inference:
- Download the checkpoints from above into the mbin folder.
- Run the following command:
python -m vitvqgan.demo_recon
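To sanity-check the reconstructions, independent of how the demo presents its output, one simple metric is pixel-space PSNR between an input image and its reconstruction. The file names below are placeholders for images you save yourself; this is not part of the package's CLI.

```python
import torch
from torchvision.io import read_image

# Placeholder file names: an input image and its reconstruction, saved as PNGs.
original = read_image("input.png").float() / 255.0
recon = read_image("recon.png").float() / 255.0

# Peak signal-to-noise ratio in dB over pixel values normalised to [0, 1].
mse = torch.mean((original - recon) ** 2)
psnr = 10 * torch.log10(1.0 / mse)
print(f"PSNR: {psnr.item():.2f} dB")
```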
Checkpoints
Acknowledgements
The repo is modified from here, with updates to the latest dependencies and changes so it can easily be run on a consumer-grade GPU for learning purposes.
Download files
Source Distributions
No source distribution files are available for this release.
Built Distribution
Hashes for vitvqgan-0.0.1.dev1-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 86ec8a289cbbc90cefbd6b6c8cc6eab7b8d41831df82f2950ae48dd77341edfc
MD5 | 3fe5b3f3fd26a72eaa1c1d0a64254931
BLAKE2b-256 | f6564c5b693749692093acc7a391cc9ae8799ecb7e363aac069effc37989a835