Code for Conditional Variational Diffusion Models
Project description
Conditional Variational Diffusion Models
Diffusion models have become popular for their ability to solve complex problems where hidden information must be estimated from observed data, and they are widely used in image generation tasks. These models rely on a key hyperparameter, the variance schedule, which strongly affects how well they learn; recent work shows that letting the model learn this schedule itself can improve both performance and efficiency. Our CVDM package implements Conditional Variational Diffusion Models (CVDM), as described in the paper, which build on this idea, and adds Zero-Mean Diffusion (ZMD), a technique that improves performance in certain imaging tasks. The goal is to make these approaches more accessible to researchers.
Where to get the data?
The datasets that we are using are available online:
- BioSR (the data we use has been transformed into .npy files)
- ImageNet from ILSVRC2012
- HCOCO (used only for model evaluation)
It is assumed that:
- For the BioSR super-resolution task, the data can be found in the directory specified as dataset_path in configs/biosr.yaml, in two files: x.npy (input) and y.npy (ground truth). A quick layout check is sketched after this list.
- For the BioSR phase task, the data can be found in the directory specified as dataset_path in configs/biosr_phase.yaml, in one file, y.npy (ground truth). The model input will be generated from the ground truth.
- For the ImageNet super-resolution task, the data can be found in the directory specified as dataset_path in configs/imagenet_sr.yaml as a collection of JPEG files. The model input will be generated from the ground truth.
- For the ImageNet phase task, the data can be found in the directory specified as dataset_path in configs/imagenet_phase.yaml as a collection of JPEG files. The model input will be generated from the ground truth.
- For the HCOCO phase evaluation task, the data can be found in the directory specified as dataset_path in configs/hcoco_phase.yaml as a collection of JPEG files. The model input will be generated from the ground truth.
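The exact array shapes depend on how the data was exported; as a rough, hypothetical sketch (the paired x.npy/y.npy layout follows the BioSR super-resolution description above, the path and shapes are assumptions), a quick sanity check could look like this:

```python
import os

import numpy as np

# Assumed to match dataset_path in configs/biosr.yaml; adjust to your setup.
dataset_path = "/data/biosr"

# x.npy holds the model input, y.npy the ground truth, as described above.
x = np.load(os.path.join(dataset_path, "x.npy"), mmap_mode="r")
y = np.load(os.path.join(dataset_path, "y.npy"), mmap_mode="r")

# Both arrays are expected to be paired sample-by-sample along the first axis.
assert x.shape[0] == y.shape[0], "x.npy and y.npy must contain the same number of samples"
print(f"{x.shape[0]} paired samples, input shape {x.shape[1:]}, target shape {y.shape[1:]}")
```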
How to prepare the environment?
We provide a Dockerfile to prepare the environment. Run the following code in the root of this repository:
docker build -t my-image .
docker run -it my-image
Inside the container, run:
eval "$(micromamba shell hook --shell bash)"
micromamba activate cvdm
If you encounter issues with the cupy installation (cupy is required only for the phase tasks), you can modify cvdm/utils/phase_utils.py to use pure numpy.
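A common way to make such a module degrade gracefully is to fall back to numpy when cupy is not importable; this is a generic sketch of that pattern, not the actual contents of cvdm/utils/phase_utils.py:

```python
# Generic cupy-with-numpy-fallback pattern (illustrative; the real module may differ).
try:
    import cupy as xp  # GPU-accelerated arrays, needed only for the phase tasks
except ImportError:
    import numpy as xp  # pure-numpy fallback when cupy cannot be installed


def normalize(arr):
    # Works with either backend, since both expose the same array API used here.
    arr = xp.asarray(arr)
    return (arr - arr.mean()) / (arr.std() + 1e-8)
```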
How to run the training code?
- Download the data or use the sample data available in the data/ directory. The sample data is a fraction of the ImageNet dataset and can be used with the configs imagenet_sr_sample.yaml or imagenet_phase_sample.yaml. You can also use your own data as long as it is in .npy format; to do so, use the task type "other" (a minimal data-preparation sketch follows this list).
- Modify the config in the configs/ directory with the path to the data you want to use and the directory for outputs. For a description of each parameter, check the documentation in the cvdm/configs/ files.
- Run the code from the root directory:
python scripts/train.py --config-path $PATH_TO_CONFIG --neptune-token $NEPTUNE_TOKEN
The --neptune-token argument is optional.
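If you bring your own data for the "other" task type, it has to be available as .npy arrays under the dataset_path set in your config. A minimal, hypothetical preparation sketch (the x.npy/y.npy file names mirror the BioSR layout described earlier; paths and shapes are placeholders):

```python
import os

import numpy as np

# Stand-ins for your own paired input/ground-truth images (shapes are placeholders).
inputs = np.random.rand(16, 256, 256, 1).astype(np.float32)
targets = np.random.rand(16, 256, 256, 1).astype(np.float32)

out_dir = "/data/my_dataset"  # should match dataset_path in the config passed to train.py
os.makedirs(out_dir, exist_ok=True)
np.save(os.path.join(out_dir, "x.npy"), inputs)
np.save(os.path.join(out_dir, "y.npy"), targets)
```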
How to run the evaluation code?
- Download the data.
- Modify the config in the configs/ directory with the path to the data you want to use and the directory for outputs.
- Run the code from the root directory:
python scripts/eval.py --config-path $PATH_TO_CONFIG --neptune-token $NEPTUNE_TOKEN
The --neptune-token argument is optional.
How to contribute?
To contribute to the software or to seek support, please open an issue or a pull request.
License
This repository is released under the MIT License (refer to the LICENSE file for details).
Download files
Source Distribution: cvdm-0.1.0.tar.gz (19.8 kB)
Built Distribution: cvdm-0.1.0-py3-none-any.whl (26.6 kB)
File details
Details for the file cvdm-0.1.0.tar.gz.
File metadata
- Download URL: cvdm-0.1.0.tar.gz
- Upload date:
- Size: 19.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.9.20
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | e02e3035546baca2b773d2139bc184f80d04afca477f64953b566af82b8673c4 |
| MD5 | 6a6edc500af81cb0d65b1ece3e200321 |
| BLAKE2b-256 | affc630660e4d8e6f099fcddd7c069ec316cfcbbb250c039d4efcc129ea7a57e |
File details
Details for the file cvdm-0.1.0-py3-none-any.whl.
File metadata
- Download URL: cvdm-0.1.0-py3-none-any.whl
- Upload date:
- Size: 26.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.9.20
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 30474e2d33bfabcb9488c2210f83c93ad05435938c264c1b02262dada1a9673c |
| MD5 | 4de763b0a8576101a6bc1b03ac1e9dd0 |
| BLAKE2b-256 | 6aee5d00cfacb86a254d78b0ec037be23139c9d75c40c666e1353cc7c2cbb4bb |