Naifu

naifu-diffusion (or naifu) is designed for training generative models with various configurations and features. The code in the main branch of this repository is under development and subject to change as new features are added.

Other branches in the repository include:

  • sgm - Uses the sgm backbone to train SDXL models.
  • main-archived - Contains the original naifu-diffusion code for training Stable Diffusion 1.x models.

Installation

To install the necessary dependencies, clone the repository and run:

git clone https://github.com/mikubill/naifu-diffusion
cd naifu-diffusion
pip install -r requirements.txt
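
If you want to keep naifu's dependencies isolated, you can create and activate a standard Python virtual environment before installing; this is ordinary Python practice, nothing naifu-specific:

python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate
pip install -r requirements.txt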

Usage

You can train an image model under any of the available configurations by running the trainer.py script with the appropriate configuration file. The config path can be passed either via --config or positionally:

python trainer.py --config config/<config_file>
python trainer.py config/<config_file>

Replace <config_file> with one of the available configuration files listed below.

Available Configurations

Choose a configuration file based on your training objective and environment.

Train SDXL (Stable Diffusion XL) model

# stabilityai/stable-diffusion-xl-base-1.0
python trainer.py config/train.yaml

Train SDXL refiner (Stable Diffusion XL refiner) model

# stabilityai/stable-diffusion-xl-refiner-1.0
python trainer.py config/train_refiner.yaml

Train original Stable Diffusion 1.4 or 1.5 model

# runwayml/stable-diffusion-v1-5
# Note: will save in diffusers format
python trainer.py config/train_sd15.yaml

Train SDXL model with diffusers backbone

# stabilityai/stable-diffusion-xl-base-1.0
# Note: will save in diffusers format
python trainer.py config/train_diffusers.yaml

Train SDXL model with LyCORIS

# Based on the work available at KohakuBlueleaf/LyCORIS
pip install lycoris_lora toml
python trainer.py config/train_lycoris.yaml

Use the fairscale strategy for sharded distributed data-parallel training

pip install fairscale
python trainer.py config/train_fairscale.yaml

Train SDXL model with Diffusion DPO
Paper: Diffusion Model Alignment Using Direct Preference Optimization (arxiv:2311.12908)

# dataset: yuvalkirstain/pickapic_v2
# Be careful tuning the resolution and dpo_betas!
# will save in diffusers format
python trainer.py config/train_dpo_hfdataset.yaml

Train Pixart-Alpha model
Paper: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis (arxiv:2310.00426)

# PixArt-alpha/PixArt-XL-2-1024-MS
python trainer.py config/train_pixart.yaml

Train SDXL-LCM model
Paper: Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference (arxiv:2310.04378)

# stabilityai/stable-diffusion-xl-base-1.0
python trainer.py config/train_lcm.yaml

Preparing Datasets

Each configuration file may have different dataset requirements. Make sure to check the specific configuration file for any dataset specifications or requirements.
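For instance, you can skim the data-related settings of a recipe before training; the key name dataset below is an assumption for illustration, so open the YAML itself to confirm the actual schema:

# 'dataset' as a key name is an assumption; check the YAML directly
grep -n -A5 "dataset" config/train.yaml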

You can use your dataset directly for training: simply point the configuration file at its location. To reduce VRAM usage during training, you can pre-encode the dataset to latents with the encode_latents.py script.

# prepare images in input_path
python encode_latents.py -i <input_path> -o <encoded_path>
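
Putting it together, a typical pre-encoding pass might look like the sketch below; the paths are hypothetical, so substitute your own and point your chosen config's dataset location at the output directory:

# hypothetical paths, shown for illustration only
python encode_latents.py -i /data/my-images -o /data/my-latents
# then train against the encoded dataset with your chosen recipe
python trainer.py config/train.yaml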
