
A universal Stable-Diffusion toolbox

Project description

HCP-Diffusion

📘 Chinese documentation

Introduction

HCP-Diffusion is a toolbox for Stable Diffusion models based on diffusers. Compared with webui and sd-scripts, it offers more flexible configuration and broader component support for training.

This toolbox supports colossal-AI, which can significantly reduce GPU memory usage.

HCP-Diffusion can unify existing training methods for text-to-image generation (e.g., Prompt-tuning, Textual Inversion, DreamArtist, Fine-tuning, DreamBooth, LoRA, ControlNet) and model structures through a single .yaml configuration file.
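
To make the single-configuration idea concrete, the Python sketch below mimics the pattern: one YAML document selects which tuning method gets built. The keys and values here are hypothetical and purely illustrative, not HCP-Diffusion's actual schema; the shipped files under cfgs/train/ are the real reference.

# Illustrative only: a toy version of the "one config file drives everything" pattern.
# The keys below are hypothetical, NOT HCP-Diffusion's real configuration schema.
import yaml

EXAMPLE_CFG = """
method: lora              # hypothetical switch: lora | fine_tuning | prompt_tuning
lora:
  rank: 8
  alpha: 4
  layers: [attn1, attn2]  # layer-wise selection
fine_tuning:
  lr: 1.0e-5
  layers: [all]
"""

def build_training_setup(cfg: dict) -> str:
    """Pick and parameterize the training method purely from the parsed config."""
    method = cfg["method"]
    if method not in cfg:
        raise ValueError(f"no settings found for method '{method}'")
    return f"training with {method}, settings={cfg[method]}"

print(build_training_setup(yaml.safe_load(EXAMPLE_CFG)))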

The toolbox also implements an upgraded version of DreamArtist based on LoRA, named DreamArtist++, for one-shot text-to-image generation. Compared to DreamArtist, DreamArtist++ is more stable, produces higher-quality and more controllable images, and trains faster.

Features

  • Layer-wise LoRA (with Conv2d)
  • Layer-wise fine-tuning
  • Layer-wise model ensemble
  • Prompt-tuning with multiple words
  • DreamArtist and DreamArtist++
  • Aspect Ratio Bucket (ARB) with automatic clustering (see the sketch after this list)
  • Prior data
  • Image attention mask
  • Word attention multiplier
  • Custom words that span multiple tokens
  • Maximum sentence length expansion
  • colossal-AI
  • xformers for unet and text-encoder
  • CLIP skip
  • Tag shuffle and dropout
  • safetensors support
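
As an illustration of what Aspect Ratio Bucket clustering means (referenced in the feature list above), here is a generic, self-contained sketch: images are grouped by aspect ratio with k-means, and each group is mapped to a bucket resolution of roughly constant pixel area. This is not HCP-Diffusion's implementation, and every name in it is invented for the example.

# Generic ARB sketch -- not HCP-Diffusion's code; all names are invented.
import numpy as np
from sklearn.cluster import KMeans

def build_buckets(sizes, n_buckets=3, target_area=512 * 512, step=64):
    """Cluster image aspect ratios and derive one (w, h) bucket per cluster."""
    ratios = np.array([w / h for w, h in sizes])
    km = KMeans(n_clusters=n_buckets, n_init=10, random_state=0)
    # Cluster in log-space so ratios r and 1/r are treated symmetrically.
    labels = km.fit_predict(np.log(ratios).reshape(-1, 1))

    buckets = []
    for r in np.exp(km.cluster_centers_.ravel()):
        # Keep the pixel count near target_area and snap both sides to
        # multiples of `step` so the resolutions suit the UNet/VAE strides.
        h = max(step, int(round(np.sqrt(target_area / r) / step)) * step)
        w = max(step, int(round(h * r / step)) * step)
        buckets.append((w, h))
    return buckets, labels

sizes = [(1920, 1080), (1280, 720), (768, 1024), (512, 512), (640, 960)]
buckets, labels = build_buckets(sizes)
for (w, h), label in zip(sizes, labels):
    print(f"{w}x{h} -> bucket {buckets[label]}")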

Install

Install with pip:

pip install hcpdiff
# Create and initialize a new project
hcpinit

Install from source:

git clone https://github.com/7eu7d7/HCP-Diffusion.git
cd HCP-Diffusion
pip install -e .
# Either modify this project in place, or create and initialize a new project
hcpinit

User guidance

Training:

# with accelerate
accelerate launch -m hcpdiff.train_ac --cfg cfgs/train/cfg_file.yaml
# with accelerate on a single GPU
accelerate launch -m hcpdiff.train_ac_single --cfg cfgs/train/cfg_file.yaml
# with colossal-AI
torchrun --nproc_per_node 1 -m hcpdiff.train_colo --cfg cfgs/train/cfg_file.yaml

Inference:

python -m hcpdiff.visualizer --cfg cfgs/infer/cfg.yaml pretrained_model=pretrained_model_path \
        prompt='positive_prompt' \
        neg_prompt='negative_prompt' \
        seed=42

Use xformers to reduce VRAM usage and accelerate training:

# use conda
conda install xformers -c xformers

# use pip
pip install "xformers>=0.0.17"

Team

This toolbox is maintained by HCP-Lab, SYSU. Contributions of additional models and features are welcome.

Citation

@article{DBLP:journals/corr/abs-2211-11337,
  author    = {Ziyi Dong and
               Pengxu Wei and
               Liang Lin},
  title     = {DreamArtist: Towards Controllable One-Shot Text-to-Image Generation
               via Positive-Negative Prompt-Tuning},
  journal   = {CoRR},
  volume    = {abs/2211.11337},
  year      = {2022},
  doi       = {10.48550/arXiv.2211.11337},
  eprinttype = {arXiv},
  eprint    = {2211.11337},
}

Download files

Source Distribution: hcpdiff-0.1.4.tar.gz (48.4 kB)

Built Distribution: hcpdiff-0.1.4-py3-none-any.whl (67.6 kB)
