StableFused
StableFused is a toy library to experiment with Stable Diffusion inspired by 🤗 diffusers and various other sources! One of the main reasons I'm working on this project is to learn more about Stable Diffusion, and generative models in general. It is my current area of research at university.
Installation
It is recommended to use a virtual environment. You can use venv or conda to create one.
python -m venv venv
source venv/bin/activate
For usage, install the package from PyPI.
pip install stablefused
For development, fork the repository, clone it and install the package in editable mode.
git clone https://github.com/<YOUR_USERNAME>/stablefused.git
cd stablefused
pip install -e ".[dev]"
Usage
Check out the examples folder for notebooks 🥰
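As a quick taste, here is a minimal text-to-image sketch. Note that the class name `TextToImageDiffusion` and the call signature below are assumptions based on the example notebooks, not a documented API; consult the notebooks for the actual interface.

```python
# Minimal text-to-image sketch. NOTE: `TextToImageDiffusion` and the
# call signature are assumptions based on the example notebooks, not a
# documented API; check the notebooks for the real interface.
from stablefused import TextToImageDiffusion

model = TextToImageDiffusion(model_id="runwayml/stable-diffusion-v1-5")

images = model(
    prompt="High quality photo of an astronaut in a galaxy, octane render, 8k",
    num_inference_steps=20,
    guidance_scale=7.5,
)
images[0].save("astronaut.png")
```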
Contributing
Contributions are welcome! Note that this project is not a serious implementation for training/inference/fine-tuning diffusion models. It is a toy library. I am working on it for fun and experimentation purposes (and because I'm too stupid to modify large codebases and understand what's going on).
As I'm not an expert in this field, I have probably made a lot of mistakes. If you find any, please open an issue or a PR. I'll be happy to learn from you!
Acknowledgements/Resources
The following sources have been very helpful to me in understanding Stable Diffusion. I highly recommend checking them out!
- 🤗 diffusers
- Karpathy's gist on latent walking
- Nateraw's stable-diffusion-videos
- 🤗 Annotated Diffusion Blog
- Keras CV
- Lillian Weng's Blogs
- Emilio Dorigatti's Blogs
- The AI Summer Diffusion Models Blog
Results
All inference for the results below was done using the Stable Diffusion v1.5 model.
Visualization of diffusion process
Refer to the notebooks for more details and enjoy the denoising process!
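If you want to capture the intermediate frames of the denoising process yourself, one way is a per-step callback that decodes the partially denoised latents. The sketch below uses the 🤗 diffusers callback API directly (StableFused currently builds on diffusers); the StableFused notebooks may wrap this differently.

```python
# Sketch: decode the latents at every denoising step into a frame.
# Uses the 🤗 diffusers `callback` API directly; the StableFused
# notebooks may expose this differently.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
frames = []

def capture(step: int, timestep: int, latents: torch.Tensor) -> None:
    # Undo the VAE scaling and decode the partially denoised latents.
    with torch.no_grad():
        image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample
    frames.append(image)

pipe("An astronaut in a galaxy, 8k", callback=capture, callback_steps=1)
# `frames` now holds one decoded image tensor per denoising step.
```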
Text to Image
These results are generated using the Text to Image notebook.
Image to Image
These results are generated using the Image to Image notebook.
Source image: The Renaissance Astronaut

Prompts for the denoising diffusion process:
- High quality and colorful photo of Robert J Oppenheimer, father of the atomic bomb, in a spacesuit, galaxy in the background, universe, octane render, realistic, 8k, bright colors
- Stylistic photorealistic photo of Margot Robbie, playing the role of astronaut, pretty, beautiful, high contrast, high quality, galaxies, intricate detail, colorful, 8k
PS
From my experimentation, the results from Image to Image Diffusion don't seem great. It might be some kind of bug in my implementation, which I'll have to look into later...
Understanding the effect of Guidance Scale
Guidance scale is a value inspired by the paper Classifier-Free Diffusion Guidance. A full explanation of how CFG works is out of scope here, but there are many online sources where you can read about it (linked below).
- Guidance: a cheat for diffusion models
- Diffusion Models, DDPMs, DDIMs and CFG
- Classifier-Free Guidance Scale
In short, guidance scale controls the amount of "guidance" used in the diffusion process: the higher the value, the more closely the diffusion process follows the prompt. A lower guidance scale lets the model be more creative and deviate slightly from the exact prompt. Beyond a certain threshold, though, the results start to get worse: blurry and noisy.
In practice, guidance scale values are usually in the range 6-15, and the default value of 7.5 is used in many inference implementations. Manipulating it, however, can lead to some very interesting results. It also only really makes sense at 1.0 or higher, which is why many implementations enforce a minimum value of 1.0.
But... what happens when we set guidance scale to 0? Or negative? Let's find out!
When you use a negative guidance scale, the model tries to generate images that are the opposite of what the prompt specifies. For example, if you prompt the model to generate an image of an astronaut with a negative guidance scale, it will try to generate an image of everything but an astronaut. This can be a fun way to generate creative and unexpected images (and sometimes NSFW or absolutely horrendous stuff if you are not using a safety-checker model, which is the case with StableFused).
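Mechanically, classifier-free guidance is just a weighted combination of two noise predictions at each denoising step: one conditioned on the prompt and one unconditional. A minimal sketch of that combination (variable names are illustrative, not StableFused internals):

```python
import torch

def guided_noise(noise_uncond: torch.Tensor,
                 noise_text: torch.Tensor,
                 guidance_scale: float) -> torch.Tensor:
    # Classifier-free guidance:
    #   scale = 1.0 -> purely text-conditioned prediction
    #   scale = 0.0 -> purely unconditional prediction (prompt ignored)
    #   scale < 0.0 -> pushed *away* from the prompt ("everything but")
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)
```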
Results
The original images are too large to display here in high quality; you can find them in my Drive. The versions shown were compressed from ~30 MB to ~6 MB so that GitHub would accept the uploads.
Effect of Guidance Scale on Different Prompts
Each image is sampled with the same prompt and seed, so that only the guidance scale varies.

- Column 1: Artistic image, very detailed cute cat, cinematic lighting effect, cute, charming, fantasy art, digital painting, photorealistic
- Column 2: A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k
- Column 3: A grand city in the year 2100, atmospheric, hyper realistic, 8k, epic composition, cinematic, octane render
- Column 4: Starry Night, painting style of Vincent van Gogh, Oil paint on canvas, Landscape with a starry night sky, dreamy, peaceful
Effect of Guidance Scale with increased number of inference steps
Columns have the number of inference steps set to 3, 6, 12, 20, and 25.

Prompt: Photorealistic illustration of a mystical alien creature, magnificent, strong, atomic, tyrannic, predator, unforgiving, full-body image
Latent Walk
Generative models, like the ones used in Stable Diffusion, learn a latent representation of the world: a low-dimensional vector space embedding. In the case of SD, this representation is learnt by training on text-image pairs and is used to generate samples given a prompt and a random noise vector. The model tries to predict and remove noise from the random noise vector while also aligning the vector to the prompt. This gives the latent space some interesting properties.
Stable Diffusion models (at least, the models used here) learn two latent representations: one of the NLP space for prompts, and one of the image space. These latent representations are continuous. If we choose two vectors in the latent space to sample from, we get images that are more or less similar depending on how far apart the chosen vectors are. This is the basis of latent walking: we can choose two vectors in the latent space and sample along the path between them, which results in a smooth transition between the two images.
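The walk itself is plain interpolation between two latent vectors. For Gaussian image latents, spherical interpolation (slerp) is the usual choice, as in Karpathy's gist linked above; the same idea works for interpolating prompt embeddings. A minimal sketch:

```python
import torch

def slerp(v0: torch.Tensor, v1: torch.Tensor, t: float,
          dot_threshold: float = 0.9995) -> torch.Tensor:
    """Spherical interpolation between two latent vectors, t in [0, 1]."""
    dot = torch.sum(v0 * v1) / (torch.norm(v0) * torch.norm(v1))
    if torch.abs(dot) > dot_threshold:
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1 - t) * v0 + t * v1
    theta = torch.acos(dot)  # angle between the two vectors
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)

# Walk between two random initial latents (SD v1.5 latent shape for 512x512
# images); denoising each interpolated latent yields one frame of the transition.
latent_a = torch.randn(1, 4, 64, 64)
latent_b = torch.randn(1, 4, 64, 64)
frames = [slerp(latent_a, latent_b, i / 9) for i in range(10)]
```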
Similar Image Generation by sampling latent space
The results below show just how information-rich the latent space of these Stable Diffusion models is.
Source image and latent walks generated from the prompt: Large futuristic mechanical robot in the foreground of a baroque-style battle scene, photorealistic, high quality, 8k
Generating Latent Walk videos
- Prompt 1: A dog chasing a cat in a thrilling backyard scene, high quality and photorealistic
- Prompt 2: A determined dog in hot pursuit, with stunning realism, octane render
- Prompt 3: A thrilling chase, dog behind the cat, octane render, exceptional realism and quality
- Prompt 4: The exciting moment of a cat outmaneuvering a chasing dog, high-quality and photorealistic detail
- Prompt 5: A clever cat escaping a determined dog and soaring into space, rendered with octane render for stunning realism
- Prompt 6: The cat's escape into the cosmos, leaving the dog behind in a scene, high quality and photorealistic style
Note that these results aren't very good. I tried different seeds, but I couldn't make a great video for this story. I did try some other prompts and got better results, but I like this story, so I'm sticking with it 🤓 You can improve the results by using better prompts and increasing the number of interpolation and inference steps.
Future
At the moment, I'm not sure if I'll continue to expand on this project, but if I do, here are some things I have in mind (in no particular order, and for documentation purposes):
- Add support for more inference techniques: explore new sampling techniques and optimize diffusion paths
- Implement and stay up to date with the latest papers in the field
- Remove 🧨 diffusers as a dependency by implementing all required components myself
- Create user-friendly web demos or GUI tools to make experimentation easier
- Add LoRA, training and fine-tuning support
- Improve the codebase, documentation and tests
- Extend support beyond Stable Diffusion to other diffusion techniques, including but not limited to audio and video
License
MIT