Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts.

stable-diffusion-videos

Try it yourself in Colab.

Example - morphing between "blueberry spaghetti" and "strawberry spaghetti"

https://user-images.githubusercontent.com/32437151/188721341-6f28abf9-699b-46b0-a72e-fa2a624ba0bb.mp4

Installation

pip install stable_diffusion_videos
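
To sanity-check the install, import the pipeline class used throughout the examples below (a quick smoke test, nothing more):

python -c "from stable_diffusion_videos import StableDiffusionWalkPipeline"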

Usage

Check out the examples folder for example scripts 👀

Making Videos

Note: On Apple M1 (the MPS backend), pass torch_dtype=torch.float32 instead, since torch.float16 is not supported on MPS.
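
If you want one script that runs anywhere, here is a minimal device/dtype selection sketch (the CUDA/MPS/CPU fallback logic is our suggestion, not part of the library):

import torch

# Prefer CUDA with float16; the MPS backend lacks float16 support for this
# pipeline, so fall back to float32 there and on CPU.
if torch.cuda.is_available():
    device, dtype = "cuda", torch.float16
elif torch.backends.mps.is_available():
    device, dtype = "mps", torch.float32
else:
    device, dtype = "cpu", torch.float32

# Then load with:
# StableDiffusionWalkPipeline.from_pretrained(..., torch_dtype=dtype).to(device)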

from stable_diffusion_videos import StableDiffusionWalkPipeline
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

video_path = pipeline.walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 1337],
    num_interpolation_steps=3,  # Frames to interpolate between each pair of prompts
    height=512,  # use multiples of 64 if > 512. Multiples of 8 if < 512.
    width=512,   # use multiples of 64 if > 512. Multiples of 8 if < 512.
    output_dir='dreams',        # Where images/videos will be saved
    name='animals_test',        # Subdirectory of output_dir where images/videos will be saved
    guidance_scale=8.5,         # Higher adheres to prompt more, lower lets model take the wheel
    num_inference_steps=50,     # Number of diffusion steps per image generated; 50 is a good default
)
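
The same call scales past two prompts: supply one seed per prompt and, as in the music-video example below, pass num_interpolation_steps as a list with one entry per transition. A sketch with made-up prompts and seeds:

from stable_diffusion_videos import StableDiffusionWalkPipeline
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# One seed per prompt; one interpolation-step count per transition,
# i.e. len(num_interpolation_steps) == len(prompts) - 1.
video_path = pipeline.walk(
    prompts=['a cat', 'a dog', 'a fox'],
    seeds=[42, 1337, 2022],
    num_interpolation_steps=[30, 30],  # frames for cat->dog, then dog->fox
    output_dir='dreams',
    name='three_animals',
)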

Making Music Videos

New! Music can be added to the video by providing a path to an audio file. The audio will inform the rate of interpolation so the videos move to the beat 🎶

from stable_diffusion_videos import StableDiffusionWalkPipeline
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Seconds in the song.
audio_offsets = [146, 148]  # [Start, end]
fps = 30  # Use lower values for testing (5 or 10), higher values for better quality (30 or 60)

# Convert seconds to frames
num_interpolation_steps = [(b-a) * fps for a, b in zip(audio_offsets, audio_offsets[1:])]
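# e.g. audio_offsets=[146, 148] at fps=30 gives num_interpolation_steps == [60]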

video_path = pipeline.walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 1337],
    num_interpolation_steps=num_interpolation_steps,
    audio_filepath='audio.mp3',
    audio_start_sec=audio_offsets[0],
    fps=fps,
    height=512,  # use multiples of 64 if > 512. Multiples of 8 if < 512.
    width=512,   # use multiples of 64 if > 512. Multiples of 8 if < 512.
    output_dir='dreams',        # Where images/videos will be saved
    guidance_scale=7.5,         # Higher adheres to prompt more, lower lets model take the wheel
    num_inference_steps=50,     # Number of diffusion steps per image generated; 50 is a good default
)

Using the UI

from stable_diffusion_videos import StableDiffusionWalkPipeline, Interface
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

interface = Interface(pipeline)
interface.launch()
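
The Interface wraps a Gradio app. Assuming launch() forwards its keyword arguments to Gradio (an assumption worth checking against your installed version), you can request a temporary public URL:

interface.launch(share=True)  # assumption: kwargs pass through to Gradio's launch()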

Credits

This work builds on a script shared by @karpathy. The script was first adapted into a gist, which was then updated and expanded into this repo.

Contributing

You can file issues and feature requests here.

Enjoy 🤗

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

stable_diffusion_videos-0.9.1.tar.gz (42.2 kB)

Built Distribution

stable_diffusion_videos-0.9.1-py3-none-any.whl (42.0 kB)

File details

Details for the file stable_diffusion_videos-0.9.1.tar.gz.

File hashes

Algorithm    Hash digest
SHA256       164c0d9268c2f823b145db6b25f621f3cb6bc1f623d4d5489701b810e74eaff6
MD5          f9067d228f2d52c9ce3ac303dbbe186a
BLAKE2b-256  a89151943e185fa21220888294213fe327ec99ae3b68405c0aa65779954baf71

File details

Details for the file stable_diffusion_videos-0.9.1-py3-none-any.whl.

File hashes

Algorithm    Hash digest
SHA256       299ebc6ed2d5097f15c85d537a009c1f8e0e381bf8432f55fe1f7a7378ce8b74
MD5          bf1756472297d72ea329e322c8e6dd94
BLAKE2b-256  48d1661770a1508e320c48129f5c78987fc8e6ef05f278c42b1e76a1320b3d79
