
FastVideo

| Documentation | Quick Start | Weekly Dev Meeting | 🟣💬 Slack | 🟣💬 WeChat |

FastVideo is a unified post-training and inference framework for accelerated video generation.

Key Features

FastVideo has the following features:

  • End-to-end post-training support for bidirectional and autoregressive models:
    • Full finetuning and LoRA finetuning for state-of-the-art open video DiTs
    • Data preprocessing pipeline for video, image, and text data
    • Stepwise distillation with Distribution Matching Distillation (DMD2)
    • Sparse attention with Video Sparse Attention
    • Sparse distillation to achieve >50x denoising speedup
    • Scalable training with FSDP2, sequence parallelism, and selective activation checkpointing
    • Causal distillation through Self-Forcing
    • See this page for a full list of supported models and recipes.
  • State-of-the-art performance optimizations for inference
    • Sequence Parallelism for distributed inference
    • Multiple state-of-the-art attention backends
    • User-friendly CLI and Python API
    • See this page for a full list of supported optimizations.
  • Diverse hardware and OS support
    • Supports H100, A100, and 4090 GPUs
    • Supports Linux, Windows, and macOS
    • See this page for a full list of supported hardware and operating systems.
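To give a flavor of the block-sparse attention idea behind Video Sparse Attention, here is a toy sketch of a block-level sparsity pattern. This is an illustration of block sparsity in general, not FastVideo's actual kernel; the block count and window size are made-up numbers.

```python
import numpy as np

def block_sparse_mask(num_blocks: int, window: int) -> np.ndarray:
    """Toy block-level attention mask: each query block attends only to
    key blocks within `window` of itself (plus itself)."""
    idx = np.arange(num_blocks)
    # True where |query_block - key_block| <= window, False elsewhere
    return np.abs(idx[:, None] - idx[None, :]) <= window

mask = block_sparse_mask(num_blocks=8, window=1)
kept = mask.sum() / mask.size
print(f"fraction of attention blocks computed: {kept:.2f}")  # → 0.34
```

Only the blocks marked True are computed, which is where the FLOP savings of sparse attention come from; a trainable scheme like VSA learns which blocks to keep rather than using a fixed window.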

Getting Started

We recommend using an environment manager such as Conda to create a clean environment:

# Create and activate a new conda environment
conda create -n fastvideo python=3.12
conda activate fastvideo

# Install FastVideo
pip install fastvideo

Please see our docs for more detailed installation instructions.

Sparse Distillation

For our sparse distillation techniques, please see our distillation docs and check out our blog.

See below for recipes and datasets:

| Model | Sparse Distillation | Dataset |
|---|---|---|
| FastWan2.1-T2V-1.3B | Recipe | FastVideo Synthetic Wan2.1 480P |
| FastWan2.1-T2V-14B-Preview | Coming soon! | FastVideo Synthetic Wan2.1 720P |
| FastWan2.2-TI2V-5B | Recipe | FastVideo Synthetic Wan2.2 720P |
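As a rough back-of-the-envelope illustration of where a >50x denoising speedup can come from, the sketch below combines fewer denoising steps with cheaper attention per step. All numbers here are assumptions chosen for the arithmetic, not measured FastVideo results.

```python
# Illustrative arithmetic only; step counts and sparsity ratio are assumptions.
baseline_steps = 50      # a typical multi-step diffusion sampler
distilled_steps = 3      # a few-step student after stepwise (DMD2-style) distillation
attention_kept = 0.25    # fraction of attention blocks computed under sparsity

step_speedup = baseline_steps / distilled_steps      # ~16.7x from fewer steps
per_step_speedup = 1 / attention_kept                # upper bound if attention dominates
combined = step_speedup * per_step_speedup           # ~66.7x combined upper bound

print(f"{step_speedup:.1f}x from fewer steps")
print(f"up to {combined:.1f}x combined")
```

The combined figure is an upper bound since attention is not the only cost in a denoising step, but it shows how sparse distillation stacks the two savings multiplicatively.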

Inference

Generating Your First Video

Here's a minimal example that generates a video with the default settings. Make sure the VSA kernels are installed. Create a file called example.py with the following code:

import os
from fastvideo import VideoGenerator

def main():
    os.environ["FASTVIDEO_ATTENTION_BACKEND"] = "VIDEO_SPARSE_ATTN"

    # Create a video generator with a pre-trained model
    generator = VideoGenerator.from_pretrained(
        "FastVideo/FastWan2.1-T2V-1.3B-Diffusers",
        num_gpus=1,  # Adjust based on your hardware
    )

    # Define a prompt for your video
    prompt = "A curious raccoon peers through a vibrant field of yellow sunflowers, its eyes wide with interest."

    # Generate the video
    video = generator.generate_video(
        prompt,
        return_frames=True,  # Also return frames from this call (defaults to False)
        output_path="my_videos/",  # Controls where videos are saved
        save_video=True
    )

if __name__ == '__main__':
    main()

Run the script with:

python example.py

For a more detailed guide, please see our inference quick start.

Other docs:

Distillation and Finetuning

Awesome work using FastVideo or our research projects

  • SGLang: SGLang's diffusion inference functionality is based on a fork of FastVideo (Sept. 24, 2025).

  • DanceGRPO: A unified framework adapting Group Relative Policy Optimization (GRPO) to visual generation paradigms. Code based on FastVideo.

  • SRPO: A method to directly align the full diffusion trajectory with fine-grained human preference. Code based on FastVideo.

  • DCM: A dual-expert consistency model for efficient, high-quality video generation. Code based on FastVideo.

  • Hunyuan Video 1.5: A leading lightweight video generation model whose proposed SSTA builds on Sliding Tile Attention.

  • Kandinsky-5.0: A family of diffusion models for video and image generation whose NABLA attention includes a Sliding Tile Attention branch.

  • LongCat Video: A foundational 13.6B-parameter video generation model using block-sparse attention similar to Video Sparse Attention.

🤝 Contributing

We welcome all contributions. Please check out our contributing guide and see the development roadmap for details.

Acknowledgement

We learned from and reused code from a number of open-source projects.

We thank MBZUAI, Anyscale, and GMI Cloud for their support throughout this project.

Citation

If you find FastVideo useful, please consider citing our work:

@software{fastvideo2024,
  title        = {FastVideo: A Unified Framework for Accelerated Video Generation},
  author       = {The FastVideo Team},
  url          = {https://github.com/hao-ai-lab/FastVideo},
  month        = apr,
  year         = {2024},
}

@article{zhang2025vsa,
  title={VSA: Faster Video Diffusion with Trainable Sparse Attention},
  author={Zhang, Peiyuan and Chen, Yongqi and Huang, Haofeng and Lin, Will and Liu, Zhengzhong and Stoica, Ion and Xing, Eric and Zhang, Hao},
  journal={arXiv preprint arXiv:2505.13389},
  year={2025}
}

@article{zhang2025fast,
  title={Fast Video Generation with Sliding Tile Attention},
  author={Zhang, Peiyuan and Chen, Yongqi and Su, Runlong and Ding, Hangliang and Stoica, Ion and Liu, Zhengzhong and Zhang, Hao},
  journal={arXiv preprint arXiv:2502.04507},
  year={2025}
}

Download files

Download the file for your platform.

Source Distribution

fastvideo-0.1.7.tar.gz (723.5 kB)

Uploaded Source

Built Distribution

fastvideo-0.1.7-py3-none-any.whl (957.8 kB)

Uploaded Python 3

File details

Details for the file fastvideo-0.1.7.tar.gz.

File metadata

  • Download URL: fastvideo-0.1.7.tar.gz
  • Upload date:
  • Size: 723.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for fastvideo-0.1.7.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | defb0f8e1c2e41153b223f19ddeb94aa112136f111848694468165ddcfeba8d0 |
| MD5 | c28ec7b88a32e6c37cfcfbc77db48bb5 |
| BLAKE2b-256 | 5b9ac081c600c0d2e49c40a9b9e5b80ee2477e4f398e8a0a74a7f513aa55fbc9 |

See more details on using hashes here.
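The published digests can be checked locally before installing. The sketch below uses Python's standard hashlib to hash a downloaded file in chunks; the filename and expected SHA256 are taken from the listing above.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large archives don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "defb0f8e1c2e41153b223f19ddeb94aa112136f111848694468165ddcfeba8d0"
# After downloading the sdist, compare the computed digest to the published one:
# assert sha256_of_file("fastvideo-0.1.7.tar.gz") == expected, "hash mismatch"
```

Equivalently, pip can enforce this automatically with `--require-hashes` in a requirements file.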

Provenance

The following attestation bundles were made for fastvideo-0.1.7.tar.gz:

Publisher: fastvideo-publish.yml on hao-ai-lab/FastVideo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file fastvideo-0.1.7-py3-none-any.whl.

File metadata

  • Download URL: fastvideo-0.1.7-py3-none-any.whl
  • Upload date:
  • Size: 957.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for fastvideo-0.1.7-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | ce0fa1060c738bb89eeca91abb8b90ad5cdab71461bf8c1bd305244bd4f5d7fc |
| MD5 | 72430c30060330163178692a8ee1973a |
| BLAKE2b-256 | fbbefdf5993ffce15deb78de7608d1ba373a7f7856d8329b4138285497edf452 |

See more details on using hashes here.

Provenance

The following attestation bundles were made for fastvideo-0.1.7-py3-none-any.whl:

Publisher: fastvideo-publish.yml on hao-ai-lab/FastVideo

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
