Hanzo Live

Real-time AI video generation and streaming tool with WebRTC support

Discord

Hanzo Live is a tool for running and customizing real-time, interactive generative AI pipelines and models.

🚧 Here be dragons! This project is currently in alpha. 🚧

Features

  • Autoregressive video diffusion models
  • WebRTC real-time streaming
  • Low latency async video processing pipelines
  • Interactive UI with text prompting, model parameter controls, and video/camera/text input modes

...and more to come!

System Requirements

Hanzo Live currently supports the following operating systems:

  • Linux
  • Windows
  • macOS (Apple Silicon with MLX support)

GPU Requirements

NVIDIA GPUs (Linux/Windows):

  • Requires an NVIDIA GPU with >= 24 GB VRAM
  • We recommend a driver that supports CUDA >= 12.8
  • RTX 3090/4090/5090 recommended (newer generations will support higher FPS throughput and lower latency)
  • If you do not have access to a GPU with these specs, we recommend installing on RunPod
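If you're unsure whether your machine qualifies, you can query the driver for the GPU name and total VRAM. This is a minimal sketch, assuming a POSIX shell; it falls back gracefully when no NVIDIA driver is installed.

```shell
# Report the GPU name and total VRAM as seen by the NVIDIA driver.
# --query-gpu and --format are standard nvidia-smi options.
if command -v nvidia-smi >/dev/null 2>&1; then
  gpu_info=$(nvidia-smi --query-gpu=name,memory.total --format=csv,noheader)
else
  gpu_info="nvidia-smi not found: no NVIDIA driver on this machine"
fi
echo "$gpu_info"
```

A 3090/4090/5090 will report 24 GB or more of `memory.total` here.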

Apple Silicon (macOS):

  • Supported on M1/M2/M3/M4 Macs with unified memory
  • Automatically uses MLX (Apple's machine learning framework) with Metal backend
  • No special flags needed; Apple Silicon acceleration is auto-detected

Install

Manual Installation

Install uv, which is needed to run the server, and Node.js, which is needed to build the frontend.
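Before building, you can sanity-check that both tools are on your PATH. This is a minimal sketch, assuming a POSIX shell; it prints each tool's version, or flags it as missing.

```shell
# Print the version of each build prerequisite, or flag it as missing.
missing=0
for tool in uv node; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s: %s\n' "$tool" "$("$tool" --version 2>/dev/null | head -n1)"
  else
    printf '%s: not found\n' "$tool" >&2
    missing=$((missing + 1))
  fi
done
```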

Clone

git clone git@github.com:hanzoai/live.git
cd live

Build

This builds the frontend assets, which are then served by the Hanzo Live server.

uv run build

Run

[!IMPORTANT] If you are running the server in a cloud environment, make sure to read the Firewalls section.

This will start the server; on first run it will also download the required model weights. By default, model weights are stored in ~/.hanzo-live/models.

uv run hanzo-live

The application will automatically detect your hardware:

  • NVIDIA GPU (Linux/Windows) → Uses CUDA acceleration
  • Apple Silicon (macOS) → Uses MLX/Metal acceleration
  • CPU fallback → Use --cpu flag for testing without GPU

After the server starts up, the frontend will be available at http://localhost:8000.
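To confirm the frontend is actually being served, you can probe the port from another terminal. This is a minimal sketch, assuming curl is installed and the server from the previous step is still running; it prints 000 when nothing is listening.

```shell
# Probe the Hanzo Live frontend: %{http_code} is 200 when it is up,
# and 000 when nothing is listening on the port.
url="http://localhost:8000"
if command -v curl >/dev/null 2>&1; then
  status=$(curl -s -o /dev/null -w '%{http_code}' "$url") || true
else
  status="000"  # treat a missing curl the same as an unreachable server
fi
echo "HTTP status for $url: $status"
```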

RunPod

Use our RunPod template to quickly set up Hanzo Live in the cloud. This is the easiest way to get started if you don't have a compatible local GPU.

[!IMPORTANT] Follow the instructions in Firewalls to get a HuggingFace access token.

Deployment Steps:

  1. Click the RunPod template link: Template

  2. Select your GPU: Choose a GPU that meets the system requirements.

  3. Configure environment variables:

    • Click "Edit Template"
    • Add an environment variable:
      • Set name to HF_TOKEN
      • Set value to your HuggingFace access token
    • Click "Set Overrides"
  4. Deploy: Click "Deploy On-Demand"

  5. Access the app: Wait for deployment to complete, then open the app at port 8000

The template will automatically download model weights and configure everything needed.

Firewalls

If you run Hanzo Live in a cloud environment with restrictive firewall settings (e.g., RunPod), you can use TURN servers to establish a connection between your browser and the streaming server.

The easiest way to enable this feature is to create a HuggingFace account and a read access token. You can then set an environment variable before starting Hanzo Live:

# You should set this to your HuggingFace access token
export HF_TOKEN=your_token_here

When you start Hanzo Live, it will automatically use Cloudflare's TURN servers, and you'll get 10 GB of free streaming per month:

uv run hanzo-live

Contributing

Read the contribution guide.

License

The alpha version of this project is licensed under CC BY-NC-SA 4.0.

You may use, modify, and share the code for non-commercial purposes only, provided that proper attribution is given.

We will consider re-licensing future versions under a more permissive license if/when non-commercial dependencies are refactored or replaced.


Copyright © 2025 Hanzo AI Inc. All rights reserved.
