
Narractive

A modular Python framework for automated video production — from narration to final cut.

Narractive orchestrates the full pipeline: UI interaction (PyAutoGUI), screen recording (OBS or headless), text-to-speech narration, Mermaid diagram generation, subtitle generation, and FFmpeg assembly. Script your sequences, define narration cues, and let the framework produce polished demo videos hands-free.

Features

  • Dual recording backends: OBS WebSocket (desktop) or headless frame capture (Docker/Xvfb)
  • Multi-engine TTS narration: edge-tts (free), ElevenLabs (premium), F5-TTS (voice cloning), XTTS v2 (multilingual cloning)
  • Timeline-synchronized sequences: Narration cues paired with UI actions
  • Mermaid diagram slides: HTML + PNG via Playwright, mmdc, or mermaid.ink API (zero-dep)
  • SRT subtitle generation: WPM-based timing from narration text, multilingual defaults
  • Multilingual diagram labels: i18n base class with automatic language fallback
  • FFmpeg post-production: Quality presets (draft/final), subtitle burn, intro/outro from images, duration matching
  • Interactive calibration: Record UI element positions for pixel-perfect automation
  • Docker support: Reproducible headless production in CI/CD
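
To make the WPM-based subtitle timing concrete, here is a rough sketch of the idea. This is illustrative only; the function names and the 150 words-per-minute default are assumptions, not the framework's actual subtitles.py implementation:

```python
from datetime import timedelta


def srt_timestamp(seconds: float) -> str:
    """Format a time offset as an SRT timestamp (HH:MM:SS,mmm)."""
    total_ms = round(timedelta(seconds=seconds).total_seconds() * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def estimate_duration(text: str, wpm: int = 150) -> float:
    """Estimate speaking time for a narration line at a given WPM."""
    return max(1.0, len(text.split()) / wpm * 60.0)


def build_srt(lines: list[str], wpm: int = 150) -> str:
    """Assemble numbered SRT entries, one per narration line."""
    entries, t = [], 0.0
    for i, line in enumerate(lines, start=1):
        dur = estimate_duration(line, wpm)
        entries.append(
            f"{i}\n{srt_timestamp(t)} --> {srt_timestamp(t + dur)}\n{line}\n"
        )
        t += dur
    return "\n".join(entries)
```

Four words at 150 WPM come out to 1.6 s, so a line like "Welcome to the demo." yields the cue 00:00:00,000 --> 00:00:01,600.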

Quick Start

# Install from PyPI
pip install narractive

# Or install from source (from inside a cloned checkout)
pip install -e .

# Copy and configure
cp config.template.yaml config.yaml

# Calibrate UI positions (interactive)
narractive --calibrate --config config.yaml

# Generate subtitles from narrations (multilingual)
narractive --subtitles --narrations-dir narrations/ --config config.yaml

# Generate subtitles (single language)
narractive --subtitles --lang fr --narrations-dir narrations/

# Generate diagrams
narractive --diagrams --diagrams-module my_project.diagrams.mermaid_definitions

# Record all sequences
narractive --all --sequences-package my_project.sequences --config config.yaml

# Assemble final video (fast preview)
narractive --assemble --quality draft --project-name "My Project"

# Assemble final video (publication quality)
narractive --assemble --quality final --project-name "My Project"

# Or headless (Docker)
docker compose run --rm video --all --sequences-package my_project.sequences

Architecture

narractive/
├── video_automation/              # Framework (pip-installable)
│   ├── core/                      # Generic modules
│   │   ├── app_automator.py      # PyAutoGUI + window control
│   │   ├── obs_controller.py     # OBS WebSocket 5.x
│   │   ├── frame_capturer.py     # Headless Xvfb capture
│   │   ├── narrator.py           # TTS (edge-tts/ElevenLabs/F5-TTS/XTTS v2)
│   │   ├── subtitles.py          # SRT generation from narration text
│   │   ├── timeline.py           # Narration-synchronized cues
│   │   ├── diagram_generator.py  # Mermaid → HTML/PNG (Playwright/mmdc/API)
│   │   └── video_assembler.py    # FFmpeg post-production + quality presets
│   ├── sequences/
│   │   └── base.py               # VideoSequence + TimelineSequence
│   ├── bridges/
│   │   ├── f5_tts_bridge.py      # F5-TTS subprocess bridge
│   │   └── xtts_bridge.py        # XTTS v2 (Coqui TTS) subprocess bridge
│   ├── diagrams/
│   │   ├── i18n.py               # Multilingual diagram labels
│   │   └── template.html         # Mermaid HTML template
│   ├── scripts/
│   │   ├── calibrate.py          # Interactive UI calibration
│   │   └── setup_obs.py          # OBS auto-configuration
│   └── cli.py                    # Click-based CLI
│
├── examples/
│   └── filtermate/               # Example project (QGIS plugin demo)
│
├── config.template.yaml          # Configuration template
├── Dockerfile                    # Headless Docker image
├── docker-compose.yml
└── pyproject.toml

Creating Sequences for Your App

1. Simple sequence (manual timing)

from video_automation.sequences.base import VideoSequence

class MyIntro(VideoSequence):
    name = "Introduction"
    sequence_id = "seq00"
    duration_estimate = 30.0
    obs_scene = "Main"

    def execute(self, obs, app, config):
        app.focus_app()
        app.click_at("my_button")
        app.wait(2.0)
        app.scroll_down(3)

2. Timeline sequence (narration-synchronized)

from video_automation.sequences.base import TimelineSequence
from video_automation.core.timeline import NarrationCue

class MyDemo(TimelineSequence):
    name = "Live Demo"
    sequence_id = "seq01"
    duration_estimate = 60.0

    def build_timeline(self, obs, app, config):
        return [
            NarrationCue(
                text="Welcome to the demo.",
                actions=lambda: app.wait(1.0),
                sync="during",
            ),
            NarrationCue(
                text="Let's open the settings.",
                actions=lambda: app.click_at("settings_button"),
                sync="after",
            ),
        ]

3. Multilingual diagram labels

from video_automation.diagrams.i18n import DiagramLabels

labels = DiagramLabels(
    labels={
        "server": {"fr": "Serveur", "en": "Server", "pt": "Servidor"},
        "client": {"fr": "Client", "en": "Client", "pt": "Cliente"},
    },
    titles={
        "architecture": {"fr": "Architecture", "en": "Architecture"},
    },
    default_lang="fr",
)

name = labels.l("server", "en")  # "Server"

4. Register sequences

Create my_project/sequences/__init__.py:

from my_project.sequences.seq00_intro import MyIntro
from my_project.sequences.seq01_demo import MyDemo

SEQUENCES = [MyIntro, MyDemo]

Then run:

narractive --list --sequences-package my_project.sequences
narractive --all --sequences-package my_project.sequences

Configuration

See config.template.yaml for all available options. Key sections:

Section    Purpose
obs        OBS WebSocket connection, scenes, output directory
app        Window title, panel name, calibrated UI positions
timing     Click/type/scroll delays, transition pauses
diagrams   Mermaid rendering (resolution, theme, colors)
narration  TTS engine, voice, speed, F5-TTS/XTTS options
subtitles  SRT generation (enabled, max chars, max lines)
capture    Headless frame capture (FPS, resolution, display)
output     Final video encoding (resolution, fps, codec, quality preset)

TTS Engines

Engine       Cost   Quality     Multilingual   Setup
edge-tts     Free   Good        Yes            pip install edge-tts (included)
ElevenLabs   Paid   Excellent   Yes            pip install elevenlabs + API key
F5-TTS       Free   Excellent   No             Conda env + GPU recommended
XTTS v2      Free   Excellent   Yes            pip install TTS + GPU recommended
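
A narrator supporting several engines typically dispatches on the configured engine name. The sketch below fills in only the edge-tts branch (the Communicate call mirrors edge-tts's documented API; the factory itself and the voice name are assumptions, not narrator.py's actual code):

```python
def make_tts(engine: str):
    """Return a synth(text, out_path) callable for the chosen engine.

    Imports happen lazily inside the returned function so optional
    engines are only required when actually selected.
    """
    if engine == "edge-tts":
        def synth(text: str, out_path: str) -> None:
            import asyncio
            import edge_tts
            # edge-tts is async; drive the synthesis with a one-shot loop.
            asyncio.run(
                edge_tts.Communicate(text, "en-US-AriaNeural").save(out_path)
            )
        return synth
    if engine in ("elevenlabs", "f5-tts", "xtts-v2"):
        raise NotImplementedError(f"sketch covers only edge-tts, not {engine}")
    raise ValueError(f"unknown TTS engine: {engine!r}")
```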

Requirements

  • Python 3.10+
  • FFmpeg (for video assembly)
  • OBS Studio (desktop mode) or Docker (headless mode)
  • Your target application installed and running

License

MIT

Download files

Download the file for your platform.

Source Distribution

narractive-2.1.0.tar.gz (73.1 kB)

Built Distribution

narractive-2.1.0-py3-none-any.whl (78.8 kB)


File details

Details for the file narractive-2.1.0.tar.gz.

File metadata

  • Download URL: narractive-2.1.0.tar.gz
  • Size: 73.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for narractive-2.1.0.tar.gz
Algorithm     Hash digest
SHA256        41ffec2c0fc4860065e975bcb33c885262f3c14b636f767e46943087f05dd535
MD5           b02b30b2b0930bb185aba3f4589d2164
BLAKE2b-256   23162c5e898e3b1b2be0b319c0d1d6c557d103c4a0bbaed110b5d3f743a67f01


Provenance

The following attestation bundles were made for narractive-2.1.0.tar.gz:

Publisher: publish.yml on imagodata/narractive


File details

Details for the file narractive-2.1.0-py3-none-any.whl.

File metadata

  • Download URL: narractive-2.1.0-py3-none-any.whl
  • Size: 78.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for narractive-2.1.0-py3-none-any.whl
Algorithm     Hash digest
SHA256        e0035f1042de6c348e0c4c3ee2515a04a2e3a247a943b27350d4ba6972d2a919
MD5           ee0de02c3b6f79afdb66af822420da48
BLAKE2b-256   85d42807604cb5ca0c67f9aa490a20712f67fc99125d86c28b479baf5ded5827


Provenance

The following attestation bundles were made for narractive-2.1.0-py3-none-any.whl:

Publisher: publish.yml on imagodata/narractive

