# Narractive

A modular Python framework for automated video production, from narration to final cut.
Narractive orchestrates the full pipeline: UI interaction (PyAutoGUI), screen recording (OBS or headless), text-to-speech narration, Mermaid diagram generation, subtitle generation, and FFmpeg assembly. Script your sequences, define narration cues, and let the framework produce polished demo videos hands-free.
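The final FFmpeg assembly step can be pictured as building a command line from a quality preset. The preset values and helper function below are a minimal sketch of that idea, not Narractive's actual API:

```python
# Illustrative sketch: building an FFmpeg assembly command from a
# quality preset. The preset values and function name are hypothetical,
# not Narractive's actual video_assembler API.

PRESETS = {
    "draft": {"crf": "28", "preset": "ultrafast"},  # fast preview
    "final": {"crf": "18", "preset": "slow"},       # publication quality
}

def build_ffmpeg_cmd(inputs: list[str], output: str, quality: str = "draft") -> list[str]:
    """Concatenate segment files and encode with the chosen preset."""
    p = PRESETS[quality]
    cmd = ["ffmpeg", "-y"]
    for path in inputs:
        cmd += ["-i", path]
    # Concatenate every input's video and audio stream, then re-encode.
    n = len(inputs)
    filter_expr = "".join(f"[{i}:v][{i}:a]" for i in range(n)) + f"concat=n={n}:v=1:a=1[v][a]"
    cmd += ["-filter_complex", filter_expr, "-map", "[v]", "-map", "[a]",
            "-c:v", "libx264", "-crf", p["crf"], "-preset", p["preset"], output]
    return cmd

cmd = build_ffmpeg_cmd(["seq00.mp4", "seq01.mp4"], "demo.mp4", quality="final")
```

A lower CRF means higher quality and larger files, which is why the draft preset trades quality for encoding speed.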
## Features
- Dual recording backends: OBS WebSocket (desktop) or headless frame capture (Docker/Xvfb)
- Multi-engine TTS narration: edge-tts (free), ElevenLabs (premium), F5-TTS (voice cloning), XTTS v2 (multilingual cloning)
- Timeline-synchronized sequences: Narration cues paired with UI actions
- Mermaid diagram slides: HTML + PNG via Playwright, mmdc, or mermaid.ink API (zero-dep)
- SRT subtitle generation: WPM-based timing from narration text, multilingual defaults
- Multilingual diagram labels: i18n base class with automatic language fallback
- FFmpeg post-production: Quality presets (draft/final), subtitle burn, intro/outro from images, duration matching
- Interactive calibration: Record UI element positions for pixel-perfect automation
- Docker support: Reproducible headless production in CI/CD
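The WPM-based subtitle timing mentioned above can be sketched as follows: estimate each cue's duration from its word count and a words-per-minute rate, then emit SRT timestamps. This is an illustrative reimplementation, not Narractive's actual subtitles module:

```python
# Illustrative sketch of WPM-based SRT timing; not Narractive's
# actual subtitles module.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def narration_to_srt(lines: list[str], wpm: float = 150.0) -> str:
    """Assign each narration line a duration proportional to its word count."""
    entries, t = [], 0.0
    for i, text in enumerate(lines, start=1):
        duration = len(text.split()) / wpm * 60.0
        entries.append(f"{i}\n{srt_timestamp(t)} --> {srt_timestamp(t + duration)}\n{text}\n")
        t += duration
    return "\n".join(entries)

srt = narration_to_srt(["Welcome to the demo.", "Let's open the settings."])
```

At 150 WPM, a four-word cue lasts 1.6 seconds, so the first entry here runs from 00:00:00,000 to 00:00:01,600.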
## Quick Start

```bash
# Install from PyPI
pip install narractive

# Or install from source
pip install -e .

# Copy and configure
cp config.template.yaml config.yaml

# Calibrate UI positions (interactive)
narractive --calibrate --config config.yaml

# Generate subtitles from narrations (multilingual)
narractive --subtitles --narrations-dir narrations/ --config config.yaml

# Generate subtitles (single language)
narractive --subtitles --lang fr --narrations-dir narrations/

# Generate diagrams
narractive --diagrams --diagrams-module my_project.diagrams.mermaid_definitions

# Record all sequences
narractive --all --sequences-package my_project.sequences --config config.yaml

# Assemble final video (fast preview)
narractive --assemble --quality draft --project-name "My Project"

# Assemble final video (publication quality)
narractive --assemble --quality final --project-name "My Project"

# Or headless (Docker)
docker compose run --rm video --all --sequences-package my_project.sequences
```
## Architecture

```
narractive/
├── narractive/                  # Framework (pip-installable)
│   ├── core/                    # Generic modules
│   │   ├── app_automator.py     # PyAutoGUI + window control
│   │   ├── obs_controller.py    # OBS WebSocket 5.x
│   │   ├── frame_capturer.py    # Headless Xvfb capture
│   │   ├── narrator.py          # TTS (edge-tts/ElevenLabs/F5-TTS/XTTS v2)
│   │   ├── subtitles.py         # SRT generation from narration text
│   │   ├── timeline.py          # Narration-synchronized cues
│   │   ├── diagram_generator.py # Mermaid → HTML/PNG (Playwright/mmdc/API)
│   │   └── video_assembler.py   # FFmpeg post-production + quality presets
│   ├── sequences/
│   │   └── base.py              # VideoSequence + TimelineSequence
│   ├── bridges/
│   │   ├── f5_tts_bridge.py     # F5-TTS subprocess bridge
│   │   └── xtts_bridge.py       # XTTS v2 (Coqui TTS) subprocess bridge
│   ├── diagrams/
│   │   ├── i18n.py              # Multilingual diagram labels
│   │   └── template.html        # Mermaid HTML template
│   ├── scripts/
│   │   ├── calibrate.py         # Interactive UI calibration
│   │   └── setup_obs.py         # OBS auto-configuration
│   └── cli.py                   # Click-based CLI
│
├── examples/
│   └── filtermate/              # Example project (QGIS plugin demo)
│
├── config.template.yaml         # Configuration template
├── Dockerfile                   # Headless Docker image
├── docker-compose.yml
└── pyproject.toml
```
## Creating Sequences for Your App

### 1. Simple sequence (manual timing)

```python
from narractive.sequences.base import VideoSequence


class MyIntro(VideoSequence):
    name = "Introduction"
    sequence_id = "seq00"
    duration_estimate = 30.0
    obs_scene = "Main"

    def execute(self, obs, app, config):
        app.focus_app()
        app.click_at("my_button")
        app.wait(2.0)
        app.scroll_down(3)
```
### 2. Timeline sequence (narration-synchronized)

```python
from narractive.sequences.base import TimelineSequence
from narractive.core.timeline import NarrationCue


class MyDemo(TimelineSequence):
    name = "Live Demo"
    sequence_id = "seq01"
    duration_estimate = 60.0

    def build_timeline(self, obs, app, config):
        return [
            NarrationCue(
                text="Welcome to the demo.",
                actions=lambda: app.wait(1.0),
                sync="during",
            ),
            NarrationCue(
                text="Let's open the settings.",
                actions=lambda: app.click_at("settings_button"),
                sync="after",
            ),
        ]
```
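One plausible reading of the two `sync` modes is that a `"during"` action fires when its narration starts, while an `"after"` action fires once the narration's estimated duration has elapsed. The sketch below simulates that interpretation with plain dicts; Narractive's actual timeline engine may work differently:

```python
# Conceptual sketch of sync="during" vs sync="after" scheduling.
# This simulates cue timing with plain dicts; Narractive's actual
# timeline engine may differ.

def schedule(cues: list[dict], wpm: float = 150.0) -> list[tuple[float, str]]:
    """Return (time_offset, narration_text) pairs marking when each cue's action fires."""
    events, t = [], 0.0
    for cue in cues:
        # Estimate narration length from word count and speaking rate.
        duration = len(cue["text"].split()) / wpm * 60.0
        fire_at = t if cue["sync"] == "during" else t + duration
        events.append((round(fire_at, 2), cue["text"]))
        t += duration
    return events

events = schedule([
    {"text": "Welcome to the demo.", "sync": "during"},
    {"text": "Let's open the settings.", "sync": "after"},
])
```

Under this model the first action fires at t=0 alongside its narration, while the second waits until its narration finishes.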
### 3. Multilingual diagram labels

```python
from narractive.diagrams.i18n import DiagramLabels

labels = DiagramLabels(
    labels={
        "server": {"fr": "Serveur", "en": "Server", "pt": "Servidor"},
        "client": {"fr": "Client", "en": "Client", "pt": "Cliente"},
    },
    titles={
        "architecture": {"fr": "Architecture", "en": "Architecture"},
    },
    default_lang="fr",
)

name = labels.l("server", "en")  # "Server"
```
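The automatic language fallback could work roughly as follows; this is a hypothetical reimplementation of the lookup, not the actual `DiagramLabels` class:

```python
# Hypothetical sketch of an i18n lookup with language fallback;
# not Narractive's actual DiagramLabels implementation.

class Labels:
    def __init__(self, labels: dict[str, dict[str, str]], default_lang: str = "fr"):
        self.labels = labels
        self.default_lang = default_lang

    def l(self, key: str, lang: str) -> str:
        """Return the label for `lang`, falling back to the default language, then the key itself."""
        translations = self.labels.get(key, {})
        return translations.get(lang) or translations.get(self.default_lang) or key

labels = Labels({"server": {"fr": "Serveur", "en": "Server"}})
labels.l("server", "en")  # "Server"
labels.l("server", "pt")  # no Portuguese entry: falls back to "Serveur"
```

Falling back to the key itself keeps a missing translation visible in the rendered diagram instead of raising an error mid-production.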
### 4. Register sequences

Create `my_project/sequences/__init__.py`:

```python
from my_project.sequences.seq00_intro import MyIntro
from my_project.sequences.seq01_demo import MyDemo

SEQUENCES = [MyIntro, MyDemo]
```

Then run:

```bash
narractive --list --sequences-package my_project.sequences
narractive --all --sequences-package my_project.sequences
```
## Configuration

See `config.template.yaml` for all available options. Key sections:

| Section | Purpose |
|---|---|
| `obs` | OBS WebSocket connection, scenes, output directory |
| `app` | Window title, panel name, calibrated UI positions |
| `timing` | Click/type/scroll delays, transition pauses |
| `diagrams` | Mermaid rendering (resolution, theme, colors) |
| `narration` | TTS engine, voice, speed, F5-TTS/XTTS options |
| `subtitles` | SRT generation (enabled, max chars, max lines) |
| `capture` | Headless frame capture (FPS, resolution, display) |
| `output` | Final video encoding (resolution, fps, codec, quality preset) |
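An illustrative `config.yaml` fragment covering a few of these sections is shown below. The specific keys and values are assumptions modeled on the table above; consult `config.template.yaml` for the authoritative names:

```yaml
# Illustrative only; see config.template.yaml for the actual key names.
obs:
  host: localhost
  port: 4455
  output_dir: recordings/
narration:
  engine: edge-tts
  voice: fr-FR-HenriNeural
  speed: 1.0
subtitles:
  enabled: true
  max_chars: 42
  max_lines: 2
output:
  resolution: 1920x1080
  fps: 30
  quality: final
```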
## TTS Engines

| Engine | Cost | Quality | Multilingual | Setup |
|---|---|---|---|---|
| edge-tts | Free | Good | Yes | `pip install edge-tts` (included) |
| ElevenLabs | Paid | Excellent | Yes | `pip install elevenlabs` + API key |
| F5-TTS | Free | Excellent | No | Conda env + GPU recommended |
| XTTS v2 | Free | Excellent | Yes | `pip install TTS` + GPU recommended |
## Requirements

- Python 3.10+
- FFmpeg (for video assembly)
- OBS Studio (desktop mode) or Docker (headless mode)
- Your target application installed and running

## License

MIT
## File details

Details for the file `narractive-2.5.0.tar.gz` (source distribution).

- Download URL: narractive-2.5.0.tar.gz
- Size: 152.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `2bd4bc3f8cc8558d20e8f8a114459a0408ba521f2aa6f65b26b8dee21681faa3` |
| MD5 | `25e9b562c5aa09a897fd7fe7a7859631` |
| BLAKE2b-256 | `c684b6e8135962af86cfd9228442137c3fb6134fcc1333b3335d0780ee341f19` |
### Provenance

The following attestation bundle was made for `narractive-2.5.0.tar.gz`:

- Publisher: `publish.yml` on imagodata/narractive
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: narractive-2.5.0.tar.gz
- Subject digest: `2bd4bc3f8cc8558d20e8f8a114459a0408ba521f2aa6f65b26b8dee21681faa3`
- Sigstore transparency entry: 1179582940
- Permalink: imagodata/narractive@ed3aeb4bc2a61922bb2bc52f9cef2dd3481eaf10
- Branch / Tag: refs/tags/v2.5.0
- Owner: https://github.com/imagodata
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@ed3aeb4bc2a61922bb2bc52f9cef2dd3481eaf10
- Trigger Event: release
## File details

Details for the file `narractive-2.5.0-py3-none-any.whl` (built distribution).

- Download URL: narractive-2.5.0-py3-none-any.whl
- Size: 132.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `3b0ff3b3b52926f71a261c64fef34e2388058931923e762cf3ae86a4adb8c5b1` |
| MD5 | `c7e01ddb9d6997bb25805f46c35214a8` |
| BLAKE2b-256 | `64039532de6e687e88e9f973d5bc1cd01ead6b3a7a978c005b5b9149347e3a9c` |
### Provenance

The following attestation bundle was made for `narractive-2.5.0-py3-none-any.whl`:

- Publisher: `publish.yml` on imagodata/narractive
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: narractive-2.5.0-py3-none-any.whl
- Subject digest: `3b0ff3b3b52926f71a261c64fef34e2388058931923e762cf3ae86a4adb8c5b1`
- Sigstore transparency entry: 1179582947
- Permalink: imagodata/narractive@ed3aeb4bc2a61922bb2bc52f9cef2dd3481eaf10
- Branch / Tag: refs/tags/v2.5.0
- Owner: https://github.com/imagodata
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@ed3aeb4bc2a61922bb2bc52f9cef2dd3481eaf10
- Trigger Event: release