# openadapt-viewer

Standalone HTML viewer generation for OpenAdapt ML dashboards and benchmarks.

A reusable component library for OpenAdapt visualization. Build standalone HTML viewers for training dashboards, benchmark results, capture playback, and demo retrieval.
## Features

- **Component-based**: Reusable building blocks (screenshot, playback, metrics, filters)
- **Composable**: Combine components to build custom viewers
- **Standalone HTML**: Generated files work offline; no server required
- **Event transcript**: Real-time audio transcription synchronized with playback
- **Consistent styling**: Shared CSS variables and dark-mode support
- **Alpine.js integration**: Lightweight interactivity out of the box
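The "standalone HTML" property usually comes from inlining every asset into the generated file. As a minimal sketch of that idea (not the library's actual mechanism; `bytes_to_data_uri` and `inline_img_tag` are hypothetical helpers), images can be embedded as base64 data URIs:

```python
import base64
from pathlib import Path

def bytes_to_data_uri(data: bytes, mime: str = "image/png") -> str:
    """Encode raw bytes as a base64 data URI so the HTML needs no external files."""
    return f"data:{mime};base64,{base64.b64encode(data).decode('ascii')}"

def inline_img_tag(path: str) -> str:
    """Return an <img> tag with the file's contents embedded directly in the markup."""
    return f'<img src="{bytes_to_data_uri(Path(path).read_bytes())}" alt="{Path(path).name}">'

# Stand-in for real PNG bytes
uri = bytes_to_data_uri(b"\x89PNG...")
```

An HTML file built this way opens from disk with no server and no sibling asset files, at the cost of larger file size.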
## Installation

```bash
pip install openadapt-viewer
```

Or with uv:

```bash
uv add openadapt-viewer
```
## Quick Start

### Using Components

```python
from openadapt_viewer.components import (
    screenshot_display,
    playback_controls,
    metrics_grid,
    filter_bar,
    badge,
)

# Screenshot with click overlays
html = screenshot_display(
    image_path="screenshot.png",
    overlays=[
        {"type": "click", "x": 0.5, "y": 0.3, "label": "H", "variant": "human"},
        {"type": "click", "x": 0.6, "y": 0.4, "label": "AI", "variant": "predicted"},
    ],
)

# Metrics cards
html = metrics_grid([
    {"label": "Total Tasks", "value": 100},
    {"label": "Passed", "value": 75, "color": "success"},
    {"label": "Failed", "value": 25, "color": "error"},
    {"label": "Success Rate", "value": "75%", "color": "accent"},
])
```
### Using PageBuilder

Build complete pages from components:

```python
from openadapt_viewer.builders import PageBuilder
from openadapt_viewer.components import metrics_grid, screenshot_display

builder = PageBuilder(title="My Viewer", include_alpine=True)

builder.add_header(
    title="Benchmark Results",
    subtitle="Model: gpt-5.1",
    nav_tabs=[
        {"href": "dashboard.html", "label": "Training"},
        {"href": "viewer.html", "label": "Viewer", "active": True},
    ],
)

builder.add_section(
    metrics_grid([
        {"label": "Tasks", "value": 100},
        {"label": "Passed", "value": 75, "color": "success"},
    ]),
    title="Summary",
)

# Render to file
builder.render_to_file("output.html")
```
## Ready-to-Use Viewers

Five production viewers are available:

- **Benchmark Viewer**: Visualize benchmark evaluation results
- **Capture Viewer**: Play back recorded GUI interactions
- **Training Dashboard**: Monitor ML training progress (via openadapt-ml)
- **Retrieval Viewer**: Display demo search results (via openadapt-retrieval)
- **Segmentation Viewer**: View episode segmentation results

```python
from openadapt_viewer.viewers.benchmark import generate_benchmark_html

# From a benchmark results directory
generate_benchmark_html(
    data_path="benchmark_results/run_001/",
    output_path="viewer.html",
)
```

All viewers use the canonical component-based pattern. See VIEWER_PATTERNS.md for details.
## CLI Usage

```bash
# Generate a demo benchmark viewer
openadapt-viewer demo --tasks 10 --output viewer.html

# Generate from benchmark results
openadapt-viewer benchmark --data results/run_001/ --output viewer.html
```
## Components

All components return HTML strings that can be composed together. Use them with PageBuilder or embed them inline.
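For intuition, a component here is just a function from data to an HTML string, and composition is string concatenation. The sketch below uses hypothetical re-implementations; `badge_sketch`, `metrics_card_sketch`, and the CSS class names are illustrative assumptions, not the library's actual markup:

```python
from html import escape

def badge_sketch(label: str, variant: str = "neutral") -> str:
    """Hypothetical badge component: a styled <span> with a variant class."""
    return f'<span class="oa-badge oa-badge--{variant}">{escape(label)}</span>'

def metrics_card_sketch(label: str, value) -> str:
    """Hypothetical metric card: value on top, label underneath."""
    return (
        '<div class="oa-metric">'
        f'<div class="oa-metric__value">{escape(str(value))}</div>'
        f'<div class="oa-metric__label">{escape(label)}</div>'
        "</div>"
    )

# Components compose by concatenating their HTML strings
section = (
    "<section>"
    + metrics_card_sketch("Passed", 75)
    + badge_sketch("PASS", "success")
    + "</section>"
)
```

Note the `escape()` calls: because components return raw HTML, any user-supplied text must be escaped before interpolation.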
### Core Components

| Component | Description | Example Use Case |
|---|---|---|
| `screenshot_display` | Screenshot with click/highlight overlays | Capture frames, demo screenshots |
| `playback_controls` | Play/pause/speed controls for step playback | Video-like playback |
| `timeline` | Progress bar for step navigation | Scrub through recordings |
| `action_display` | Format actions (click, type, scroll, etc.) | Display action details |
| `metrics_card` | Single statistic card | Individual metric display |
| `metrics_grid` | Grid of metric cards | Summary dashboards |
| `filter_bar` | Filter dropdowns with optional search | Filter and search data |
| `filter_dropdown` | Single dropdown filter | Domain/status filters |
| `selectable_list` | List with selection support | Task lists, file lists |
| `list_item` | Individual list item | Custom list entries |
| `badge` | Status badges (pass/fail, etc.) | Status indicators |
### Enhanced Components

| Component | Description | Example Use Case |
|---|---|---|
| `video_playback` | Video playback from screenshot sequences | Smooth capture playback |
| `video_playback_with_actions` | Video plus synchronized action overlay | Capture with action overlay |
| `action_timeline` | Timeline with action markers | Action sequence view |
| `action_timeline_vertical` | Vertical action timeline | Compact action view |
| `comparison_view` | Side-by-side comparison | Before/after, A/B test |
| `overlay_comparison` | Overlay comparison with slider | Image comparison |
| `action_type_filter` | Filter by action type | Filter clicks/types/scrolls |
| `action_type_pills` | Action type pill buttons | Quick action filtering |
| `action_type_dropdown` | Action type dropdown | Compact action filter |
| `failure_analysis_panel` | Failure analysis dashboard | Benchmark failure analysis |
| `failure_summary_card` | Failure summary card | Individual failure details |

In total, 22 components are available for building viewers. See VIEWER_PATTERNS.md for complete usage examples.
## Project Structure

```text
src/openadapt_viewer/
├── components/            # Reusable UI building blocks
│   ├── screenshot.py      # Screenshot with overlays
│   ├── playback.py        # Playback controls
│   ├── timeline.py        # Progress bar
│   ├── action_display.py  # Action formatting
│   ├── metrics.py         # Stats cards
│   ├── filters.py         # Filter dropdowns
│   ├── list_view.py       # Selectable lists
│   └── badge.py           # Status badges
├── builders/              # High-level page builders
│   └── page_builder.py    # PageBuilder class
├── styles/                # Shared CSS
│   └── core.css           # CSS variables and base styles
├── core/                  # Core utilities
│   ├── types.py           # Pydantic models
│   └── html_builder.py    # Jinja2 utilities
├── viewers/               # Full viewer implementations
│   └── benchmark/         # Benchmark results viewer
├── examples/              # Reference implementations
│   ├── benchmark_example.py
│   ├── training_example.py
│   ├── capture_example.py
│   └── retrieval_example.py
└── templates/             # Jinja2 templates
```
## Audio Transcript Feature

The viewer includes an audio transcript feature that displays real-time transcription of captured audio alongside the visual playback. This is particularly useful for:

- **Debugging workflows**: See what was said at each step
- **Documentation**: Auto-generate narrative descriptions of recorded sessions
- **Analysis**: Correlate verbal instructions with UI actions
- **Training**: Review narrated demonstrations with synchronized visuals
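Conceptually, a transcript is a list of timestamped segments. Below is a hedged sketch of such a segment model, plus a formatter for the `M:SS.ss` timestamp style the viewer displays; the class and field names are assumptions, not the library's schema:

```python
from dataclasses import dataclass

@dataclass
class TranscriptSegment:
    # Hypothetical model; field names are assumptions about the schema.
    start: float  # seconds from recording start
    end: float
    text: str

def fmt_ts(seconds: float) -> str:
    """Format seconds as M:SS.ss, e.g. 5.6 -> '0:05.60'."""
    minutes, rest = divmod(seconds, 60)
    return f"{int(minutes)}:{rest:05.2f}"

seg = TranscriptSegment(0.0, 5.6, "Open System Settings")
stamp = f"{fmt_ts(seg.start)}-{fmt_ts(seg.end)}"  # '0:00.00-0:05.60'
```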
### Key Capabilities

The transcript panel provides:

- **Timestamped transcription**: Each transcript segment is stamped with its time range in the recording (e.g., `0:00.00`-`0:05.60`)
- **Synchronized playback**: The transcript automatically highlights and scrolls as the video plays
- **Searchable text**: Find specific moments in long recordings by searching transcript content
- **Copy functionality**: Export transcript text for documentation or analysis
### How It Works

When captures are recorded with audio (using openadapt-capture's audio recording features), the viewer automatically:

- Displays the transcript in a dedicated panel in the sidebar
- Timestamps each transcript segment relative to the recording start time
- Syncs transcript highlighting with the current playback position
- Updates the displayed transcript as you navigate through events

The transcript appears alongside the event list and event details, providing a complete picture of what happened during the recording.
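The highlight-sync step reduces to finding which segment the current playback time falls in. A minimal sketch, assuming segments are sorted by start time (an illustration of the technique, not the viewer's actual code):

```python
import bisect

def active_segment_index(starts: list[float], t: float) -> int:
    """Index of the segment whose start time is the latest one <= t, or -1 before the first."""
    return bisect.bisect_right(starts, t) - 1

# Segment start times in seconds, sorted ascending
starts = [0.0, 5.6, 12.3, 20.0]
idx = active_segment_index(starts, 13.0)  # third segment (index 2) is active
```

Binary search keeps the per-frame lookup cheap even for long recordings with many segments.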
## Synthetic Demo Viewer

**New**: An interactive browser-based viewer for synthetic WAA demonstration data.

### Quick Start

```bash
# Open the synthetic demo viewer
open synthetic_demo_viewer.html
```
### What It Shows

- 82 synthetic demos across 6 domains (notepad, paint, clock, browser, file_explorer, office)
- Filtering by domain and selection of specific tasks
- Demo content with syntax-highlighted steps
- How demos are used in actual API prompts
- Impact comparison: 33% to 100% first-action accuracy improvement with demo-conditioned prompting
- Action reference: all 8 action types (CLICK, TYPE, WAIT, etc.)
### Purpose

Synthetic demos are AI-generated example trajectories that show, step by step, how to complete Windows automation tasks. They are included in prompts when calling Claude/GPT APIs during benchmark evaluation; this is called demo-conditioned prompting.

**Impact**: Improved first-action accuracy from 33% to 100% on the evaluated tasks.
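As an illustration of demo-conditioned prompting, the sketch below assembles a prompt that prepends a demonstration trajectory to the task instruction. The template wording and the helper name are assumptions for illustration; the actual prompts used in benchmark evaluation may differ:

```python
def build_demo_conditioned_prompt(task: str, demo_steps: list[str]) -> str:
    """Assemble a prompt that prepends a worked demonstration to the task."""
    # Number the demonstration steps so the model sees an ordered trajectory
    demo = "\n".join(f"{i}. {step}" for i, step in enumerate(demo_steps, 1))
    return (
        "You control a Windows desktop. Here is a demonstration of a similar task:\n"
        f"{demo}\n\n"
        f"Now complete this task, one action per line: {task}"
    )

prompt = build_demo_conditioned_prompt(
    "Create a new text file named notes.txt",
    ["CLICK start_menu", "TYPE 'notepad'", "CLICK notepad_app"],
)
```

The point of the technique is that a concrete in-context trajectory anchors the model's first action, which is where the reported accuracy gain was measured.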
### Documentation

- **Quick Start**: `QUICK_REFERENCE.md` - One-page overview
- **Complete Guide**: `SYNTHETIC_DEMOS_EXPLAINED.md` - Full explanation
- **Examples**: `DEMO_EXAMPLES_SHOWCASE.md` - 5 diverse demo examples
- **Master Index**: `SYNTHETIC_DEMO_INDEX.md` - Central navigation hub
### Features

- Dark theme matching OpenAdapt style
- Domain filtering (All, Notepad, Paint, Clock, Browser, File Explorer, Office)
- Task selector with estimated step counts
- Dual-panel display: demo content plus prompt usage
- Side-by-side impact comparison (with vs. without demos)
- Complete action-types reference
- Fully self-contained (no external dependencies); works offline

See SYNTHETIC_DEMO_INDEX.md for complete documentation.
## Screenshots

### Full Viewer Interface

The viewer provides a complete interface for exploring captured GUI interactions, with playback controls, timeline navigation, event details, and a real-time audio transcript.

*Interactive viewer showing the "Turn off Night Shift" workflow with screenshot display (center), event list (right sidebar, top), and audio transcript (right sidebar, bottom)*

### Playback Controls

Step through captures with playback controls, timeline scrubbing, and keyboard shortcuts (Space to play/pause, arrow keys to navigate).

*Timeline and playback controls with overlay toggle, plus event details and synchronized transcript panel*

### Event List, Details, and Transcript

Browse all captured events with detailed information about each action. The transcript panel displays timestamped audio transcription that syncs with playback, showing exactly what was said at each moment in the recording.

*Event list sidebar showing captured actions with timing and type information, plus a live audio transcript with timestamps*

### Demo Workflow

*Example demo workflow viewer*
## Examples

Run the examples to see how different OpenAdapt packages can use the component library:

```bash
# Benchmark results (openadapt-evals)
python -m openadapt_viewer.examples.benchmark_example

# Training dashboard (openadapt-ml)
python -m openadapt_viewer.examples.training_example

# Capture playback (openadapt-capture)
python -m openadapt_viewer.examples.capture_example

# Retrieval results (openadapt-retrieval)
python -m openadapt_viewer.examples.retrieval_example
```
## Generating Screenshots

To regenerate the README screenshots:

```bash
# Install Playwright (one-time setup)
uv pip install "openadapt-viewer[screenshots]"
uv run playwright install chromium

# Install openadapt-capture (required)
cd ../openadapt-capture
uv pip install -e .
cd ../openadapt-viewer

# Generate screenshots
uv run python scripts/generate_readme_screenshots.py

# Or with custom options
uv run python scripts/generate_readme_screenshots.py \
    --capture-dir /path/to/openadapt-capture \
    --output-dir docs/images \
    --max-events 50
```
The script will:

- Load captures from openadapt-capture (turn-off-nightshift and demo_new)
- Generate interactive HTML viewers
- Take screenshots using Playwright
- Save the screenshots to `docs/images/`
## Development

```bash
# Clone and install
git clone https://github.com/OpenAdaptAI/openadapt-viewer.git
cd openadapt-viewer
uv sync --all-extras

# Run tests
uv run pytest tests/ -v

# Run the linter
uv run ruff check .
```
## Integration

openadapt-viewer is used by other OpenAdapt packages:

- **openadapt-ml**: Training dashboards and model comparison
- **openadapt-evals**: Benchmark result visualization
- **openadapt-capture**: Capture recording playback
- **openadapt-retrieval**: Demo search result display
## Documentation

- **VIEWER_PATTERNS.md**: Canonical pattern for building viewers (required reading before writing a new viewer)
- **MIGRATION_GUIDE.md**: Step-by-step guide for converting inline viewers to component-based viewers
- **ARCHITECTURE.md**: System architecture and design patterns
- **CATALOG_SYSTEM.md**: Automatic recording discovery and indexing
- **SEARCH_FUNCTIONALITY.md**: Token-based search implementation
- **EPISODE_TIMELINE_QUICKSTART.md**: Adding episode timelines to viewers
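For readers without access to SEARCH_FUNCTIONALITY.md: token-based search generally means tokenizing documents into an inverted index, then intersecting the posting sets of each query token. A generic sketch of that technique (not the documented implementation):

```python
import re
from collections import defaultdict

def tokenize(text: str) -> list[str]:
    """Lowercase and split into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(docs: dict[str, str]) -> dict[str, set[str]]:
    """Map each token to the set of document ids containing it (an inverted index)."""
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, text in docs.items():
        for tok in tokenize(text):
            index[tok].add(doc_id)
    return index

def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return ids of documents containing every query token (AND semantics)."""
    toks = tokenize(query)
    if not toks:
        return set()
    result = set(index.get(toks[0], set()))
    for tok in toks[1:]:
        result &= index.get(tok, set())
    return result

index = build_index({
    "t1": "Turn off Night Shift",
    "t2": "Open night mode settings",
})
```

AND semantics narrow results as the query grows, which suits filtering long recordings down to specific moments.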
## License

MIT License - see the LICENSE file for details.