OpenWebGoggles
Browser-based human-in-the-loop UIs for AI coding agents
AI coding agents are good at writing code. They are not good at showing you things. An agent can generate a 200-line diff, but it has no way to pull up a side-by-side review UI, highlight the parts that matter, and wait for you to say "approved" or "try again with fewer abstractions."
OpenWebGoggles fixes that. It gives any agent — Claude Code, a shell script, anything that can write JSON — the ability to open a browser-based UI and get structured decisions back from a human.
Not a chat interface. Not a terminal dump. A real interactive panel: forms, approval flows, dashboards, multi-step wizards. The kind of thing you'd build if you had a few days and a frontend team. Except the agent builds it on the fly from a JSON schema, and the whole round-trip takes seconds.
Agent ←→ OpenWebGoggles Server ←→ Browser UI ←→ Human
"The goggles — they do everything."
What This Actually Looks Like
Here's a concrete example. Your agent finishes a security audit and has 12 findings to triage. Without OpenWebGoggles, it dumps them into the terminal and asks you to type approve or reject twelve times. With OpenWebGoggles, it opens a tabbed wizard in your browser — one finding per screen, editable severity dropdowns, analyst notes, a progress bar — and reads back your structured decisions when you're done.
The agent doesn't need to know HTML. It writes a JSON object describing what it wants to show, and the built-in dynamic renderer handles the rest:
{
  "title": "Security Finding 1 of 12",
  "status": "waiting_input",
  "data": {
    "sections": [
      { "type": "text", "content": "**SQL Injection** in `/api/users` endpoint" },
      { "type": "form", "fields": [
        { "key": "severity", "label": "Severity", "type": "select",
          "options": ["critical", "high", "medium", "low"], "value": "high" },
        { "key": "notes", "label": "Analyst Notes", "type": "textarea" }
      ]}
    ]
  },
  "actions_requested": [
    { "id": "confirm", "type": "approve", "label": "Confirmed" },
    { "id": "fp", "type": "reject", "label": "False Positive" }
  ]
}
The agent gets back:
{
  "actions": [{
    "action_id": "confirm",
    "type": "approve",
    "value": { "severity": "critical", "notes": "Escalated — no parameterized queries anywhere in this module." }
  }]
}
Structured data in, structured data out. The browser is just the rendering layer in between.
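The round-trip above can be sketched from the agent's side with nothing but the standard library. The helper names here (`make_state`, `decisions`) are illustrative, not part of the package:

```python
import json

def make_state(title, sections, actions):
    """Build a state payload for the dynamic renderer (illustrative helper)."""
    return {
        "title": title,
        "status": "waiting_input",
        "data": {"sections": sections},
        "actions_requested": actions,
    }

def decisions(response):
    """Extract (action_id, value) pairs from the browser's response."""
    return [(a["action_id"], a.get("value", {})) for a in response["actions"]]

state = make_state(
    "Security Finding 1 of 12",
    [{"type": "text", "content": "**SQL Injection** in `/api/users`"}],
    [{"id": "confirm", "type": "approve", "label": "Confirmed"}],
)
payload = json.dumps(state)  # this is what the agent sends

response = {"actions": [{"action_id": "confirm", "type": "approve",
                         "value": {"severity": "critical"}}]}
print(decisions(response))  # → [('confirm', {'severity': 'critical'})]
```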
Quick Start
Install from PyPI (pipx recommended — isolates dependencies and puts the binary on PATH):
pipx install openwebgoggles
Don't have pipx? Install it first: `brew install pipx && pipx ensurepath` (macOS) or `pip install --user pipx && pipx ensurepath` (Linux). Or use plain `pip install openwebgoggles` if you prefer.
Then bootstrap for your editor:
Claude Code
openwebgoggles init claude
This creates .mcp.json and .claude/settings.json with the right permissions. Restart Claude Code and you're live.
OpenCode
openwebgoggles init opencode
This creates opencode.json with the MCP server configured. Restart OpenCode and you're live.
Try It
Tell your agent:
- "Show me a review UI for these changes and wait for my approval."
- "Create a dashboard showing the build progress."
- "Walk me through these security findings one at a time with severity dropdowns."
The agent figures out the JSON schema, calls webview_ask, and a panel opens in your browser. You make your decisions, click approve, and the agent continues with your structured response.
What Gets Installed
Four MCP tools — that's the entire API surface:
| Tool | What it does |
|---|---|
| `webview_ask(state)` | Show a UI and block until the human responds |
| `webview_show(state)` | Show a UI without blocking (dashboards, progress) |
| `webview_read()` | Poll for actions without blocking |
| `webview_close()` | Close the session |
Manual Setup
If you'd rather configure things by hand, add to your project's .mcp.json:
{
  "mcpServers": {
    "openwebgoggles": {
      "command": "openwebgoggles"
    }
  }
}
Or for OpenCode, add to opencode.json:
{
  "mcp": {
    "openwebgoggles": {
      "type": "local",
      "command": ["openwebgoggles"],
      "enabled": true
    }
  }
}
Bash Scripts (for shell-based agents)
If your agent orchestrates via shell scripts — or if you just want to understand the mechanics — the bash interface exposes the same capabilities:
# Start a session
bash scripts/start_webview.sh --app dynamic
# Push state to the browser
bash scripts/write_state.sh '{"version":1, "status":"pending_review", "title":"Review Changes", ...}'
# Block until the human responds (up to 5 minutes)
ACTIONS=$(bash scripts/wait_for_action.sh --timeout 300)
# Clean up
bash scripts/stop_webview.sh
| Script | Purpose |
|---|---|
| `start_webview.sh --app <name> [--port N]` | Launch server and open browser |
| `write_state.sh '<json>'` | Atomic state write |
| `wait_for_action.sh [--timeout N]` | Block until human acts |
| `read_actions.sh [--clear]` | Read actions, optionally clear |
| `stop_webview.sh` | Graceful shutdown |
| `init_webview_app.sh <name>` | Scaffold a custom app |
How It Works Under the Hood
The architecture is deliberately simple. Three JSON files in a .openwebgoggles/ directory are the entire interface between the agent and the browser.
| File | Direction | Purpose |
|---|---|---|
| `state.json` | Agent → Browser | What to show: data, UI schema, requested actions |
| `actions.json` | Browser → Agent | What the human decided |
| `manifest.json` | Shared | Session config: ports, app name, auth token |
The Python server watches these files and pushes updates to the browser over WebSocket in real time. The browser renders the UI and writes responses back. The agent reads the response file and continues.
This means you can debug the entire system by looking at three JSON files. No hidden state, no message queues, no databases. If something looks wrong in the browser, cat .openwebgoggles/state.json and you'll see exactly what the agent sent.
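The agent's side of that file protocol is small enough to sketch directly. This is a minimal reimplementation for illustration, assuming only the three-file layout described above (the real package adds auth and signing on top):

```python
import json
import os
import time
from pathlib import Path

SESSION = Path(".openwebgoggles")

def write_state(state: dict) -> None:
    """Atomically replace state.json so the browser never sees a partial write."""
    SESSION.mkdir(exist_ok=True)
    tmp = SESSION / "state.json.tmp"
    tmp.write_text(json.dumps(state))
    os.replace(tmp, SESSION / "state.json")  # atomic rename on POSIX

def wait_for_actions(timeout: float = 300.0, interval: float = 0.5):
    """Poll actions.json until the human responds or the timeout expires."""
    path = SESSION / "actions.json"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if path.exists():
            actions = json.loads(path.read_text())
            path.unlink()  # consume the response so the next wait starts clean
            return actions
        time.sleep(interval)
    return None  # timed out with no human response
```

The write-temp-then-rename step is why `write_state.sh` can call itself an "atomic state write": the browser's file watcher only ever sees the old complete file or the new complete file.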
The Dynamic Renderer
Most use cases don't require custom HTML. The built-in dynamic app takes a JSON schema and renders a complete, styled interface.
Section types: text, items, form, actions
Form field types: text, textarea, number, select, checkbox, email, url, static
Action styles: primary, success, danger, warning, ghost, approve, reject, submit, delete
You can combine these to build approval flows, configuration wizards, data entry forms, triage interfaces — really any structured interaction that runs on fields, selections, and decisions. For 80% of use cases, you never touch HTML.
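As a sketch, a deploy-approval screen combining three of those section types might be composed like this. The exact payload key for an `items` section is an assumption here; check the Data Contract for the real field names:

```python
# Section and field type names taken from the lists above.
SECTION_TYPES = {"text", "items", "form", "actions"}
FIELD_TYPES = {"text", "textarea", "number", "select", "checkbox",
               "email", "url", "static"}

state = {
    "title": "Deploy to production?",
    "status": "waiting_input",
    "data": {"sections": [
        {"type": "text", "content": "3 services will be redeployed."},
        # "items" payload key is assumed, not confirmed by the docs above
        {"type": "items", "items": ["api", "worker", "frontend"]},
        {"type": "form", "fields": [
            {"key": "window", "label": "Maintenance window", "type": "select",
             "options": ["now", "tonight", "weekend"], "value": "tonight"},
        ]},
    ]},
    "actions_requested": [
        {"id": "go", "type": "approve", "label": "Deploy"},
        {"id": "hold", "type": "reject", "label": "Hold"},
    ],
}

# Sanity-check the schema against the documented type names.
assert all(s["type"] in SECTION_TYPES for s in state["data"]["sections"])
assert all(f["type"] in FIELD_TYPES
           for s in state["data"]["sections"] if s["type"] == "form"
           for f in s["fields"])
```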
Custom Apps
When the dynamic renderer isn't enough — complex visualizations, custom layouts, domain-specific interactions — you can build a custom app:
bash scripts/init_webview_app.sh my-dashboard
This scaffolds index.html, app.js, and style.css with the SDK already wired up. The client SDK is vanilla JavaScript with zero dependencies:
const wv = new OpenWebGoggles();
await wv.connect();
// Listen for state updates from the agent
wv.onStateUpdate((state) => {
// Render however you want
});
// Send structured responses back
await wv.approve("action-id", { comment: "Looks good" });
await wv.reject("action-id");
await wv.submitInput("field-id", "user input");
await wv.sendAction("custom-id", "custom", { any: "data" });
Two working examples are included in examples/:
- approval-review — Code review UI with unified diffs, per-file toggles, approve/reject with comments
- security-qa — Step-by-step security findings triage with editable fields, severity dropdowns, and a progress bar
These aren't toy demos. They're functional interfaces that handle real workflows. Start by reading their source if you're building something custom.
Patterns That Work Well
Single approval. Agent shows a summary, human clicks approve or reject. The simplest case, and probably the most common.
Multi-step wizard. For N items that need review, show one at a time. The agent calls webview_ask in a loop, advancing to the next item after each response. This avoids overwhelming the user with a wall of decisions.
Live dashboard. Agent calls webview_show (non-blocking) to display progress, then updates state periodically. Useful for long-running operations where the human wants visibility but doesn't need to act.
Batch triage. Show all items at once with per-item actions — tabs, cards, or a list with inline controls. Works well when the total count is under 10 or so.
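The multi-step wizard pattern reduces to a loop around one blocking call per item. Here `ask` is a stub standing in for whatever blocking call your agent has (the real `webview_ask` MCP tool, or a wrapper around `wait_for_action.sh`); the shape of the loop is the point:

```python
def triage(findings, ask):
    """One blocking ask per finding; collect the structured decisions."""
    results = []
    for i, finding in enumerate(findings, start=1):
        state = {
            "title": f"Security Finding {i} of {len(findings)}",
            "status": "waiting_input",
            "data": {"sections": [{"type": "text", "content": finding}]},
            "actions_requested": [
                {"id": "confirm", "type": "approve", "label": "Confirmed"},
                {"id": "fp", "type": "reject", "label": "False Positive"},
            ],
        }
        # Blocks until the human acts, then advances to the next finding.
        results.append(ask(state)["actions"][0])
    return results

# Stub that auto-approves everything, standing in for a human in the browser.
approve_all = lambda state: {"actions": [{"action_id": "confirm",
                                          "type": "approve"}]}
results = triage(["SQLi in /api/users", "XSS in /search"], approve_all)
```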
Security
The trust model is straightforward: the agent and the browser are on the same machine, and nobody else should be able to read or tamper with the communication between them.
Nine defense layers enforce this, all enabled by default:
- Localhost-only binding — the server only listens on 127.0.0.1
- Bearer token auth — 32-byte session token, constant-time comparison
- WebSocket first-message auth — token verified before any data flows
- Ed25519 signatures — server signs every state update (cryptographic proof of origin)
- HMAC-SHA256 — browser signs every action (tamper detection)
- Nonce replay prevention — each action can only be submitted once
- Content Security Policy — per-request nonce blocks inline script injection
- SecurityGate — 22 XSS patterns, zero-width character detection, schema validation
- Rate limiting — 30 actions per minute per session
All cryptographic keys are ephemeral — generated in memory at session start, zeroed on shutdown, never written to disk in plaintext. The test suite covers OWASP Top 10, MITRE ATT&CK techniques, and LLM-specific attack vectors across 471 tests.
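Two of those layers, constant-time token comparison and HMAC-signed actions, are cheap to illustrate with Python's standard library. This shows the general technique, not the project's actual code:

```python
import hashlib
import hmac
import secrets

session_token = secrets.token_bytes(32)  # per-session bearer token
hmac_key = secrets.token_bytes(32)       # key the browser signs actions with

def token_ok(presented: bytes) -> bool:
    # Constant-time comparison: timing doesn't leak how many bytes matched.
    return hmac.compare_digest(presented, session_token)

def sign_action(payload: bytes) -> str:
    return hmac.new(hmac_key, payload, hashlib.sha256).hexdigest()

def verify_action(payload: bytes, signature: str) -> bool:
    # Recompute and compare in constant time; any tampering changes the MAC.
    return hmac.compare_digest(signature, sign_action(payload))

msg = b'{"action_id": "confirm", "type": "approve"}'
sig = sign_action(msg)
assert verify_action(msg, sig)
assert not verify_action(b'{"action_id": "tampered"}', sig)
```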
The tradeoff is real, though: this level of defense adds complexity to the codebase. If you're running in a fully trusted local environment and wondering what each layer actually buys you, the security tests are the best documentation.
Development
# Run the full test suite
python -m pytest -v
# Lint
ruff check scripts/
Python 3.11+ required. Core dependencies: websockets, PyNaCl, mcp.
Reference Documentation
For the full details:
- Data Contract — JSON file formats, state lifecycle, status values
- SDK API — Complete client SDK reference
- Integration Guide — Step-by-step patterns for connecting from other tools
License
Apache License 2.0 — see LICENSE.
Built by Techtoboggan.