ManulEngine
Deterministic, DSL-first web and desktop automation on top of Playwright, with explainable heuristics, a standalone Python API, and optional local AI fallback.
Status
Status: Alpha. Developed by a single person.
This project is actively being battle-tested. Bugs are expected, APIs may evolve, and there are no promises about stability or production readiness. The core claim is transparency: when a step works, you should understand why; when it fails, you should have enough signal to diagnose it.
Core Philosophy
ManulEngine is an interpreter for the .hunt DSL. A hunt file expresses intent in plain English, the runtime snapshots the DOM, ranks candidates with heuristics, and executes through Playwright.
Determinism first
The primary resolver is not an LLM. It is a deterministic scoring system backed by DOM traversal and weighted heuristics:
- DOM collection uses a native TreeWalker in injected JavaScript.
- Candidate ranking is handled by DOMScorer.
- Scores are normalized on a 0.0 to 1.0 confidence scale.
- Weighted channels include cache, semantics, text, attributes, and proximity.
That means the engine can explain more than "element not found". It can show whether a target lost because text affinity was weak, semantic alignment was poor, the candidate was hidden, or another channel outweighed it.
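As a rough illustration, the weighted-channel model can be sketched in a few lines of Python. The channel names come from this README; the weights and helper functions below are assumptions for illustration, not the engine's real DOMScorer values.

```python
# Illustrative sketch of weighted-channel scoring (NOT the real DOMScorer).
# Channel names match the README; the weight values below are assumed.
WEIGHTS = {"cache": 0.20, "semantics": 0.25, "text": 0.30,
           "attributes": 0.10, "proximity": 0.15}

def score_candidate(channels: dict) -> float:
    """Combine per-channel scores (each 0.0-1.0) into one 0.0-1.0 confidence."""
    return round(sum(WEIGHTS[name] * channels.get(name, 0.0) for name in WEIGHTS), 3)

def rank(candidates: dict) -> list:
    """Rank candidates best-first, keeping per-channel scores inspectable."""
    return sorted(((name, score_candidate(ch)) for name, ch in candidates.items()),
                  key=lambda pair: pair[1], reverse=True)
```

With this shape, a losing candidate can be explained channel by channel instead of being reduced to "element not found".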
Transparency instead of AI magic
The recommended default is heuristics-only mode:
{
"model": null,
"browser": "chromium",
"controls_cache_enabled": true,
"semantic_cache_enabled": true
}
When a local Ollama model is enabled, it acts as a fallback for ambiguous cases rather than the primary execution path.
Dual-persona workflow
The authoring model is intentionally split across two layers:
- QA, analysts, and operators write plain-English .hunt steps.
- SDETs extend those flows with Python hooks, lifecycle setup, and custom controls when a UI or backend dependency should not be forced into the generic DSL path.
The intended boundary is straightforward:
- Keep business intent and readable flow in the DSL.
- Keep environment setup, backend interaction, and custom widget handling in Python.
Why ManulEngine
Most browser automation tools sold as AI automation are cloud wrappers around selectors and retries. ManulEngine aims at the opposite design.
Deterministic first, not AI-first
The runtime resolves DOM elements through a native JavaScript TreeWalker plus a weighted DOMScorer. That gives you a repeatable result from page state plus step text, not from prompt variance.
Explainable instead of opaque
When the engine chooses the wrong target, you should be able to inspect the actual scoring channels that drove the result. The point is not just success cases. The point is actionable failure analysis.
One artifact for two personas
QA, ops, and analysts can keep the flow readable in .hunt. SDETs can attach Python, lifecycle hooks, and custom controls without splitting the scenario into two separate systems.
Optional AI fallback, off by default
"model": null remains the recommended default. When a local Ollama model is enabled, it is a fallback for ambiguous cases, not the primary execution engine.
Four Automation Pillars
ManulEngine is not only a test runner. The same runtime and the same DSL can cover four adjacent use cases:
- QA and E2E testing
- RPA workflows
- Synthetic monitoring
- AI agent execution targets
QA and E2E testing
Write plain-English flows, verify outcomes, attach reports and screenshots when needed, and keep selectors out of the test source.
RPA workflows
Use the same DSL to log into portals, download files, fill forms, extract values, and hand work to Python when a backend or filesystem step is involved.
Synthetic monitoring
Pair .hunt files with @schedule: and manul daemon to run scheduled health checks with the same execution model as your test flows.
AI agent execution targets
If an external agent needs to drive the browser, .hunt is a safer constrained target than raw Playwright code because the runtime still owns validation, scoring, retries, and reporting.
Key Features
Explainability layers
The runtime and companion Manul Engine Extension for VS Code expose multiple explainability layers instead of forcing you to inspect a terminal dump.
CLI: --explain
manul --explain path/to/file.hunt
manul --explain --headless path/to/hunts/ --html-report
That mode prints candidate rankings and per-channel scoring breakdowns for each resolved step.
Representative CLI explain output:
┌─ EXPLAIN: Target = "Login"
│ Step: Click the 'Login' button
│
│ #1 <button> "Login"
│ total: 0.593
│ text: 0.281
│ attributes: 0.050
│ semantics: 0.225
│ proximity: 0.037
│ cache: 0.000
│
└─ Decision: selected "Login" with score 0.593
VS Code: title bar action
During a debug pause, the extension exposes Explain Current Step in the editor title bar so you can request explanation data for the paused step without leaving the editor.
VS Code: hover tooltips in debug mode
Run a hunt in Debug mode through Test Explorer, then hover over any resolved step line in the .hunt file. The extension shows the stored per-channel breakdown directly on that line.
What-If Analysis REPL (Explain Next)
During a debug pause in the terminal (--debug), type w to enter the interactive What-If Analysis REPL. This REPL is terminal-only; it is unavailable in VS Code extension protocol mode (--break-lines) because stdin is reserved for debug control tokens.
For a one-shot evaluation without entering the REPL, type e in terminal mode or use the explain-next token in extension protocol mode. The extension protocol also accepts an optional JSON payload to evaluate a different step: explain-next {"step":"Click the 'Cancel' button"}.
The REPL and one-shot mode both evaluate hypothetical steps against the live browser state without executing them. They capture a read-only DOM snapshot, run DOMScorer heuristics in-memory, optionally query the configured LLM, and return a 0–10 confidence score with an explanation.
[DEBUG] Next step: Click the 'Submit' button
ENTER/n = execute · e = explain-next · h = re-highlight · w = what-if · pause = Inspector · c = continue all… w
🔮 explain-next> Click the 'Cancel' button
┌─ 🔮 WHAT-IF ANALYSIS: "Click the 'Cancel' button"
│ Confidence: 9/10 (HIGH)
│ Heuristic Score: 0.587 (raw 104382)
│ Best Heuristic Match: "Cancel"
│ Target Element: <button> "Cancel"
│ Explanation: Button found and enabled, would navigate back to the form list.
│ Risk: Unsaved form data would be lost.
└─ 🔮 END
🔮 explain-next> !execute
✅ Executing: "Click the 'Cancel' button"
REPL commands:
| Command | Action |
|---|---|
| <any DSL step> | Evaluate hypothetically; no page mutation |
| !execute | Accept the last evaluated step and resume execution |
| !execute N | Accept evaluation #N from history |
| !history | Show all evaluations from this session |
| !context | Show current page URL and title |
| !quit | Exit the REPL without executing anything |
The best heuristic match is highlighted with a persistent magenta outline on the live page so you can visually confirm the target before committing.
Desktop and Electron automation via executable_path
ManulEngine is not limited to browser tabs. Because it runs on Playwright, it can also drive Electron-based desktop applications.
Set executable_path in the runtime config and use OPEN APP instead of NAVIGATE:
{
"model": null,
"browser": "chromium",
"executable_path": "/path/to/YourElectronApp"
}
@context: Desktop smoke test
@title: Desktop Smoke
STEP 1: Attach to the window
OPEN APP
VERIFY that 'Welcome' is present
STEP 2: Exercise the main screen
Click the 'Settings' button
VERIFY that 'Preferences' is present
DONE.
Smart recorder for native controls
The recorder is meant to capture intent, not just raw pointer activity. A concrete example is native <select> handling: the injected recorder observes semantic change events and emits DSL such as Select 'Option' from 'Dropdown' instead of recording a brittle chain of low-level clicks on <option> elements.
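The intent-capture idea can be sketched as a tiny event-to-DSL translator. The event shape and the emit_dsl function are hypothetical illustrations, not the recorder's actual internals.

```python
# Hypothetical sketch of turning a semantic change event into a readable
# DSL line instead of a chain of low-level <option> clicks. The event
# dict shape and emit_dsl name are illustrative assumptions.
def emit_dsl(event: dict) -> str:
    if event.get("type") == "change" and event.get("tag") == "select":
        return f"Select '{event['selected_label']}' from '{event['label']}'"
    # Fall back to a plain click for other recorded interactions.
    return f"Click the '{event.get('label', '?')}' {event.get('tag', 'element')}"

line = emit_dsl({"type": "change", "tag": "select",
                 "label": "Dropdown", "selected_label": "Option"})
```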
Python hooks and custom controls
When the generic resolver should not be forced to understand a bespoke widget, ManulEngine provides an explicit SDET escape hatch:
- [SETUP]/[TEARDOWN] hooks for environment and data setup.
- CALL PYTHON for backend lookups or computed values.
- @before_all/@after_all lifecycle hooks for suite-wide orchestration.
- @custom_control handlers for complex UI elements.
That balance is intentional: keep the common path readable, and keep the edge cases programmable.
Public Python API (ManulSession)
For users who prefer writing automation in pure Python, the runtime exports ManulSession: an async context manager that owns the Playwright lifecycle and exposes clean methods for navigation, clicks, fills, verifications, and extraction.
from manul_engine import ManulSession
async with ManulSession(headless=True) as session:
await session.navigate("https://example.com/login")
await session.fill("Username field", "admin")
await session.fill("Password field", "secret")
await session.click("Log in button")
await session.verify("Welcome")
price = await session.extract("Product Price")
ManulSession can also execute raw DSL snippets against the already-open browser via run_steps():
async with ManulSession() as session:
await session.navigate("https://example.com")
result = await session.run_steps("""
STEP 1: Search
Fill 'Search' with 'ManulEngine'
PRESS Enter
VERIFY that 'Results' is present
""")
assert result.status == "pass"
State, variables, and scope
Variable handling is strict rather than ad hoc. The runtime supports @var:, @script:, EXTRACT, SET, and CALL PYTHON ... into {var} with deterministic placeholder substitution in downstream steps.
Useful patterns:
- @var: for static test data at the top of the file.
- @script: for file-local aliases such as @script: {auth} = scripts.auth_helpers, then CALL PYTHON {auth}.issue_token into {token}; or callable aliases such as @script: {issue_token} = scripts.auth_helpers.issue_token, then CALL PYTHON {issue_token} into {token}.
- EXTRACT ... into {var} for values pulled from the UI.
- SET {var} = value for mid-run assignment.
- CALL PYTHON module.func into {var} for backend-generated runtime values such as OTPs or tokens.
Scope precedence is explicit:
| Priority | Scope | Source |
|---|---|---|
| 1 | Row vars | @data: iteration values |
| 2 | Step vars | EXTRACT, SET, CALL PYTHON ... into {var} |
| 3 | Mission vars | @var: declarations |
| 4 | Global vars | lifecycle hooks and process-level state |
| 5 | Import vars | @var: inherited from @import: source files |
Tags and data-driven runs
The runtime also supports selective execution and data-driven loops without changing the DSL model.
@tags: smoke, auth
@data: users.csv
manul path/to/hunts/ --tags smoke
Lifecycle orchestration and hooks
There are two levels of Python orchestration:
- Per-file [SETUP]/[TEARDOWN] and inline CALL PYTHON for file-local setup or backend calls.
- Suite-level manul_hooks.py with @before_all, @after_all, @before_group, and @after_group for shared state across multiple hunts.
Benchmarks and test coverage
The repo ships with both synthetic tests and adversarial fixtures. The point is not to claim maturity. The point is to show that the scoring model, parser, hooks, recorder, scheduler, and reporter are exercised against concrete failure modes.
- python run_tests.py runs the synthetic and unit suite.
- demo/benchmarks/run_benchmarks.py exercises dynamic IDs, overlapping traps, nested tables, and custom dropdown fixtures.
- demo/tests/*.hunt holds integration-style hunts for real browser flows; run them with python demo/run_demo.py.
Getting Started
Install
pip install manul-engine==0.0.9.27
playwright install
If you install standalone Python dependencies manually instead of using the packaged extras, the current minimums in this release line are playwright==1.58.0 and ollama==0.6.1.
Optional local AI fallback:
pip install "manul-engine[ai]==0.0.9.27"
ollama pull qwen2.5:0.5b
ollama serve
Manul Engine Extension
ManulEngine has a companion Manul Engine Extension for VS Code. Normal installation should use the published Marketplace build:
code --install-extension manul-engine.manul-engine
MCP Server for Copilot Chat
A separate VS Code extension turns ManulEngine into a native MCP server so GitHub Copilot chat can drive a real browser through natural language:
code --install-extension manul-engine.manul-mcp-server
After installation and Reload Window, ManulMcpServer appears in the MCP Servers panel and Copilot gains the following tools:
| Tool | What it does |
|---|---|
| manul_run_step | Run a single DSL step or natural-language action in the browser |
| manul_run_goal | Convert a natural-language goal into steps and execute them |
| manul_run_hunt | Run a full .hunt document passed as text |
| manul_run_hunt_file | Run a .hunt file from disk |
| manul_validate_hunt | Validate a .hunt document without running it |
| manul_normalize_step | Preview how a step will be normalized to DSL before sending it |
| manul_get_state | Get current browser and session state |
| manul_preview_goal | Preview goal-to-DSL conversion without execution |
| manul_scan_page | List all interactive elements on the current page |
| manul_save_hunt | Save a .hunt file to disk |
The MCP bridge maintains a persistent Playwright session across calls. No separate HTTP server is required — the extension spawns a Python runner directly.
Natural-language input is accepted for manul_run_step and manul_run_goal and normalized to proper DSL before execution:
# These are equivalent:
manul_run_step: click login
manul_run_step: Click the 'login' button
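The normalization step can be pictured as a small text rewriter. The real manul_normalize_step logic is not documented here; this regex-based sketch only illustrates how the two equivalent forms above could converge on the same DSL line.

```python
import re

# Illustrative sketch of normalizing loose natural language into DSL.
# This is NOT the real normalizer; it only demonstrates the idea that
# "click login" and "Click the 'login' button" map to the same step.
def normalize_step(text: str) -> str:
    text = text.strip()
    match = re.match(r"(?i)^click\s+(?:the\s+)?'?([\w ]+?)'?(?:\s+button)?$", text)
    if match:
        return f"Click the '{match.group(1)}' button"
    return text  # already DSL-shaped (or unhandled): pass through unchanged
```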
See the ManulMcpServer repository for the full developer guide.
Configuration
Create manul_engine_configuration.json in the workspace root. All keys are optional, but this file is the main runtime control plane:
{
"model": null,
"browser": "chromium",
"browser_args": [],
"headless": false,
"ai_always": false,
"ai_policy": "prior",
"ai_threshold": null,
"timeout": 5000,
"nav_timeout": 30000,
"controls_cache_enabled": true,
"controls_cache_dir": "cache",
"semantic_cache_enabled": true,
"custom_controls_dirs": ["controls"],
"log_name_maxlen": 0,
"log_thought_maxlen": 0,
"tests_home": "tests",
"auto_annotate": false,
"executable_path": null,
"channel": null,
"workers": 1,
"retries": 0,
"screenshot": "on-fail",
"html_report": false
}
Notes:
- model: null keeps the runtime fully heuristics-only.
- browser_args passes extra launch flags to the browser.
- ai_always, ai_policy, and ai_threshold only matter when a model is enabled.
- controls_cache_dir, tests_home, and auto_annotate control runtime filesystem behavior.
- custom_controls_dirs lists directories where @custom_control Python modules are scanned. Default: ["controls"].
- channel targets an installed browser such as Chrome or Edge.
- executable_path targets a custom executable such as an Electron app.
Environment variables always win over JSON config:
export MANUL_HEADLESS=true
export MANUL_BROWSER=firefox
export MANUL_MODEL=qwen2.5:0.5b
export MANUL_WORKERS=4
export MANUL_EXPLAIN=true
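The stated precedence (environment over JSON) can be sketched as a simple merge. The merge_config helper is illustrative; only the MANUL_ prefix convention comes from this README.

```python
# Sketch of the documented precedence: MANUL_* environment variables
# always override values from manul_engine_configuration.json.
# merge_config is an illustrative helper, not the engine's loader.
def merge_config(file_config: dict, environ: dict) -> dict:
    merged = dict(file_config)
    for key, value in environ.items():
        if key.startswith("MANUL_"):
            # MANUL_BROWSER=firefox overrides the "browser" JSON key, etc.
            merged[key.removeprefix("MANUL_").lower()] = value
    return merged

config = merge_config({"browser": "chromium", "workers": 1},
                      {"MANUL_BROWSER": "firefox", "PATH": "/usr/bin"})
```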
Configuration reference:
| Key | Default | Description |
|---|---|---|
| model | null | Ollama model name. null keeps the runtime heuristics-only. |
| headless | false | Hide the browser window. |
| browser | "chromium" | Browser engine: chromium, firefox, or webkit. |
| browser_args | [] | Extra launch flags for the browser. |
| ai_threshold | auto | Score threshold before optional LLM fallback. |
| ai_always | false | Always ask the LLM picker. Only makes sense when model is set. |
| ai_policy | "prior" | Treat the heuristic score as a prior hint or as a strict constraint. |
| controls_cache_enabled | true | Enable the persistent per-site controls cache. |
| controls_cache_dir | "cache" | Cache directory, relative to CWD or an absolute path. |
| semantic_cache_enabled | true | Enable in-session semantic cache reuse. |
| custom_controls_dirs | ["controls"] | Directories scanned for @custom_control Python modules. Resolved relative to CWD. |
| timeout | 5000 | Default action timeout in ms. |
| nav_timeout | 30000 | Navigation timeout in ms. |
| log_name_maxlen | 0 | Truncate element names in logs. 0 means no limit. |
| log_thought_maxlen | 0 | Truncate LLM thought strings in logs. 0 means no limit. |
| workers | 1 | Max hunt files to run in parallel. |
| tests_home | "tests" | Default output directory for new hunts and scan output. |
| auto_annotate | false | Insert # 📍 Auto-Nav: comments after URL changes during a run. |
| channel | null | Installed browser channel such as chrome or msedge. |
| executable_path | null | Absolute path to a custom executable such as Electron. |
| retries | 0 | Retry failed hunt files this many times. |
| screenshot | "on-fail" | Screenshot mode: none, on-fail, or always. |
| html_report | false | Generate or refresh reports/manul_report.html after the run. Recent CLI invocations within the same report session are merged instead of silently overwriting the last file. |
| explain_mode | false | Enable DOMScorer explain output with per-channel scoring breakdowns for each resolved element. |
HTML report notes:
- The runtime stores recent report-session state in reports/manul_report_state.json.
- This is what lets separate CLI or Test Explorer invocations accumulate into one reports/manul_report.html during a recent session window.
- The HTML header shows Run Session and Merged invocations so it is obvious when the file contains more than one invocation.
First hunt file
@context: Smoke test for a login flow
@title: Login Smoke
@var: {email} = admin@example.com
@var: {password} = secret123
STEP 1: Open the app
NAVIGATE to https://example.com/login
VERIFY that 'Sign In' is present
STEP 2: Authenticate
Fill 'Email' field with '{email}'
Fill 'Password' field with '{password}'
Wait for 'Sign In' to be visible
Click the 'Sign In' button
VERIFY that 'Dashboard' is present
DONE.
Run it
manul path/to/login.hunt
Useful commands:
python run_tests.py
manul path/to/hunts/
manul --headless path/to/file.hunt
manul --html-report --screenshot on-fail path/to/hunts/
manul --explain path/to/file.hunt
When --html-report is enabled, repeated runs from VS Code Test Explorer no longer leave only the final hunt in the HTML output. The runtime merges recent invocations into the same report session and labels the report header accordingly.
Runtime Reference
A quick reference of runtime capabilities covered above:
- OPEN APP plus executable_path lets the same DSL drive Electron apps.
- @schedule: plus manul daemon turns a hunt into a built-in monitor or RPA task.
- @var:, EXTRACT, SET, and CALL PYTHON ... into {var} give you deterministic variable flow without hardcoding runtime values.
- [SETUP], [TEARDOWN], inline CALL PYTHON, and manul_hooks.py cover environment setup, backend calls, and suite-wide orchestration.
- @custom_control is the explicit escape hatch when a widget should be handled with raw Playwright instead of generic heuristics.
- SCAN PAGE and manul record accelerate authoring without replacing the readable DSL with low-level recordings.
- Wait for "Text" to be visible, Wait for 'Spinner' to disappear, and Wait for "Submit" to be hidden give the DSL a deterministic explicit-wait path backed by Playwright locator.wait_for() instead of hardcoded sleeps.
Contextual UI navigation
When identical controls exist multiple times on the page, the DSL can now add a contextual qualifier instead of dropping into brittle selectors.
Click the 'Delete' button NEAR 'John Doe'
Click the 'Login' button ON HEADER
Click the 'Privacy Policy' link ON FOOTER
Click the 'Delete' button INSIDE 'Actions' row with 'John Doe'
- NEAR 'Anchor' biases ranking by Euclidean pixel distance to the resolved anchor element.
- ON HEADER prefers elements in the top 15% of the viewport or inside header/nav ancestry.
- ON FOOTER prefers elements in the bottom 15% of the viewport or inside footer ancestry.
- INSIDE 'Container' row with 'Text' narrows the search to the resolved row or container subtree before normal action scoring continues.
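The NEAR qualifier's distance bias can be sketched in a few lines. The Euclidean-distance idea comes from this README; the decay function and its constants are illustrative assumptions.

```python
import math

# Sketch of the NEAR qualifier: bias candidate scores by Euclidean pixel
# distance to a resolved anchor element. The 1/(1 + d/100) decay is an
# assumed illustration, not the engine's actual formula.
def near_bias(candidate_center: tuple, anchor_center: tuple,
              base_score: float) -> float:
    distance = math.dist(candidate_center, anchor_center)
    return base_score / (1.0 + distance / 100.0)

# Two identical 'Delete' buttons: the one nearer the anchor row wins.
close = near_bias((110, 200), (100, 200), 0.8)   # 10 px from anchor
far = near_bias((600, 200), (100, 200), 0.8)     # 500 px from anchor
```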
Explicit waits
Use explicit waits when the DOM is still settling after navigation or after an action triggers async UI updates.
Wait for "Welcome, User" to be visible
Wait for 'Loading...' to disappear
Wait for "Submit" to be hidden
disappear maps to Playwright's hidden state, so the runtime treats hidden and disappear as the same wait target internally.
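That phrase-to-state mapping can be sketched as a small lookup. The 'disappear' = 'hidden' equivalence is stated above; the mapping table and wait_state helper are illustrative.

```python
# Sketch of mapping DSL wait phrases onto Playwright locator.wait_for
# states. 'disappear' and 'hidden' share the same target, per the docs;
# the dispatch helper itself is an illustrative assumption.
WAIT_STATES = {
    "to be visible": "visible",
    "to disappear": "hidden",   # disappear maps to Playwright's hidden state
    "to be hidden": "hidden",
}

def wait_state(step: str) -> str:
    for phrase, state in WAIT_STATES.items():
        if step.rstrip().endswith(phrase):
            return state
    raise ValueError(f"unsupported wait phrase in: {step!r}")

state = wait_state("Wait for 'Loading...' to disappear")
```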
Strict assertions
Use strict assertions when you need exact element text, exact placeholder attributes, or exact current field values instead of loose presence checks.
Verify "save" button has text "Save me"
Verify "Error message" element has text "Invalid credentials"
Verify 'Login' field has placeholder "Login/Email"
Verify "Search" input has placeholder "Type to search..."
Verify "Email" field has value "captain@manul.com"
Verify "Notes" element has value "treasure map"
- Verify "<element_name>" <type> has text "<expected_text>" resolves the element through the normal DOM heuristics, reads locator.inner_text().strip(), and performs a strict == comparison.
- Verify "<element_name>" <type> has placeholder "<expected_placeholder>" resolves the element, reads the placeholder attribute, and performs a strict == comparison.
- Verify "<element_name>" <type> has value "<expected_value>" resolves the element, reads its current value with input_value() and a value-attribute fallback, normalizes missing values to "", and performs a strict == comparison.
- On mismatch, the runtime raises a readable assertion that includes the resolved element locator plus Expected and Actual values.
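The strict-comparison contract can be sketched without a browser. The strip-then-exact-== behavior and the Expected/Actual failure shape follow the description above; the verify_text helper itself is illustrative.

```python
# Sketch of the documented strict assertion: exact == after stripping,
# with a readable Expected/Actual failure message. verify_text is an
# illustrative stand-in for the runtime's verification step.
def verify_text(element_name: str, actual, expected: str) -> None:
    actual = (actual or "").strip()
    if actual != expected:
        raise AssertionError(
            f"Strict text mismatch on {element_name!r}: "
            f"Expected {expected!r}, Actual {actual!r}"
        )

verify_text("save button", "  Save me \n", "Save me")  # strips, then passes
try:
    verify_text("Error message", "Invalid login", "Invalid credentials")
except AssertionError as err:
    message = str(err)  # carries both Expected and Actual for diagnosis
```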
Static variables and hooks
@var: {email} = admin@example.com
@var: {password} = secret123
@script: {db} = scripts.db_helpers
@script: {seed_admin_user} = scripts.db_helpers.seed_admin_user
[SETUP]
PRINT "Preparing demo user for {email}"
CALL PYTHON {seed_admin_user} with args: "{email}" "{password}"
CALL PYTHON {db}.issue_login_token with args: "{email}" into {login_token}
[END SETUP]
STEP 1: Login
NAVIGATE to https://example.com/login
Fill 'Email' field with '{email}'
Fill 'Password' field with '{password}'
Click the 'Sign In' button
VERIFY that 'Dashboard' is present
STEP 2: OTP verification
Click the 'Send OTP' button
CALL PYTHON api_helpers.fetch_otp with args: "{email}" "{login_token}" into {otp}
Fill 'OTP' field with '{otp}'
Click the 'Verify' button
VERIFY that 'Welcome' is present
[TEARDOWN]
PRINT "Cleaning up seeded user for {email}"
CALL PYTHON {db}.clean_database with args: "{email}"
[END TEARDOWN]
- Hook syntax is bracket-only: [SETUP]/[END SETUP] and [TEARDOWN]/[END TEARDOWN].
- PRINT "..." is valid inside hook blocks and resolves {variables} before printing.
- CALL PYTHON ... with args: ... is optional sugar for positional arguments; plain CALL PYTHON mod.func "arg" still works.
- @script: lets you declare a file-local alias once and reuse either CALL PYTHON {alias}.func or CALL PYTHON {callable_alias} in hooks and mission steps.
- @script: must use dotted Python import paths only: scripts.db_helpers or scripts.db_helpers.issue_login_token. Slash paths like scripts/db_helpers.py are rejected.
- File-based helpers resolve from the .hunt directory first, then the project root, before falling back to normal imports via sys.path.
- If setup fails, the mission is marked as broken and the browser steps are skipped. Teardown still runs after the mission whenever setup succeeded.
Supported CALL PYTHON forms:
CALL PYTHON package.module.function
CALL PYTHON package.module.function with args: "arg1" "arg2"
CALL PYTHON package.module.function "arg1" "arg2" into {result}
CALL PYTHON package.module.function into {result}
CALL PYTHON {module_alias}.function
CALL PYTHON {module_alias}.function into {result}
CALL PYTHON {callable_alias}
CALL PYTHON {callable_alias} with args: "arg1" "arg2"
CALL PYTHON {callable_alias} into {result}
Alias examples:
@script: {db} = scripts.db_helpers
@script: {issue_login_token} = scripts.db_helpers.issue_login_token
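Resolving a dotted CALL PYTHON target like scripts.db_helpers.issue_login_token can be sketched with importlib: try ever-shorter module prefixes, then walk the remaining parts as attributes. This is an illustrative resolver, not ManulEngine's actual import logic (which also checks the .hunt directory and project root first).

```python
import importlib

# Illustrative resolver for dotted CALL PYTHON targets such as
# 'scripts.db_helpers.issue_login_token'. Tries the longest importable
# module prefix, then getattr-walks the rest. Not the engine's real code.
def resolve_dotted(path: str):
    parts = path.split(".")
    for split in range(len(parts), 0, -1):
        try:
            obj = importlib.import_module(".".join(parts[:split]))
        except ImportError:
            continue  # prefix is not a module; try a shorter one
        for attr in parts[split:]:
            obj = getattr(obj, attr)
        return obj
    raise ImportError(f"cannot resolve {path!r}")

sqrt = resolve_dotted("math.sqrt")  # stdlib example target
```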
Tags, scheduler, and execution controls
@tags: smoke, regression
@schedule: every 5 minutes
@import: Login, Logout from lib/auth.hunt
@export: Checkout
manul path/to/hunts/ --tags smoke
manul daemon path/to/hunts/ --headless
manul pack lib/auth --output dist/
manul install dist/auth-1.0.0.huntlib
Shared library support: @import: pulls named STEP blocks from other .hunt files, USE Login expands them inline, and @export: controls which blocks are importable. Package archives (.huntlib) can be packed and installed with manul pack and manul install.
Global lifecycle hooks
from manul_engine import before_all, after_all, GlobalContext
@before_all
def setup(ctx: GlobalContext) -> None:
ctx.variables["BASE_URL"] = "https://staging.example.com"
@after_all
def teardown(ctx: GlobalContext) -> None:
cleanup_test_data()
Testing and Benchmarks
The project is alpha, but it is not undocumented or untested.
- python run_tests.py runs the synthetic and unit suite
- demo/tests/*.hunt holds integration-style hunts; run with python demo/run_demo.py
- demo/benchmarks/run_benchmarks.py exercises adversarial fixtures such as dynamic IDs, overlays, nested tables, and custom dropdowns
Representative coverage areas include:
- logical STEP grouping and hierarchical execution
- ManulSession API behavior
- scheduler parsing
- lifecycle hooks
- scoped variables
- HTML reporting
- iframe routing
- visibility filtering and TreeWalker behavior
- custom controls and lazy control loading
- structured exception hierarchy
- config validation
- import depth guards
Docker CI/CD Runner
ManulEngine ships an alpha-stage headless CI runner image for browser automation pipelines.
docker run --rm --shm-size=1g \
-v $(pwd)/hunts:/workspace/hunts:ro \
-v $(pwd)/reports:/workspace/reports \
ghcr.io/alexbeatnik/manul-engine:0.0.9.27 \
--html-report --screenshot on-fail hunts/
All MANUL_* environment variables work as overrides:
docker run --rm --shm-size=1g \
-e MANUL_WORKERS=4 \
-e MANUL_BROWSER=firefox \
-v $(pwd)/hunts:/workspace/hunts:ro \
-v $(pwd)/reports:/workspace/reports \
ghcr.io/alexbeatnik/manul-engine:0.0.9.27 \
hunts/
The image runs as non-root user manul (UID 1000), includes dumb-init for proper signal handling, and sets --no-sandbox --disable-dev-shm-usage by default. Build with additional browsers via --build-arg BROWSERS="chromium firefox". A docker-compose.yml is included for local development with manul and manul-daemon services.
What's New in v0.0.9.27
- What-If Analysis REPL (ExplainNextDebugger): Interactive debug REPL for hypothetical step evaluation. During a debug pause, type w (terminal) to enter the REPL, or e / send explain-next (extension protocol) for one-shot evaluation. Combines DOMScorer heuristic scoring with optional LLM analysis to produce a 0–10 confidence rating, element match info, risk assessment, and corrective suggestions. The best heuristic match is highlighted with a persistent magenta outline on the live page. REPL commands: !execute, !history, !context, !quit. Extension protocol: explain-next emits a \x00MANUL_EXPLAIN_NEXT\x00{json} marker with a serialized WhatIfResult; accepts an optional explain-next {"step":"..."} payload for overridden step text. New module: explain_next.py with PageContext, WhatIfResult, and ExplainNextDebugger classes. 112-assertion test suite (test_53_explain_next.py).
- What-If execute bug fixes: The _execute_step() recursive call now passes strategic_context and step_idx by keyword (they were misordered as positional args). Injected What-If steps in core.py now run through substitute_memory() so {var} placeholders are resolved before execution.
- LLM JSON fence-stripping: _parse_llm_json() in llm.py now strips markdown code fences before JSON parsing, improving robustness with models that wrap JSON responses in triple-backtick blocks.
v0.0.9.26
- EngineConfig frozen dataclass: New config.py module with an injectable EngineConfig replacing module-level globals. ManulEngine.__init__ accepts an optional config parameter; all runtime settings (timeouts, AI, auto-annotate) are stored as instance attributes. A validate() method checks configuration invariants.
- Structured exception hierarchy: New exceptions.py with a ManulEngineError base and 7 concrete subclasses (ConfigurationError, ElementResolutionError, HookExecutionError, HuntImportError, VerificationError, SessionError, ScheduleError). All re-exported from manul_engine.
- Thread safety: Registry and module-cache access guarded by locks in controls.py and hooks.py.
- Scoring early exit: DOMScorer.score_all() can short-circuit when the threshold is exceeded, reducing scoring time on large DOMs.
- Import depth guard: Recursive .hunt imports capped at depth 10.
- CI quality gates: Ruff lint + format check workflow; lint gate in the release pipeline; Dependabot for automated dependency updates.
- run_mission() decomposition: Extracted _launch_browser() and _parse_task() from the 400-line run_mission() method for testability and readability.
- Demo directory restructure: All integration hunts, scripts, controls, benchmarks, and pages.json moved to demo/. New demo/run_demo.py runner script. Synthetic test suite extracted to a standalone run_tests.py.
- Security hygiene: Eliminated a false-positive "shell access" alert from package security scanners (socket.dev).
License
Apache-2.0.
Version: 0.0.9.27