Resilient neuro-symbolic browser automation framework powered by Playwright and local LLMs (Ollama)
Project description
ManulEngine - The Mastermind
ManulEngine is a relentless hybrid (neuro-symbolic) framework for browser automation and E2E testing.
Forget brittle CSS/XPath locators that break on every UI update - write tests in plain English. Stop paying for expensive cloud APIs - leverage local micro-LLMs via Ollama, entirely on your machine.
Manul combines the blazing speed of Playwright, 20+ JavaScript DOM heuristics, and the reasoning of local neural networks. It is fast, private, and highly resilient to UI changes.
The Manul goes hunting and never returns without its prey.
ManulEngine runs on a potato. No GPU. No cloud APIs. No $0.02 per click. Just Playwright, heuristics, and optional tiny local models.
What's New in v0.0.9.0 - The Power User Update
- VERIFY ... is ENABLED: State verification now supports both ENABLED and DISABLED checks. Assert that interactive elements are truly active before attempting actions: VERIFY that 'Submit' is ENABLED.
- CALL PYTHON with Arguments: Hook functions and inline CALL PYTHON steps now accept positional arguments - static strings, unquoted tokens, and {var} placeholders resolved at runtime: CALL PYTHON helpers.multiply "6" "7" into {product}. Arguments are tokenised with shlex.split().
- Interactive HTML Reporter: Control panel with a "Show Only Failed" checkbox and tag filter chips. All @tags from executed hunt files are auto-collected and rendered as clickable chips for instant filtering. Missions carry data-status and data-tags attributes - all powered by inline Vanilla JS with zero external dependencies.
- Dual Persona Workflow: QA writes plain English. SDETs write Python hooks that now accept dynamic arguments ({variables}) directly from the .hunt file - no code changes needed on the QA side when backend logic evolves.
Previous highlights
- Normalised Heuristic Scoring (DOMScorer): The scoring engine now uses 0.0-1.0 float arithmetic under the hood. Five weighted channels - cache (2.0), semantics (0.60), text (0.45), attributes (0.25), proximity (0.10) - are combined via a WEIGHTS dict and multiplied by SCALE = 177,778 to produce the final integer score. An exact data-qa match is the single strongest heuristic signal (+1.0 text). Penalties are clean multipliers: disabled ×0.0, hidden ×0.1.
- TreeWalker-Based DOM Scanner: SNAPSHOT_JS no longer calls querySelectorAll - it walks the DOM with a native TreeWalker and a PRUNE set (SCRIPT, STYLE, SVG, NOSCRIPT, TEMPLATE, META, PATH, G, BR, HR) that rejects entire subtrees in one hop. Visibility is checked via the zero-layout-thrash checkVisibility() API with an automatic offsetWidth/offsetHeight fallback. Hidden file/checkbox/radio inputs are preserved as special exceptions.
- Safe iframe Support: _snapshot() iterates page.frames, injects SNAPSHOT_JS into each same-origin frame, and tags every returned element with frame_index. _frame_for(page, el) routes all downstream locator() and evaluate() calls to the correct Playwright Frame. Cross-origin and detached frames are silently skipped with retry logic.
- Clean, Unnumbered DSL: Scripts now read exactly like plain English (NAVIGATE to url instead of 1. NAVIGATE to url).
- Logical STEP Grouping: STEP [optional number]: [Description] metadata blocks map manual QA cases directly into .hunt files.
- Interactive Enterprise HTML Reporter: Dual-mode, zero-dependency reporter with native HTML5 accordions, auto-expanding failures, Flexbox layout, a "Show Only Failed" toggle, and tag-based filtering chips - all powered by inline Vanilla JS.
- Global Lifecycle Hooks: @before_all, @after_all, @before_group, @after_group orchestrate DB seeding and auth. ctx.variables serialise across parallel --workers.
Key Features
Heuristics-First Architecture
95% of the heavy lifting (element finding, assertions, DOM parsing) is handled by ultra-fast JavaScript and Python heuristics. The AI steps in only when genuine ambiguity arises.
The scoring engine (DOMScorer) uses normalised 0.0-1.0 floats across five weighted channels - cache, semantics, text, attributes, proximity - combined via a WEIGHTS dict and scaled to integer thresholds. An exact data-qa match (+1.0) is the single strongest signal; disabled elements are crushed by a ×0.0 multiplier.
When the LLM picker is used, Manul passes the heuristic score as a prior (hint) by default - the model can override the ranking only with a clear, disqualifying reason.
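As a back-of-the-envelope illustration, the documented numbers compose like this. This is a minimal sketch reconstructed from the notes above - the real DOMScorer internals are not published, so everything beyond the stated weights, scale, and penalties is an assumption:

```python
# Illustrative reconstruction of the documented scoring scheme - not the
# actual DOMScorer source. Channels are clamped to 0.0-1.0, weighted,
# summed, penalised, then scaled to an integer.
WEIGHTS = {"cache": 2.0, "semantics": 0.60, "text": 0.45,
           "attributes": 0.25, "proximity": 0.10}
SCALE = 177_778

def score(channels, disabled=False, hidden=False):
    raw = sum(WEIGHTS[name] * min(max(value, 0.0), 1.0)
              for name, value in channels.items())
    if disabled:          # disabled elements are zeroed out entirely
        raw *= 0.0
    elif hidden:          # hidden elements keep only 10% of their score
        raw *= 0.1
    return int(raw * SCALE)
```

With these numbers, a perfect text match alone scores int(0.45 × 177,778) = 80,000, while a cache hit dominates at 355,556.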
Ironclad JS Fallbacks
Modern websites love to hide elements behind invisible overlays, custom dropdowns, and zero-pixel traps. Manul uses Playwright with force=True plus retries and self-healing; for Shadow DOM elements it falls back to direct JS helpers to keep execution moving.
Deep Accessibility Heuristics
Manul scores elements using 20+ signals including aria-label, placeholder, name, data-qa, html_id, semantic input type, and contextual section headings. This means it handles modern single-page apps (React, Vue, Angular) and complex design systems (like Wikipedia's Vector 2022 / Codex skin) without any tuning โ accessibility attributes are treated as first-class identifiers.
Shadow DOM & iframe Awareness
The DOM snapshotter recursively walks shadow roots via TreeWalker and scans same-origin iframes by iterating page.frames. Each element carries a frame_index that routes all downstream actions to the correct Playwright Frame. Cross-origin frames are silently skipped.
Smart Anti-Phantom Guard & AI Rejection
Strict protection against LLM hallucinations. If the model is unsure, it returns {"id": null}; the engine treats that as a rejection and retries with self-healing.
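The guard boils down to treating anything but a concrete id as a rejection. An illustrative sketch - the engine's real response parsing is not published, so the exact JSON handling here is an assumption:

```python
import json

def parse_llm_pick(raw):
    # Treat a null/missing "id", non-dict payloads, and unparseable output
    # all as an explicit rejection (None) so the caller can retry/self-heal.
    try:
        pick = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return None
    if not isinstance(pick, dict):
        return None
    return pick.get("id")
```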
Adjustable AI Threshold
Control how aggressively Manul falls back to the local LLM via manul_engine_configuration.json (ai_threshold key) or the MANUL_AI_THRESHOLD environment variable. If not set, Manul auto-calculates it from the model size:
| Model size | Auto threshold |
|---|---|
| < 1b | 500 |
| 1b–4b | 750 |
| 5b–9b | 1000 |
| 10b–19b | 1500 |
| 20b+ | 2000 |
Set MANUL_AI_THRESHOLD=0 to disable the LLM entirely and run fully on deterministic heuristics.
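The table above can be mirrored in a few lines. An illustrative sketch - how the engine actually extracts the parameter count from a model tag, and what it does for tags without a size suffix, are assumptions:

```python
import re

def auto_threshold(model):
    # Parse the trailing "<n>b" size out of tags like "qwen2.5:0.5b".
    if not model:
        return None
    m = re.search(r"(\d+(?:\.\d+)?)b", model.lower())
    if not m:
        return 500  # assumed fallback for tags without a size suffix
    size = float(m.group(1))
    if size < 1:
        return 500
    if size < 5:
        return 750
    if size < 10:
        return 1000
    if size < 20:
        return 1500
    return 2000
```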
Persistent Controls Cache
Successful element resolutions are stored per-site and reused on subsequent runs - making repeated test flows dramatically faster.
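Conceptually, the cache is just a per-site map from element descriptions to resolved locators, persisted between runs. A toy sketch of that idea - the engine's actual on-disk layout and keying are assumptions:

```python
import json
from pathlib import Path

class ControlsCache:
    """Toy per-site persistent cache: one JSON file per site (illustrative)."""
    def __init__(self, cache_dir="cache"):
        self.root = Path(cache_dir)
        self.root.mkdir(parents=True, exist_ok=True)

    def _file(self, site):
        return self.root / f"{site}.json"

    def get(self, site, target):
        f = self._file(site)
        if not f.exists():
            return None
        return json.loads(f.read_text()).get(target)

    def put(self, site, target, selector):
        f = self._file(site)
        data = json.loads(f.read_text()) if f.exists() else {}
        data[target] = selector
        f.write_text(json.dumps(data))
```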
Automatic Retries - Tame Flaky Tests
Real-world E2E tests flake. Network hiccups, slow renders, third-party scripts - you name it. ManulEngine lets you retry failed hunts automatically:
manul tests/ --retries 2 # retry each failed hunt up to 2 times
manul tests/ --retries 3 --html-report # retry + generate an HTML report
Or set "retries": 2 in manul_engine_configuration.json for a permanent default. Each retry is a full fresh run - no stale state carried over.
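The documented retry semantics reduce to a simple loop. A hedged sketch - run_hunt here is a stand-in for the engine's actual runner, not a real API:

```python
def run_with_retries(run_hunt, hunt_path, retries):
    # Each attempt is a full fresh run; the hunt passes as soon as
    # any attempt succeeds.
    for _attempt in range(retries + 1):
        if run_hunt(hunt_path):
            return True
    return False
```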
Interactive Enterprise HTML Reporter
One flag. One self-contained HTML file. Dark-themed dashboard with pass/fail stats, native HTML5 <details> step accordions, inline base64 screenshots, and XSS-safe output - zero external dependencies, zero CDN, zero server.
Enterprise Upgrades:
- Dual-Mode Rendering: If STEP blocks are used, steps are grouped into logical accordions. Passing steps collapse by default; failing steps auto-expand to show exactly what broke.
- Flexbox Layout: Dropped clunky tables for a sleek Flexbox design, ensuring perfect text alignment and zero text mashing.
- "Show Only Failed" Toggle: A control-panel checkbox instantly hides all passing tests - zero-click triage for large suites.
- Tag Filter Chips: All @tags from executed hunt files are collected and rendered as clickable chips. Click a tag to show only matching tests - perfect for filtering by smoke, regression, login, etc.
manul tests/ --html-report # report saved to reports/manul_report.html
manul tests/ --screenshot always --html-report # embed a screenshot for every step
manul tests/ --screenshot on-fail --html-report # screenshots only on failures
All artifacts (logs, reports) are saved to the reports/ directory - your workspace stays clean.
Note: Per-step details (accordion + embedded screenshots) require --workers 1 (the default). When --workers > 1, the report aggregates per-hunt results only.
STEP Groups - Manual Test Cases Meet Automation
ManulEngine bridges the gap between manual QA test cases ("Steps & Expected Results") and automation. Use STEP N: Description headers to mirror the structure of your manual test plan directly in the .hunt file. The engine renders each group as an accordion section in the HTML report - with its own pass/fail badge and action count - so stakeholders can read results without decoding raw step indices.
STEP 1: Login
NAVIGATE to https://myapp.com/login
Fill 'Email' with '{email}'
Fill 'Password' with '{password}'
Click 'Sign In' button
VERIFY that 'Dashboard' is present.
STEP 2: Add item to cart
Click 'Add to cart' near 'Laptop Pro'
NAVIGATE to https://myapp.com/cart
VERIFY that 'Laptop Pro' is present.
STEP headers produce zero browser actions - they are pure metadata. The STEP N: tag is optional but highly recommended: it maps 1:1 to manual QA test cases and gives the HTML report its accordion structure. Action lines that follow must be written as plain text without leading numbers - never prefix with 1., 2., etc.
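The STEP [optional number]: Description grammar is easy to recognise mechanically. An illustrative parser sketch - the engine's real parser is not published:

```python
import re

def parse_step_header(line):
    # Returns (number_or_None, description) for a STEP header,
    # or None for an ordinary action line.
    m = re.match(r"STEP(?:\s+(\d+))?\s*:\s*(.+)$", line.strip())
    if not m:
        return None
    number = int(m.group(1)) if m.group(1) else None
    return number, m.group(2).strip()
```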
Custom Controls - Escape Hatch for Complex UI
Some UI elements defeat general-purpose heuristics entirely: React virtual tables, canvas-based date-pickers, WebGL widgets, drag-to-sort lists. Custom Controls let you write plain English in the hunt file while an SDET handles the underlying Playwright logic in Python.
- For Manual QA / Testers: Keep writing plain English steps. If a step targets a Custom Control, the engine routes it to a Python handler automatically. The .hunt file stays readable and unchanged.
- For SDETs / Developers: Register a handler with a one-line decorator tied to a page name from pages.json. Use any Playwright API inside - no heuristics, no AI involvement.
```python
# controls/booking.py
from manul_engine import custom_control

@custom_control(page="Checkout Page", target="React Datepicker")
async def handle_datepicker(page, action_type, value):
    await page.locator(".react-datepicker__input-container input").fill(value or "")
```

```
# tests/checkout.hunt - no change needed for the QA author
Fill 'React Datepicker' with '2026-12-25'
```
The engine loads every .py file in controls/ at startup. No configuration required.
See it in action: controls/demo_custom.py is a fully-working reference handler for a React Datepicker (with month navigation); tests/demo_controls.hunt is the companion hunt file - run it as-is to see the routing in action.
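Under the hood, decorator-based routing like this usually amounts to a small registry keyed by (page, target). A toy sketch of that idea - the engine's real registry and lookup rules are internal, so this only illustrates the concept:

```python
# Toy registry demonstrating how @custom_control-style routing can work.
_REGISTRY = {}

def custom_control(page, target):
    def decorator(fn):
        _REGISTRY[(page, target)] = fn
        return fn
    return decorator

def find_handler(page, target):
    # A step whose target matches a registered Custom Control bypasses
    # heuristics and AI entirely and goes straight to the handler.
    return _REGISTRY.get((page, target))

@custom_control(page="Checkout Page", target="React Datepicker")
def handle_datepicker(page, action_type, value):
    return f"would fill {value!r}"
```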
Static Variables - Clean Test Data, Zero Hardcoding
Declare all test data at the top of your .hunt file with @var:. Values are injected into the engine's memory before step 1 runs and can be referenced anywhere via {placeholder} - keeping your test logic clean and your data in one place.
@var: {email} = admin@example.com
@var: {password} = secret123
STEP 1: Login
NAVIGATE to https://myapp.com/login
Fill 'Email' with '{email}'
Fill 'Password' with '{password}'
Click the 'Sign In' button
VERIFY that 'Dashboard' is present.
Both @var: {key} = value and @var: key = value are accepted. Variables declared with @var: work identically to those created by EXTRACT and CALL PYTHON ... into {var}.
Tags - Run Exactly What You Need
Tag any .hunt file and cherry-pick which tests to run - no directory juggling required.
@tags: smoke, auth, regression
NAVIGATE to https://example.com/login
DONE.
manul tests/ --tags smoke # run only 'smoke'-tagged files
manul tests/ --tags smoke,critical # OR logic - either tag matches
Files without @tags: are excluded when --tags is active. Zero config, zero complexity.
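The OR semantics of --tags reduce to one set intersection. A minimal sketch of the documented behaviour:

```python
def file_matches(file_tags, selected):
    # Comma-separated OR matching; untagged files never match
    # while a filter is active.
    wanted = {t.strip() for t in selected.split(",") if t.strip()}
    return bool(set(file_tags) & wanted)
```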
Lightning-Fast Preconditions with Python Hooks
Stop wasting hours on brittle UI-based preconditions. With [SETUP] and [TEARDOWN] hooks you can inject test data directly into your database or call an API in pure Python - keeping your .hunt files crystal clear and your test runs dramatically faster.
@var: {email} = admin@example.com
@var: {password} = secret
[SETUP]
CALL PYTHON db_helpers.seed_admin_user
[END SETUP]
STEP 1: Login
NAVIGATE to https://myapp.com/login
Fill 'Email' field with '{email}'
Fill 'Password' field with '{password}'
Click the 'Sign In' button
VERIFY that 'Dashboard' is present.
[TEARDOWN]
CALL PYTHON db_helpers.clean_database
[END TEARDOWN]
Hooks run outside the browser: [SETUP] fires before the browser opens, and [TEARDOWN] fires in a finally block, so it always runs regardless of whether the test passed or failed. If setup fails, the mission is skipped and teardown is not called (there's nothing to clean up).
| Block | When it runs | Abort behaviour |
|---|---|---|
| [SETUP] | Before the browser launches | Failure skips mission + teardown |
| [TEARDOWN] | After the mission (pass or fail) | Failure is logged; does not override mission result |
The helper module is resolved relative to the .hunt file's directory first, then the CWD, then standard sys.path - no configuration needed.
Inline Python Calls
Need to fetch an OTP from the database mid-test? Or trigger a backend job before clicking "Refresh"? Call Python functions directly as action lines right in the middle of your UI flow.
STEP 2: OTP verification
Fill 'Email' field with 'test@manul.com'
Click the 'Send OTP' button
CALL PYTHON api_helpers.fetch_and_set_otp
Fill 'OTP' field with '{otp}'
Click the 'Login' button
VERIFY that 'Dashboard' is present.
The same module resolution rules apply as for [SETUP]/[TEARDOWN]: hunt file directory → CWD → sys.path. Functions must be synchronous. If the call fails, the mission stops immediately - just like any other failed step. No special syntax or block wrapping required.
Passing arguments to Python functions
CALL PYTHON now accepts optional positional arguments - static strings, unquoted tokens, and {var} placeholders resolved from the engine's runtime memory:
CALL PYTHON helpers.multiply "6" "7" into {product}
CALL PYTHON api.send_email "{user_email}" "Welcome!"
CALL PYTHON utils.concat 'a' 'b' 'c' into {result}
Arguments are tokenised with shlex.split() - single-quoted, double-quoted, and unquoted tokens are all accepted. {var} placeholders inside arguments are resolved from the engine's runtime memory; unresolved placeholders are kept as-is.
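Putting the two documented pieces together (shlex tokenisation plus placeholder substitution), argument parsing can be sketched like this; the exact order of operations inside the engine is an assumption:

```python
import re
import shlex

def parse_call_args(arg_string, memory):
    # Tokenise with shlex, then resolve {var} placeholders from runtime
    # memory; unresolved placeholders are kept as-is.
    def resolve(token):
        return re.sub(r"\{(\w+)\}",
                      lambda m: str(memory.get(m.group(1), m.group(0))),
                      token)
    return [resolve(tok) for tok in shlex.split(arg_string)]
```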
Capturing return values with into {var}
Append into {var_name} (or to {var_name}) to bind the function's return value directly into an in-mission variable:
CALL PYTHON api_helpers.fetch_otp into {dynamic_otp}
Fill 'Security Code' field with '{dynamic_otp}'
Combine arguments and capture in one line:
CALL PYTHON api_helpers.fetch_otp "{email}" into {otp}
Fill 'OTP' field with '{otp}'
The raw return value is converted to a string (str(return_value)) and stored under the variable name. It is then available for {placeholder} substitution in every subsequent step, exactly like variables populated by EXTRACT or @var:.
Global Lifecycle Hooks - Enterprise-Scale Test Orchestration
For multi-file test suites that need shared state - a global auth token, a seeded database, a per-run environment flag - create a manul_hooks.py file in the same directory as your .hunt files. The engine discovers and loads it automatically.
```python
# tests/manul_hooks.py
from manul_engine import before_all, after_all, before_group, after_group, GlobalContext

@before_all
def global_setup(ctx: GlobalContext) -> None:
    """Runs once before any hunt file starts."""
    ctx.variables["BASE_URL"] = "https://staging.example.com"
    ctx.variables["API_TOKEN"] = fetch_token_from_vault()

@after_all
def global_teardown(ctx: GlobalContext) -> None:
    """Always runs after all hunt files finish, pass or fail."""
    db.rollback_all_test_data()

@before_group(tag="smoke")
def seed_smoke(ctx: GlobalContext) -> None:
    """Runs before every hunt file tagged @tags: smoke."""
    ctx.variables["ORDER_ID"] = db.create_temp_order()

@after_group(tag="smoke")
def clean_smoke(ctx: GlobalContext) -> None:
    ctx.variables.pop("ORDER_ID", None)
```
Variables written to ctx.variables are injected into every matching mission as {placeholder}-ready data - identical to @var: declarations, but shared across all hunt files:
# tests/checkout.hunt
@tags: smoke
STEP 1: Checkout
NAVIGATE to '{BASE_URL}/checkout'
Fill 'API Token' field with '{API_TOKEN}'
DONE.
Hook execution order and failure semantics
| Hook | When it fires | Failure behaviour |
|---|---|---|
| @before_all | Once, before the first hunt file | Aborts the entire suite; @after_all still runs |
| @after_all | Once, after all hunts finish | Always runs; failure logged, does not override suite result |
| @before_group(tag=...) | Before each hunt file whose @tags: contains the tag | Failure skips that mission; @after_group still runs for it |
| @after_group(tag=...) | After each matching mission (pass or fail) | Always runs; failure logged, does not override mission result |
Parallel workers
When running with --workers N, @before_all runs in the orchestrator process and its ctx.variables are serialised as JSON into the MANUL_GLOBAL_VARS environment variable before worker subprocesses are spawned. Each worker deserialises them at startup, so {placeholder} substitution works identically in parallel and sequential modes.
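The round trip is plain JSON-over-environment. A sketch of both sides - the function names here are illustrative; only the MANUL_GLOBAL_VARS variable name comes from the docs:

```python
import json
import os

def export_global_vars(variables):
    # Orchestrator side: serialise ctx.variables before spawning workers.
    os.environ["MANUL_GLOBAL_VARS"] = json.dumps(variables)

def import_global_vars():
    # Worker side: deserialise at startup; empty dict when unset.
    return json.loads(os.environ.get("MANUL_GLOBAL_VARS", "{}"))
```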
Rule for adding pre-test setup: If a test scenario requires a database record, a seeded user, a valid auth token, or any per-suite environment state, always use @before_all or @before_group in manul_hooks.py. Never add setup steps to individual .hunt files - they are slow, brittle, and couple production UI flows to test infrastructure.
System Requirements
| | Minimum | Recommended |
|---|---|---|
| CPU | any | modern laptop |
| RAM | 4 GB | 8 GB |
| GPU | none | none |
| Model | none (heuristics-only) | qwen2.5:0.5b |
Installation
pip install manul-engine
playwright install chromium
Optional: Local LLM (Ollama)
Ollama is only needed for AI element-picker fallback or free-text mission planning.
pip install ollama # Python client library
ollama pull qwen2.5:0.5b # download model (requires Ollama app: https://ollama.com)
ollama serve
Quick Start
1. Create a hunt file
my_tests/smoke.hunt
@context: Demo smoke test
@title: smoke
@tags: smoke
@var: {name} = Ghost Manul
STEP 1: Fill text box form
NAVIGATE to https://demoqa.com/text-box
Fill 'Full Name' field with '{name}'
Click the 'Submit' button
VERIFY that '{name}' is present.
DONE.
2. Run it
# Run a specific hunt file
manul my_tests/smoke.hunt
# Run all *.hunt files in a folder
manul my_tests/
# Run headless
manul my_tests/ --headless
# Choose a different browser
manul my_tests/ --browser firefox
manul my_tests/ --headless --browser webkit
# Run an inline one-liner
manul "NAVIGATE to https://example.com Click the 'More' link DONE."
# Run multiple hunt files in parallel (4 concurrent browsers)
manul my_tests/ --workers 4
# Run only files tagged 'smoke'
manul my_tests/ --tags smoke
# Run only files tagged 'smoke' OR 'critical'
manul my_tests/ --tags smoke,critical
# Retry failed hunts up to 2 times
manul my_tests/ --retries 2
# Generate a standalone HTML report (saved to reports/manul_report.html)
manul my_tests/ --html-report
# Screenshots on failure + HTML report + retries (the full CI combo)
manul my_tests/ --retries 2 --screenshot on-fail --html-report
# Screenshots for every step (detailed forensic report)
manul my_tests/ --screenshot always --html-report
# Interactive debug mode (terminal) - pause before every step, confirm in terminal
manul --debug my_tests/smoke.hunt
# VS Code: place red-dot gutter breakpoints in any .hunt file, then run the Debug profile
# in Test Explorer - Next Step / Continue All / Stop (Stop dismisses QuickPick cleanly)
# Smart Page Scanner - scan a URL and generate a draft hunt file
manul scan https://example.com # outputs to tests/draft.hunt (tests_home)
manul scan https://example.com tests/my.hunt # explicit output file
manul scan https://example.com --headless # headless scan
VS Code: The Step Builder sidebar includes a Live Page Scanner - paste a URL and click Run Scan to invoke the scanner without opening a terminal. The generated draft.hunt opens automatically in the editor.
3. Python API
```python
import asyncio
from manul_engine import ManulEngine

async def main():
    manul = ManulEngine(headless=True)
    await manul.run_mission("""
        STEP 1: Fill text box form
        NAVIGATE to https://demoqa.com/text-box
        Fill 'Full Name' field with 'Ghost Manul'
        Click the 'Submit' button
        VERIFY that 'Ghost Manul' is present.
        DONE.
    """)

asyncio.run(main())
```
Hunt File Format
Hunt files are plain-text test scenarios with a .hunt extension.
Headers (optional)
@context: Strategic context passed to the LLM planner
@title: short-tag
@tags: smoke, auth, regression
@tags: declares a comma-separated list of arbitrary tag names. Use manul --tags smoke tests/ to run only files whose @tags: header contains at least one matching tag. Untagged files are excluded when --tags is active.
Comments
Lines starting with # are ignored.
System Keywords
| Keyword | Description |
|---|---|
| NAVIGATE to [URL] | Load a URL and wait for DOM settlement |
| WAIT [seconds] | Hard sleep |
| PRESS ENTER | Press Enter on the currently focused element (submit forms after filling a field) |
| PRESS [Key] | Press any key or combination globally (e.g. PRESS Escape, PRESS Control+A) |
| PRESS [Key] on [Target] | Press a key on a specific element (e.g. PRESS ArrowDown on 'Search Input') |
| RIGHT CLICK [Target] | Right-click an element to open a context menu |
| UPLOAD 'File' to 'Target' | Upload a file to a file input element (both must be quoted; path relative to .hunt file or CWD) |
| SCROLL DOWN | Scroll the main page down one viewport |
| EXTRACT [target] into {var} | Extract text into a memory variable |
| VERIFY that [target] is present | Assert text/element is visible |
| VERIFY that [target] is NOT present | Assert absence |
| VERIFY that [target] is DISABLED | Assert element is disabled |
| VERIFY that [target] is ENABLED | Assert element is enabled / interactable |
| VERIFY that [target] is checked | Assert checkbox state |
| SCAN PAGE | Scan the current page for interactive elements and print a draft .hunt to the console |
| SCAN PAGE into [file] | Same, and also write the draft to the given file (default: tests_home/draft.hunt) |
| DONE. | End the mission |
Python Hooks & Inline Python Calls
Optional [SETUP]/[TEARDOWN] blocks (placed at the top/bottom of the file) and inline CALL PYTHON steps (used anywhere in the numbered sequence) all share the same execution model.
[SETUP]
# Lines starting with # are ignored.
CALL PYTHON <module_path>.<function_name>
[END SETUP]
STEP 1: Authenticate
NAVIGATE to https://myapp.com
CALL PYTHON api_helpers.fetch_otp into {dynamic_otp}
Fill 'Security Code' with '{dynamic_otp}'
VERIFY that 'Dashboard' is present.
[TEARDOWN]
CALL PYTHON <module_path>.<function_name>
[END TEARDOWN]
Rules:
- Functions must be synchronous (async functions are explicitly rejected).
- A single [SETUP]/[TEARDOWN] block may contain multiple CALL PYTHON lines; they run sequentially - the first failure stops the block.
- An inline CALL PYTHON step that fails stops the mission immediately, just like any other failed step.
- Append into {var_name} (or to {var_name}) to a CALL PYTHON step to bind the function's return value into a variable: CALL PYTHON api.fetch_otp into {otp}. The value is converted to a string and available for {placeholder} substitution in all subsequent steps.
- Pass positional arguments after the dotted function name: CALL PYTHON helpers.multiply "6" "7" into {product}. Arguments are tokenised with shlex.split() and {var} placeholders are resolved from runtime memory.
- The module is searched in: hunt file directory → CWD → sys.path. No import configuration needed.
Interaction Steps
# Clicking
Click the 'Login' button
DOUBLE CLICK the 'Image'
# Typing
Fill 'Email' field with 'test@example.com'
Type 'hello' into the 'Search' field
# Dropdowns
Select 'Option A' from the 'Language' dropdown
# Checkboxes / Radios
Check the checkbox for 'Terms'
Uncheck the checkbox for 'Newsletter'
Click the radio button for 'Male'
# Hover & Drag
HOVER over the 'Menu'
Drag the element "Item" and drop it into "Box"
# Optional steps (non-blocking)
Click 'Close Ad' if exists
Variables
EXTRACT the price of 'Laptop' into {price}
VERIFY that '{price}' is present.
Variable Declaration
Declare static test data at the top of the file using @var:. These values are pre-populated into the runtime memory before any step runs and can be interpolated anywhere a variable placeholder {name} is accepted.
@var: {email} = admin@example.com
@var: {password} = secret123
STEP 1: Login
NAVIGATE to https://myapp.com/login
Fill 'Email' with '{email}'
Fill 'Password' with '{password}'
Click the 'Login' button
The surrounding {} braces in the declaration are optional: @var: email = ... and @var: {email} = ... are equivalent. Values are stripped of leading/trailing whitespace. Declared variables behave exactly like variables populated by EXTRACT and can be used interchangeably with them in downstream steps.
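Both accepted spellings reduce to one regular expression. An illustrative parser for the documented grammar - the engine's real parsing is not published:

```python
import re

def parse_var_declaration(line):
    # Braces around the name are optional; the value is whitespace-stripped.
    # Returns (name, value) for a declaration, or None for other lines.
    m = re.match(r"@var:\s*\{?(\w+)\}?\s*=\s*(.*)$", line.strip())
    if not m:
        return None
    return m.group(1), m.group(2).strip()
```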
Generate Hunt Files with AI Prompts
The prompts/ directory contains ready-to-use LLM prompt templates that let you generate complete .hunt test files automatically - no manual step writing needed.
| Prompt file | When to use |
|---|---|
| prompts/html_to_hunt.md | Paste a page's HTML source → get complete hunt steps |
| prompts/description_to_hunt.md | Describe a page or flow in plain text → get hunt steps |
Quick example - GitHub Copilot Chat
1. Open Copilot Chat (Ctrl+Alt+I).
2. Click the paperclip icon and attach prompts/html_to_hunt.md.
3. Paste your HTML in the chat and press Enter.
4. Save the response as tests/<name>.hunt and run manul tests/<name>.hunt.
See prompts/README.md for usage with ChatGPT, Claude, OpenAI/Anthropic API, and local Ollama.
Configuration
Create manul_engine_configuration.json in your project root - all settings are optional:
```json
{
  "model": "qwen2.5:0.5b",
  "headless": false,
  "browser": "chromium",
  "browser_args": [],
  "timeout": 5000,
  "nav_timeout": 30000,
  "ai_always": false,
  "ai_policy": "prior",
  "ai_threshold": null,
  "controls_cache_enabled": true,
  "controls_cache_dir": "cache",
  "semantic_cache_enabled": true,
  "log_name_maxlen": 0,
  "log_thought_maxlen": 0,
  "workers": 1,
  "tests_home": "tests",
  "auto_annotate": false,
  "retries": 0,
  "screenshot": "on-fail",
  "html_report": false
}
```
Set "model": null (or omit it) to disable AI entirely and run in heuristics-only mode.
Environment variables (MANUL_*) always override JSON values - useful for CI/CD:
export MANUL_HEADLESS=true
export MANUL_AI_THRESHOLD=0
export MANUL_MODEL=qwen2.5:0.5b
export MANUL_BROWSER=firefox
export MANUL_BROWSER_ARGS="--disable-gpu,--lang=uk"
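The precedence rule (env var over JSON over built-in default) can be sketched as follows; the engine's exact type-coercion rules for env strings are an assumption here:

```python
import os

def resolve_setting(key, json_config, default=None):
    # MANUL_* env vars win; fall back to the JSON file, then the default.
    raw = os.environ.get("MANUL_" + key.upper())
    if raw is None:
        return json_config.get(key, default)
    lowered = raw.lower()
    if lowered in ("true", "false"):   # crude bool coercion (assumption)
        return lowered == "true"
    try:
        return int(raw)                # numeric strings become ints
    except ValueError:
        return raw
```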
| Key | Default | Description |
|---|---|---|
| model | null | Ollama model name. null = heuristics-only (no AI) |
| headless | false | Hide browser window |
| browser | "chromium" | Browser engine: chromium, firefox, or webkit |
| browser_args | [] | Extra launch flags for the browser (array of strings) |
| ai_threshold | auto | Score threshold before LLM fallback. null = auto by model size |
| ai_always | false | Always use LLM picker, bypassing heuristic short-circuits |
| ai_policy | "prior" | "prior" (LLM may override score) or "strict" (enforce max-score) |
| controls_cache_enabled | true | Persistent per-site controls cache (file-based, survives between runs) |
| controls_cache_dir | "cache" | Cache directory (relative to CWD or absolute) |
| semantic_cache_enabled | true | In-session semantic cache; remembers resolved elements within a single run (+200,000 score boost) |
| timeout | 5000 | Default action timeout (ms) |
| nav_timeout | 30000 | Navigation timeout (ms) |
| log_name_maxlen | 0 | Truncate element names in logs (0 = no limit) |
| log_thought_maxlen | 0 | Truncate LLM thoughts in logs (0 = no limit) |
| workers | 1 | Number of hunt files to run concurrently (each gets its own browser) |
| tests_home | "tests" | Default directory for new hunt files and SCAN PAGE / manul scan output |
| auto_annotate | false | Automatically insert # Auto-Nav: comments in hunt files whenever the browser URL changes (not only on NAVIGATE steps). Page names are resolved from pages.json; unmapped URLs fall back to the full URL |
| retries | 0 | Number of times to retry a failed hunt file before marking it as failed (0 = no retries) |
| screenshot | "on-fail" | Screenshot capture mode: "none" (no screenshots), "on-fail" (default; failed steps only), "always" (every step) |
| html_report | false | Generate a self-contained HTML report after the run (reports/manul_report.html) |
Available Commands
| Category | Command Syntax |
|---|---|
| Navigation | NAVIGATE to [URL] |
| Input | Fill [Field] with [Text], Type [Text] into [Field] |
| Click | Click [Element], DOUBLE CLICK [Element], RIGHT CLICK [Element] |
| Selection | Select [Option] from [Dropdown], Check [Checkbox], Uncheck [Checkbox] |
| Mouse Action | HOVER over [Element], Drag [Element] and drop it into [Target] |
| Data Extraction | EXTRACT [Target] into {variable_name} |
| Verification | VERIFY that [Text] is present/absent, VERIFY that [Element] is checked/disabled/enabled |
| Page Scanner | SCAN PAGE, SCAN PAGE into [file] |
| Debug | DEBUG / PAUSE - pause execution at that step (use with --debug or VS Code gutter breakpoints) |
| Keyboard | PRESS ENTER, PRESS [Key], PRESS [Key] on [Element] |
| File Upload | UPLOAD 'File' to 'Element' |
| Flow Control | WAIT [seconds], SCROLL DOWN |
| Finish | DONE. |
Append 'if exists' or 'optional' to any step (outside quoted text) to make it non-blocking, e.g. Click 'Close Ad' if exists.
Battle-Tested
ManulEngine is verified against 1983 synthetic DOM tests across 37 test suites covering:
- Shadow DOM, invisible overlays, zero-pixel honeypots
- Same-origin iframe element routing and cross-frame resolution
- Normalised DOMScorer weighting hierarchy (data-qa > text > attributes)
- TreeWalker PRUNE-set filtering and checkVisibility() visibility gating
- Custom dropdowns, drag-and-drop, hover menus
- Legacy HTML (tables, fieldsets, unlabelled inputs)
- AI rejection & self-healing loops
- Persistent controls cache hit/miss cycles
Version: 0.0.9.0 · Status: Hunting...
File details
Details for the file manul_engine-0.0.9.0.tar.gz.
File metadata
- Download URL: manul_engine-0.0.9.0.tar.gz
- Upload date:
- Size: 121.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | f10b7252e94ea6654c6dba27120bc35abcf3cf0e2b9071e7df38f99277c33c35 |
| MD5 | 25a3aca5e8cd98a5ddbcbbf5a29b95b8 |
| BLAKE2b-256 | a895bddda0568ba83fcf7ac51c33d19d38e7cced7609c8bbe410376056e6e637 |
File details
Details for the file manul_engine-0.0.9.0-py3-none-any.whl.
File metadata
- Download URL: manul_engine-0.0.9.0-py3-none-any.whl
- Upload date:
- Size: 104.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | d5d415ce306de6a4eb222cbe5094073079f49009fd03a42a158c32d26af477d1 |
| MD5 | 2d9bd29e45e8ec4157b82cb436aa3d90 |
| BLAKE2b-256 | 513fe7c388eeaa9a2685a6a9844c4056f9b54746141252b794e3f5c8fee94b01 |