🏴☠️ UZDABRAZOR - The Anal-King of AI Browser Automation 🏴☠️
A beautifully fucked-up Skynet-powered browser automation AI Agent that harnesses neural brainfuck and machine learning chaos to give zero shits about anything while somehow still working perfectly. Smells like smegma but runs like a dream.
🔥 What This Beautiful Disaster Does
uzdabrazor is the most irreverent, crude, and effective neural brainfuck automation script you'll ever encounter. This digital Skynet harnesses machine learning chaos and turns your browser into an unstoppable cybernetic organism. Built on top of the excellent browser-use library, it provides:
- 2 simple neural overlords - Ollama (local/free) + OpenRouter (400+ cloud models with ONE API key)
- Complete Big Brother surveillance - Monitors every single machine learning brainfart like a paranoid NSA cyborg
- Terminator stealth mode - Uses patchright to dodge bot detection like a shapeshifting T-1000
- Organized digital anarchy - Crude language wrapped around Skynet-grade engineering
- Zero corporate Matrix bullshit - No enterprise nonsense, just pure cyberpunk functional chaos
- Simplified as fuck - One local provider, one cloud gateway. No more juggling 9 different API keys.
💻 System Requirements (Don't Skip This Shit)
- Python 3.11+ (anything older is prehistoric garbage)
- Chrome or Chromium (for the actual browser automation, duh)
- Ollama server (if you want free local AI, get it at https://ollama.ai)
- Patchright (for stealth mode: `pip install patchright && patchright install`)
🚀 Quick Start (For the Impatient)
```bash
# 1. Install the package globally
pipx install uzdabrazor

# 2. Create a .env file in your current working directory
#    (download .env.example from the repo or create your own with these variables):
cat > .env << EOF
OPENROUTER_API_KEY=sk-or-v1-your-key-here
OLLAMA_ENDPOINT=http://localhost:11434
EOF

# 3. Run with local ollama (free neural overlord, fuck paying corporate Skynet)
uzdabrazor --task "Go to example.com and tell me the page title"

# 4. Or use OpenRouter for cloud models
uzdabrazor --provider openrouter --model anthropic/claude-3.5-sonnet

# 5. Better yet, copy run.example.sh from the repo, shove it somewhere useful,
#    and customize it for your own automation needs
```
📝 Environment File Location
IMPORTANT: After installing via pipx, place your .env file in the directory where you run the uzdabrazor command.
The script loads environment variables from .env in your current working directory. Example:
```bash
# Create .env in your project folder
cd ~/my-automation-project
cat > .env << EOF
OPENROUTER_API_KEY=sk-or-v1-your-actual-key-here
OLLAMA_ENDPOINT=http://localhost:11434
EOF

# Run uzdabrazor from the same directory
uzdabrazor --task "your task here"
```
Grab .env.example from the GitHub repo for the full list of variables.
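To make the "current working directory" rule concrete, here's a minimal sketch of what loading `.env` from the cwd boils down to. The real script almost certainly uses a library like python-dotenv for this; this hand-rolled parser just illustrates the behavior, and `load_dotenv_cwd` is a name made up for the example:

```python
# Minimal sketch of .env loading from the current working directory.
# uzdabrazor most likely uses python-dotenv or similar; this parser
# only illustrates what "loads .env from your cwd" means.
import os
from pathlib import Path

def load_dotenv_cwd(filename: str = ".env") -> dict:
    """Parse KEY=VALUE lines from ./<filename> and export them."""
    env_path = Path.cwd() / filename
    loaded = {}
    if not env_path.exists():
        return loaded  # no .env here -> nothing to load
    for line in env_path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        loaded[key.strip()] = value.strip()
        os.environ.setdefault(key.strip(), value.strip())
    return loaded
```

Note that this only looks in the directory you run the command from, which is exactly why your `.env` has to live there.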
🤖 Supported Neural Overlords
Simplified to TWO providers because more options != better code:
| Provider | Description | Example Models |
|---|---|---|
| `ollama` | Local neural brainfuck (DEFAULT - FREE and PRIVATE) | `llama3.1`, `qwen3`, `gemma3`, etc. |
| `openrouter` | 400+ cloud models via ONE API key (OpenAI, Anthropic, Google, DeepSeek+) | `anthropic/claude-3.5-sonnet`, `openai/gpt-4-turbo`, `google/gemini-2.0-flash-exp` |
Why only two? Because managing 9 different API keys and endpoints is a clusterfuck. OpenRouter gives you access to literally every major model with one API key and adds only ~10-20% markup. Ollama gives you free local models. Done. Simple. Efficient.
OpenRouter Configuration
OpenRouter uses the standard `ChatOpenAI` class from LangChain with optimized settings:
- Temperature: Set to `0.0` for deterministic, consistent browser actions
- Max Tokens: Limited to `4096` for cost control and response quality
- Tool Choice: Set to `auto` for proper function calling support
- Base URL: `https://openrouter.ai/api/v1`
These settings ensure reliable browser automation while keeping costs predictable.
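For reference, here's a sketch of how the settings above map onto a `ChatOpenAI` constructor. The exact wiring inside uzdabrazor may differ; `openrouter_llm_kwargs` is a made-up helper name, and only the values listed above are from the docs:

```python
# Sketch: mapping the documented OpenRouter settings onto LangChain's
# ChatOpenAI constructor. Not uzdabrazor's actual source.
import os

def openrouter_llm_kwargs(model: str) -> dict:
    return {
        "model": model,
        "api_key": os.environ.get("OPENROUTER_API_KEY"),
        "base_url": "https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
        "temperature": 0.0,   # deterministic, repeatable browser actions
        "max_tokens": 4096,   # cap response size for cost control
    }
    # tool_choice="auto" is typically applied when tools are bound,
    # not in the constructor itself.

# With langchain-openai installed, you would then do something like:
# from langchain_openai import ChatOpenAI
# llm = ChatOpenAI(**openrouter_llm_kwargs("anthropic/claude-3.5-sonnet"))
```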
🎯 Usage Examples
Basic Destruction
```bash
# Default: ollama (because fuck paying for AI)
uzdabrazor --task "Go to GitHub and find trending repositories"

# Use OpenRouter for Claude
uzdabrazor --provider openrouter --model anthropic/claude-3.5-sonnet --task "Analyze this website"

# Use OpenRouter for GPT-4
uzdabrazor --provider openrouter --model openai/gpt-4-turbo --task "Analyze this website"
```
Advanced Fuckery
```bash
# Headless stealth mode
uzdabrazor --headless --provider openrouter --model anthropic/claude-3.5-sonnet

# Custom browser and window size
uzdabrazor --browser-bin-path /usr/bin/google-chrome-beta --window-width 1920 --window-height 1080

# Connect to existing browser
google-chrome --remote-debugging-port=9222 &
uzdabrazor --cdp-url http://localhost:9222

# Different models for main task vs extraction (cost optimization strategy)
# MAIN LLM: complex reasoning and decision-making (use powerful models)
# EXTRACTION LLM: data parsing and text extraction (use fast, cheap models)
uzdabrazor --provider openrouter --model anthropic/claude-3.5-sonnet --extraction-model openai/gpt-4o-mini

# Docker mode with no security (because we live dangerously)
uzdabrazor --dockerize --headless --no-security --provider ollama

# Custom output directory and logging
uzdabrazor --history-dir ~/automation-logs --log-level debug
```
Vision Control
```bash
# Disable vision to save tokens (blind destruction is still destruction)
uzdabrazor --no-vision

# Low/high detail vision
uzdabrazor --vision-detail low
uzdabrazor --vision-detail high
```
🔧 Command Line Arguments
| Flag | Description | Default |
|---|---|---|
| `--provider` | AI provider to use | `ollama` |
| `--model` | Specific model name | `llama3.1` |
| `--extraction-provider` | Separate AI for page extraction (save cash) | Same as `--provider` |
| `--extraction-model` | Model for extraction tasks | Same as `--model` |
| `--task` | Task for the AI to perform | Stealth test |
| `--headless` | Invisible browser mode | `false` |
| `--no-stealth` | Disable stealth (live dangerously) | `false` |
| `--no-vision` | Disable AI vision | Vision enabled |
| `--vision-detail` | Vision detail level (auto/low/high) | `auto` |
| `--window-width` | Browser width | `1920` |
| `--window-height` | Browser height | `1080` |
| `--browser-bin-path` | Custom browser executable | None |
| `--cdp-url` | Connect to existing browser | None |
| `--browser-profile-dir` | Custom profile directory | None |
| `--no-security` | Disable security features | `false` |
| `--log-level` | Logging verbosity (debug/info/warning...) | `info` |
| `--debug-host` | Debug server host | `localhost` |
| `--debug-port` | Debug server port | `9222` |
| `--dockerize` | Docker-optimized flags | `false` |
| `--skip-llm-api-key-verif` | Skip API key validation (for testing/debugging) | `false` |
| `--history-dir` | Output directory | `/tmp/agent_history` |
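A subset of the flags above could be wired up with argparse roughly like this. This is a sketch built from the table, not uzdabrazor's actual source, so the internals may differ:

```python
# Sketch of how some of the documented flags map onto argparse.
# Flag names and defaults come from the table above; everything else
# is illustrative.
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="uzdabrazor")
    p.add_argument("--provider", default="ollama", choices=["ollama", "openrouter"])
    p.add_argument("--model", default="llama3.1")
    p.add_argument("--task", default=None, help="Task for the AI to perform")
    p.add_argument("--headless", action="store_true")       # false unless passed
    p.add_argument("--no-stealth", action="store_true")     # stealth on by default
    p.add_argument("--vision-detail", default="auto", choices=["auto", "low", "high"])
    p.add_argument("--window-width", type=int, default=1920)
    p.add_argument("--window-height", type=int, default=1080)
    p.add_argument("--history-dir", default="/tmp/agent_history")
    return p
```

Boolean flags like `--headless` use `store_true`, which is why the table shows their default as `false`.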
🕵️ Surveillance Features (Big Brother Is Watching)
This beautiful bastard monkey-patches all LLM providers to log every single AI call. You'll see exactly when your neural overlords are thinking:
What Gets Logged:
- Every `ainvoke()` call to any LLM provider
- Which model is being used
- How many messages are in the prompt
- What output format is requested
Example Output:
```
🦙 OLLAMA AINVOKE DETECTED! Model: llama3.1 is spitting some local llama wisdom
📝 Processing 5 messages with output_format: None
🔀 OPENROUTER AINVOKE DETECTED! Model: anthropic/claude-3.5-sonnet is routing through 400+ models like a fucking switchboard
📝 Processing 3 messages with output_format: <class 'ActionResult'>
🦙 OLLAMA AINVOKE DETECTED! Model: qwen3:8b is spitting some local llama wisdom
📝 Processing 12 messages with output_format: None
```
This shit is useful for:
- Debugging which model is actually being called
- Understanding token usage patterns
- Catching when browser-use makes unexpected AI calls
- Feeling like a paranoid NSA cyborg monitoring everything
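The monkey-patching trick itself looks roughly like this. `FakeOllamaChat` is a toy stand-in for the real provider classes (the actual patch targets real LLM client classes and their exact signatures, which may differ):

```python
# Toy sketch of the surveillance trick: replace a provider class's async
# ainvoke() with a wrapper that logs every call before forwarding it.
import asyncio
import functools

class FakeOllamaChat:
    """Stand-in for a real LLM provider class."""
    model = "llama3.1"
    async def ainvoke(self, messages, output_format=None):
        return f"response to {len(messages)} messages"

def patch_ainvoke(cls, tag):
    original = cls.ainvoke
    @functools.wraps(original)
    async def spying_ainvoke(self, messages, output_format=None):
        # Log the call, then hand off to the real method unchanged.
        print(f"{tag} AINVOKE DETECTED! Model: {self.model}")
        print(f"📝 Processing {len(messages)} messages with output_format: {output_format}")
        return await original(self, messages, output_format=output_format)
    cls.ainvoke = spying_ainvoke  # monkey-patch in place

patch_ainvoke(FakeOllamaChat, "🦙 OLLAMA")
result = asyncio.run(FakeOllamaChat().ainvoke(["hi", "there"]))
```

Because the patch lands on the class, every instance anywhere in the process gets spied on, which is exactly what makes this useful for catching unexpected AI calls.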
📁 Output Files
Each run generates two files in your `--history-dir`:
- `uzdabrazor_{timestamp}_{unique_id}.gif` - Visual recording of all browser actions
- `uzdabrazor_{timestamp}_{unique_id}.json` - Complete conversation history and task results
Example filenames:
```
uzdabrazor_20250101_235959_a1b2c3d4.gif
uzdabrazor_20250101_235959_a1b2c3d4.json
```
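The naming scheme can be reproduced like this. Only the `{timestamp}_{unique_id}` pattern comes from the docs; the real code may generate the id differently, and `output_basename` is a made-up helper:

```python
# Sketch of the output naming scheme: prefix, timestamp, short unique id,
# with .gif and .json siblings. Illustrative only.
import uuid
from datetime import datetime

def output_basename(prefix: str = "uzdabrazor") -> str:
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    unique_id = uuid.uuid4().hex[:8]  # 8 hex chars, like a1b2c3d4
    return f"{prefix}_{timestamp}_{unique_id}"

base = output_basename()
gif_path, json_path = f"{base}.gif", f"{base}.json"
```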
🏴☠️ Stealth Mode (Dodging Bot Detection Like a Boss)
Stealth mode uses patchright to patch browser binaries and evade detection:
```bash
pip install patchright
patchright install  # Downloads patched browsers
```
What Patchright Does:
- Removes webdriver signals that scream "I'M A BOT!"
- Modifies browser fingerprints
- Spoofs navigator properties
- Bypasses common bot detection techniques
Default: Stealth is ENABLED (use `--no-stealth` to disable if you're feeling suicidal)
⚙️ Environment Variables
Copy the example file and fill in your fucking API keys:
```bash
cp .env.example .env
# Then edit .env with your actual keys
```
Check .env.example in the repo for the full list of variables. Main ones:
- `OPENROUTER_API_KEY` - OpenRouter API key (for 400+ cloud models: OpenAI, Anthropic, Google, DeepSeek, etc.)
- `OLLAMA_ENDPOINT` - Local Ollama server (default: `http://localhost:11434`)
🐛 Troubleshooting (When Shit Breaks)
Chrome not found:
- Install Chrome or Chromium, you genius
- Or use `--browser-bin-path` to point to your browser

Ollama connection refused:
- Make sure the Ollama server is running: `ollama serve`
- Check the endpoint: `--provider ollama` uses `http://localhost:11434` by default

API key errors:
- Check your keys actually work (make test API calls)
- OpenRouter keys start with `sk-or-v1-`
- Get your OpenRouter key at https://openrouter.ai/keys

Patchright issues:
- Run `patchright install` to download patched browsers
- Check if you have write permissions

CDP connection fails:
- Make sure the browser is running: `google-chrome --remote-debugging-port=9222`
- You can't use `--browser-bin-path` AND `--cdp-url` together (pick one, genius)
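To check CDP connectivity before blaming uzdabrazor, you can hit Chrome's standard `/json/version` debugging endpoint, which any CDP-enabled browser serves. A minimal sketch (`check_cdp` is a made-up helper, not part of the tool):

```python
# Quick CDP sanity check: a browser started with --remote-debugging-port
# serves JSON metadata at /json/version. If this fails, --cdp-url will too.
import json
import urllib.error
import urllib.request

def check_cdp(url: str = "http://localhost:9222") -> str:
    try:
        with urllib.request.urlopen(f"{url}/json/version", timeout=3) as resp:
            info = json.load(resp)
        return f"CDP reachable: {info.get('Browser', 'unknown browser')}"
    except (urllib.error.URLError, OSError) as exc:
        return f"CDP NOT reachable at {url}: {exc}"
```

Run it with the same URL you pass to `--cdp-url`; a "NOT reachable" result means the browser isn't listening there.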
🔥 Why This Exists
Because browser automation doesn't have to be boring corporate shit.
💬 Final Words
Peen goes in vageen. Code works. End of story.
File details
Details for the file uzdabrazor-1.0.5.tar.gz.
File metadata
- Download URL: uzdabrazor-1.0.5.tar.gz
- Upload date:
- Size: 19.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `c8a1751a774de84b6f3e6751f9b344fcda3e35106929b433e3db781825111f44` |
| MD5 | `11b7f0db6f6e8dfa07187c56d47c8459` |
| BLAKE2b-256 | `cbe9f799a88b2b1685fbfa97774cc0d01a06c24ee7652a2439a3403f44124dcc` |
File details
Details for the file uzdabrazor-1.0.5-py3-none-any.whl.
File metadata
- Download URL: uzdabrazor-1.0.5-py3-none-any.whl
- Upload date:
- Size: 20.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `74fece9e01fa6b94bf9d93f32818be08c31fbb8727e089bdcb2c9065ae14ea0e` |
| MD5 | `5db0845bff472289b16cae703c6fc274` |
| BLAKE2b-256 | `064c1664305f1459115eced2c4d548ca1c55b712cb91faae96847467f115152c` |