# 🔄 smart-selfhealing-rbfw

Self-healing test automation library for Robot Framework with AI-powered locator fixing 🤖
## 🌟 Features

- 🔧 **Automatic Locator Healing** - automatically fixes broken locators in real time
- 🧠 **LLM-Powered Intelligence** - uses GPT, Claude, Gemini, or local LLMs (Ollama) for smart locator generation
- 👁️ **Vision Support** - optional screenshot analysis for better accuracy
- 📚 **Multi-Library Support** - works with SeleniumLibrary, Browser Library (Playwright), and AppiumLibrary
- 📊 **Detailed Reports** - beautiful HTML reports showing all fixed locators
- 🎯 **Zero Code Changes** - just add the listener to your test suite
- 🔒 **Secure** - all API keys stored in environment variables
## 📦 Installation

```shell
pip install smart-selfhealing-rbfw
```
## 🚀 Quick Start

### 1. Basic Setup

Add the SelfHealing library to your Robot Framework test:

```robotframework
*** Settings ***
Library    SelfHealing    ai_locator_llm=True
```

### 2. Configure Environment Variables

Set up your LLM provider (choose one):

**Option A: OpenAI (GPT)**

```shell
export LLM_API_KEY=your-openai-api-key
export LOCATOR_AI_MODEL=gpt-4o-mini
```

**Option B: Google Gemini**

```shell
export GEMINI_API_KEY=your-gemini-api-key
export LOCATOR_AI_MODEL=gemini/gemini-1.5-flash
```

**Option C: Local Ollama (free!)**

```shell
export LLM_API_BASE=http://localhost:11434
export LOCATOR_AI_MODEL=ollama_chat/llama3.1
```

### 3. Run Your Tests

```shell
robot tests/
```

That's it! 🎉 The library will automatically heal broken locators and generate a report.
## 📖 Usage Examples

### Example 1: Browser Library (Playwright)

```robotframework
*** Settings ***
Library           Browser    timeout=5s
Library           SelfHealing    ai_locator_llm=True
Suite Setup       New Browser    chromium    headless=False
Test Setup        New Context    viewport={'width': 1280, 'height': 720}
Test Teardown     Close Context
Suite Teardown    Close Browser

*** Test Cases ***
Login Test
    New Page    https://example.com/login
    Fill Text    id=username    testuser
    Fill Text    id=password    testpass123
    Click    id=login-button
    Get Text    css=.welcome-message    *=    Welcome
```
### Example 2: SeleniumLibrary

```robotframework
*** Settings ***
Library           SeleniumLibrary    timeout=5s
Library           SelfHealing    ai_locator_llm=True
Suite Setup       Open Browser    https://example.com    chrome
Suite Teardown    Close All Browsers

*** Test Cases ***
Search Product
    Input Text    id=search-box    laptop
    Click Element    id=search-button
    Wait Until Page Contains    Search Results
```
### Example 3: With Vision Support (Recommended for Complex UIs)

```robotframework
*** Settings ***
Library    SelfHealing
...    ai_locator_llm=True
...    ai_locator_visual=True

*** Test Cases ***
Complex UI Test
    # Vision mode captures screenshots to help the LLM understand the page better
    Click Element    id=dynamic-button
```
## ⚙️ Configuration Options

### Library Import Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `ai_locator_llm` | bool | `True` | Enable AI-powered locator healing using an LLM |
| `ai_locator_visual` | bool | `False` | Enable screenshot analysis (requires a vision-capable model) |
| `ai_locator_database` | bool | `False` | Store healed locators in a database for reuse |
| `ai_locator_database_file` | str | `"locator_db.json"` | Path to the locator database file |

Note: Healing always runs in realtime mode (locators are fixed immediately when they fail).
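To illustrate what the `ai_locator_database` option enables, here is a minimal sketch of a healed-locator JSON store. The keys and structure below are assumptions for illustration; the actual format the library writes to `locator_db.json` may differ.

```python
import json
from pathlib import Path

# Hypothetical structure for a healed-locator store; the real
# locator_db.json format used by smart-selfhealing-rbfw may differ.
db_file = Path("locator_db.json")

entry = {
    "keyword": "Click Element",
    "broken_locator": "id=login-button",
    "healed_locator": "css=button[data-testid='login']",
}

# Append the entry, creating the file on first use.
records = json.loads(db_file.read_text()) if db_file.exists() else []
records.append(entry)
db_file.write_text(json.dumps(records, indent=2))

# On the next failure of the same locator, a cached fix can be
# looked up before calling the LLM at all.
cached = {r["broken_locator"]: r["healed_locator"] for r in records}
print(cached["id=login-button"])  # css=button[data-testid='login']
```

Reusing cached fixes this way avoids repeated LLM calls for locators that break the same way on every run.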
### Environment Variables

| Variable | Required | Description | Example |
|---|---|---|---|
| `LLM_API_KEY` | Yes* | API key for the LLM provider (OpenAI, etc.) | `sk-...` |
| `GEMINI_API_KEY` | Yes* | API key for Google Gemini | `AIza...` |
| `LOCATOR_AI_MODEL` | Yes | Model for generating locators (core healing) | `gpt-4o-mini`, `gemini/gemini-1.5-flash` |
| `LOCATOR_AI_MODEL` | Optional | Model for text analysis (visual healing only) | `gpt-4o-mini`, `gemini/gemini-1.5-flash` |
| `VISUAL_AI_MODEL` | Optional | Model with vision capability (visual healing only) | `gpt-4o`, `gemini/gemini-1.5-pro` |
| `LLM_API_BASE` | No | Custom API endpoint | `http://localhost:11434` (Ollama) |

\*Either `LLM_API_KEY` or `GEMINI_API_KEY` is required, depending on the provider.

⚠️ **Configuration validation:**

- When AI features are enabled, the library validates the required environment variables at startup.
- Missing critical configurations trigger warnings in the log file and console.
- `LOCATOR_AI_MODEL` is only required when `ai_locator_visual=True`.
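The same validation rules can be checked up front, for example as a CI preflight step before launching the suite. This is a hypothetical helper based on the table above, not the library's own validation code; it also treats a local endpoint (`LLM_API_BASE`, e.g. Ollama) as a substitute for an API key, matching the Ollama setup in Quick Start.

```python
import os

def preflight(visual: bool = False) -> list[str]:
    """Return the environment variables missing for AI healing.

    Mirrors the documented rules: LOCATOR_AI_MODEL is needed for core
    healing, one of LLM_API_KEY / GEMINI_API_KEY must be set (unless a
    local endpoint is configured via LLM_API_BASE), and VISUAL_AI_MODEL
    is only needed when vision mode is enabled.
    """
    missing = []
    if not os.getenv("LOCATOR_AI_MODEL"):
        missing.append("LOCATOR_AI_MODEL")
    if not (os.getenv("LLM_API_KEY") or os.getenv("GEMINI_API_KEY")
            or os.getenv("LLM_API_BASE")):
        missing.append("LLM_API_KEY or GEMINI_API_KEY")
    if visual and not os.getenv("VISUAL_AI_MODEL"):
        missing.append("VISUAL_AI_MODEL")
    return missing

# Demo: core healing configured, vision model deliberately unset.
os.environ["LOCATOR_AI_MODEL"] = "gpt-4o-mini"
os.environ["LLM_API_KEY"] = "sk-example"
os.environ.pop("VISUAL_AI_MODEL", None)
print(preflight(visual=True))   # ['VISUAL_AI_MODEL']
print(preflight(visual=False))  # []
```

Failing fast in CI on missing configuration is cheaper than discovering it mid-suite through healing warnings.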
## 🧠 Supported LLM Providers

This library uses LiteLLM for LLM integration, supporting 100+ providers.

### Popular Choices

| Provider | Model Example | Vision Support | Cost |
|---|---|---|---|
| OpenAI | `gpt-4o-mini`, `gpt-4o` | ✅ Yes | 💰 Paid |
| Google Gemini | `gemini/gemini-1.5-flash`, `gemini/gemini-1.5-pro` | ✅ Yes | 💰 Paid / free tier |
| Anthropic Claude | `claude-3-5-sonnet-20241022` | ✅ Yes | 💰 Paid |
| Ollama (local) | `ollama_chat/llama3.1`, `ollama_chat/llama3.2-vision` | ✅ Yes | 🆓 Free |

See the full provider list: https://docs.litellm.ai/docs/providers
## 📊 Self-Healing Reports

After test execution, click the SELF-HEALING button in your Robot Framework report to view:

- **Fixed Locators** - all healed locators with a before/after comparison
- **Success Rate** - percentage of successful healings
- **Recommendations** - suggested permanent fixes for your test code
## 🔒 Security Best Practices

⚠️ **Never commit API keys to version control!**

### Recommended Setup

1. Use environment variables:

```shell
export LLM_API_KEY=your-secret-key
```

2. Or use a `.env` file:

```
# .env (add to .gitignore!)
LLM_API_KEY=your-secret-key
LOCATOR_AI_MODEL=gpt-4o-mini
```

3. Load in your test:

```robotframework
*** Settings ***
Library    SelfHealing    ai_locator_llm=True
```
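Since the library reads its configuration from the process environment, the `.env` file has to be exported before `robot` starts, e.g. via python-dotenv (`dotenv run -- robot tests/`) or your shell. As an illustration of that loading step, here is a minimal stand-in for python-dotenv's `load_dotenv()`; it is a sketch for understanding, not part of the library.

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, '#' comments ignored.
    Like load_dotenv(), existing environment variables are kept."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo: write the .env from the section above, then load it.
with open(".env", "w") as fh:
    fh.write("# .env (add to .gitignore!)\n")
    fh.write("LLM_API_KEY=your-secret-key\n")
    fh.write("LOCATOR_AI_MODEL=gpt-4o-mini\n")

# Start from a clean slate so the demo is deterministic.
os.environ.pop("LLM_API_KEY", None)
os.environ.pop("LOCATOR_AI_MODEL", None)
load_env_file()
print(os.environ["LOCATOR_AI_MODEL"])  # gpt-4o-mini
```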
## 🛠️ Troubleshooting

### Common Issues

**1. "LOCATOR_AI_MODEL not set" error**

Solution: set the environment variable:

```shell
export LOCATOR_AI_MODEL=gpt-4o-mini
```

**2. Authentication error**

Solution: verify your API key is correct:

```shell
export LLM_API_KEY=your-actual-api-key
```

**3. Healing not working**

Solution: ensure the library is imported with healing enabled:

```robotframework
Library    SelfHealing    ai_locator_llm=True
```

**4. Vision mode fails**

Solution: make sure your model supports vision:

- ✅ Works: `gpt-4o`, `gemini-1.5-pro`, `claude-3-5-sonnet`
- ❌ Won't work: `gpt-3.5-turbo`, `gemini-flash-latest`
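A small guard based on the model lists above can catch this misconfiguration before a run. This is a hypothetical helper using a name-prefix heuristic, not part of the library, and the list should be extended for other providers.

```python
# Vision-capable model name prefixes, taken from the lists above.
VISION_CAPABLE = ("gpt-4o", "gemini-1.5-pro", "claude-3-5-sonnet",
                  "llama3.2-vision")

def supports_vision(model: str) -> bool:
    """Heuristic check that a model name looks vision-capable."""
    name = model.split("/")[-1]  # drop provider prefixes like 'gemini/'
    return name.startswith(VISION_CAPABLE)

print(supports_vision("gpt-4o"))                 # True
print(supports_vision("gemini/gemini-1.5-pro"))  # True
print(supports_vision("gpt-3.5-turbo"))          # False
```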
## 🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## 📝 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

## 👤 Author

Hieu La

- Email: hieuld@smartosc.com
- Company: SmartOSC

## 🙏 Acknowledgments

- The Robot Framework community
- LiteLLM for unified LLM integration
- All contributors and users

## 📚 Related Resources

- Robot Framework Documentation
- SeleniumLibrary
- Browser Library (Playwright)
- LiteLLM Documentation
- GitHub Repository

Made with ❤️ by Hieu La