
Robot Framework Selfhealing Agents


A Robot Framework library that automatically repairs failing Robot Framework tests using Large Language Models (LLMs). It currently heals broken locators, with upcoming releases expanding to additional common failure modes.

Note: This repository does not collect or store any user data. However, when using LLMs, data privacy cannot be fully guaranteed by this repository alone. If you are working with sensitive information, ensure you use a trusted provider and/or connect to a secure endpoint that enforces data privacy and prevents your data from being used for model training.
The repository is designed to be flexible, allowing you to integrate different providers and/or add custom endpoints as needed.


✨ Features

  • 🧭 Heals broken locators automatically
  • 📂 Supports test suites with external resource files
  • ⏱️ Runtime hooking keeps tests running after locator fixes
  • 📝 Generates reports with healing steps, repaired files and diffs
  • 🤖 LLM multi-agent workflow (extensible for more error types)
  • 🌐 Supports Browser & Selenium (Appium planned)
  • 🔌 Supports OpenAI, Azure OpenAI, LiteLLM and pluggable providers
  • 🧰 RF Library for easy test suite integration
  • 🔍 Monitor your agents with Logfire

⚙️ Installation

pip install robotframework-selfhealing-agents

🛠️ Setup

To configure the project, create a .env file in the root directory of the repository. This file should contain all required environment variables for your chosen LLM provider. If a .env file is impractical (for example, inside a Docker container that restricts its use), there are two alternatives: a plain file named envfile, or passing the variables directly into the process environment of the container. For details, see the _robust_env_load function in listener.py.

Minimal Example (OpenAI)

For a quick start with the default settings and OpenAI as the provider, your .env file only needs:

OPENAI_API_KEY="your-openai-api-key"

Custom Endpoint

If you need to use a custom endpoint (for example, for compliance or privacy reasons), add the BASE_URL variable:

OPENAI_API_KEY="your-openai-api-key"
BASE_URL="your-endpoint-to-connect-to"

Azure OpenAI Example

To use Azure as your LLM provider, specify the following variables:

AZURE_API_KEY="your-azure-api-key"
AZURE_API_VERSION="your-azure-api-version"
AZURE_ENDPOINT="your-azure-endpoint"

ORCHESTRATOR_AGENT_PROVIDER="azure"
LOCATOR_AGENT_PROVIDER="azure"

LiteLLM Example

To use LiteLLM as your LLM provider, specify the following variables:

LITELLM_API_KEY="your-litellm-api-key"
BASE_URL="your-endpoint-to-connect-to"

ORCHESTRATOR_AGENT_PROVIDER="litellm"
LOCATOR_AGENT_PROVIDER="litellm"

These minimal examples demonstrate how to run the project with different providers using the default settings. For more details on the available configuration options (such as selecting a specific model), please refer to the "Configuration" section.


🚀 Usage

After installing the package and adding the necessary parameters to your .env file, simply add the Library SelfhealingAgents to your test suite(s).

*** Settings ***
Library    Browser    timeout=5s
Library    SelfhealingAgents
Suite Setup    New Browser    browser=${BROWSER}    headless=${HEADLESS}
Test Setup    New Context    viewport={'width': 1280, 'height': 720}
Test Teardown    Close Context
Suite Teardown    Close Browser    ALL

*** Variables ***
${BROWSER}    chromium
${HEADLESS}    True

*** Test Cases ***
Login with valid credentials
    New Page    https://automationintesting.com/selenium/testpage/
    Set Browser Timeout    1s
    Fill Text    id=first_name    tom
    Fill Text    id=last_name    smith
    Select Options By    id=usergender    label    Male
    Click    id=red
    Fill Text    id=tell_me_more    More information
    Select Options By    id=user_continent    label    Africa
    Click    id=i_do_nothing

After running your test suite(s), you'll find a "SelfHealingReports" directory in your current working directory containing detailed logs and output reports. Four types of reports are generated:

  1. Action Log: Summarizes all healing steps performed and their locations within your tests
  2. Healed Files: Provides repaired copies of your test suite(s)
  3. Diff Files: Shows a side-by-side comparison of the original and healed files, with differences highlighted for easy review
  4. Summary: A JSON summary file giving a quick overview of the number of healing steps, the affected files, etc.

Action Log

(screenshot of the generated action log)

Healed File

*** Settings ***
Library    Browser    timeout=5s
Library    SelfhealingAgents
Suite Setup    New Browser    browser=${BROWSER}    headless=${HEADLESS}
Test Setup    New Context    viewport={'width': 1280, 'height': 720}
Test Teardown    Close Context
Suite Teardown    Close Browser    ALL

*** Variables ***
${BROWSER}    chromium
${HEADLESS}    True

*** Test Cases ***
Login with valid credentials
    New Page    https://automationintesting.com/selenium/testpage/
    Set Browser Timeout    1s
    Fill Text    css=input[id='firstname']    tom
    Fill Text    css=input[id='surname']    smith
    Select Options By    css=select[id='gender']    label    Male
    Click    id=red
    Fill Text    css=textarea[placeholder='Tell us some fun stuff!']    More information
    Select Options By    css=select#continent    label    Africa
    Click    css=button#submitbutton

Diff File

(screenshot of the generated diff file)

Summary JSON

{
  "total_healing_events": 6,
  "nr_affected_tests": 1,
  "nr_affected_files": 1,
  "affected_tests": [
    "Login with valid credentials"
  ],
  "affected_files": [
    "ait.robot"
  ]
}
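Because the summary is plain JSON, it is easy to consume in CI scripts. A minimal sketch that condenses the fields shown above into a one-line overview (the helper name is ours, not part of the library):

```python
import json


def summarize_healing(summary_json: str) -> str:
    """Turn the summary JSON from the SelfHealingReports directory
    into a one-line overview (field names as in the example above)."""
    data = json.loads(summary_json)
    return (
        f"{data['total_healing_events']} healing events across "
        f"{data['nr_affected_tests']} test(s) in "
        f"{data['nr_affected_files']} file(s): "
        f"{', '.join(data['affected_files'])}"
    )


example = """
{
  "total_healing_events": 6,
  "nr_affected_tests": 1,
  "nr_affected_files": 1,
  "affected_tests": ["Login with valid credentials"],
  "affected_files": ["ait.robot"]
}
"""
print(summarize_healing(example))
```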

Configuration

Below is an example .env file containing all available parameters:

OPENAI_API_KEY="your-openai-api-key"
LITELLM_API_KEY="your-litellm-api-key"
AZURE_API_KEY="your-azure-api-key"
AZURE_API_VERSION="your-azure-api-version"
AZURE_ENDPOINT="your-azure-endpoint"
BASE_URL="your-base-url"

ENABLE_SELF_HEALING=True
USE_LLM_FOR_LOCATOR_GENERATION=True
MAX_RETRIES=3
REQUEST_LIMIT=5
TOTAL_TOKENS_LIMIT=6000
ORCHESTRATOR_AGENT_PROVIDER="openai"
ORCHESTRATOR_AGENT_MODEL="gpt-4o-mini"
ORCHESTRATOR_AGENT_TEMPERATURE=0.1
LOCATOR_AGENT_PROVIDER="openai"
LOCATOR_AGENT_MODEL="gpt-4o-mini"
LOCATOR_AGENT_TEMPERATURE=0.1
LOCATOR_TYPE="css"
REPORT_DIRECTORY="full-path-for-output-files"
IS_RERUN_ACTIVATED=False

📝 Configuration Parameters

| Name | Default | Required? | Description |
|------|---------|-----------|-------------|
| OPENAI_API_KEY | None | If using OpenAI | Your OpenAI API key |
| LITELLM_API_KEY | None | If using LiteLLM | Your LiteLLM API key |
| AZURE_API_KEY | None | If using Azure | Your Azure OpenAI API key |
| AZURE_API_VERSION | None | If using Azure | Azure OpenAI API version |
| AZURE_ENDPOINT | None | If using Azure | Azure OpenAI endpoint |
| BASE_URL | None | No | Endpoint to connect to (if required) |
| ENABLE_SELF_HEALING | True | No | Enable or disable SelfhealingAgents |
| USE_LLM_FOR_LOCATOR_GENERATION | True | No | If True, the LLM generates locator suggestions directly (see note below) |
| MAX_RETRIES | 3 | No | Number of self-healing attempts per locator |
| REQUEST_LIMIT | 5 | No | Internal agent-level limit for valid LLM response attempts |
| TOTAL_TOKENS_LIMIT | 6000 | No | Maximum input tokens per LLM request |
| ORCHESTRATOR_AGENT_PROVIDER | 'openai' | No | Provider for the orchestrator agent ("openai", "azure" or "litellm") |
| ORCHESTRATOR_AGENT_MODEL | 'gpt-4o-mini' | No | Model for the orchestrator agent |
| ORCHESTRATOR_AGENT_TEMPERATURE | 0.1 | No | Orchestrator model temperature |
| LOCATOR_AGENT_PROVIDER | 'openai' | No | Provider for the locator agent ("openai", "azure" or "litellm") |
| LOCATOR_AGENT_MODEL | 'gpt-4o-mini' | No | Model for the locator agent |
| LOCATOR_AGENT_TEMPERATURE | 0.1 | No | Locator model temperature |
| LOCATOR_TYPE | 'css' | No | Restricts the agent's locator suggestions to the given type |
| REPORT_DIRECTORY | cwd | No | Full path for output files |
| IS_RERUN_ACTIVATED | False | No | Set to True if rerun of failed tests is activated (affects reporting) |

Note:
Locator suggestions can be generated in two ways: either candidate strings are assembled from the DOM tree and an LLM selects the best option, or the LLM generates suggestions directly from the given context (DOM included). Set USE_LLM_FOR_LOCATOR_GENERATION to True (the default) to enable direct LLM generation.
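The DOM-assembly mode can be illustrated with a short sketch. The element dictionary and helper name below are hypothetical, not the library's actual API; they only show the idea of building candidate locators from element attributes before a ranker (here, the LLM) picks one:

```python
def assemble_candidates(element: dict) -> list[str]:
    """Build CSS locator candidates from an element's tag and attributes
    (illustrative only; the library's internals may differ)."""
    tag = element.get("tag", "*")
    candidates = []
    if "id" in element:
        candidates.append(f"{tag}#{element['id']}")
    if "name" in element:
        candidates.append(f"{tag}[name='{element['name']}']")
    if "placeholder" in element:
        candidates.append(f"{tag}[placeholder='{element['placeholder']}']")
    return candidates


# e.g. a renamed first-name field found in the DOM
print(assemble_candidates({"tag": "input", "id": "firstname"}))
```

With USE_LLM_FOR_LOCATOR_GENERATION=True, this assembly step is skipped and the LLM proposes locators directly from the DOM context.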

🔮 Outlook

While SelfhealingAgents currently focuses on healing broken locators, its architecture is designed for much more. The multi-agent system provides a modular, extensible foundation for integrating additional agents, each specialized in healing a different type of test failure.

Upcoming releases will expand beyond locator healing, allowing the multi-agent framework to automatically repair a broader range of common test errors and making your Robot Framework suites even more resilient with minimal manual intervention. So stay tuned!
