
Result Companion


Turn your Robot Framework test failures into instant, actionable insights with AI.

Demo

Why Result Companion?

Every QA engineer knows the pain: A test fails. You dig through logs. You trace keywords. You hunt for that one error message buried in thousands of lines. Hours wasted.

Result Companion changes that. It reads your output.xml, understands the entire test flow, and tells you exactly what went wrong and how to fix it—in seconds, not hours.
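For context, output.xml is plain XML, so the failure data the tool consumes is easy to inspect yourself. The stdlib-only sketch below is not Result Companion's internals — it's just an illustration of the kind of input it reads, using a simplified inline sample rather than a real Robot Framework file:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for a real Robot Framework output.xml
SAMPLE = """
<robot>
  <suite name="Login Test Suite">
    <test name="Login With Valid Credentials">
      <status status="FAIL">Element 'id=dashboard' not found after 10s.</status>
    </test>
    <test name="Logout">
      <status status="PASS"/>
    </test>
  </suite>
</robot>
"""

def failed_tests(xml_text):
    """Return (test name, failure message) pairs for failed tests."""
    root = ET.fromstring(xml_text)
    failures = []
    for test in root.iter("test"):
        status = test.find("status")
        if status is not None and status.get("status") == "FAIL":
            failures.append((test.get("name"), (status.text or "").strip()))
    return failures

print(failed_tests(SAMPLE))
```

Real output.xml files nest keywords, logs, and timestamps far deeper than this — which is exactly the volume of detail the AI analysis digests for you.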

What It Does

# Before: Manual debugging for hours
robot tests/                     # Test fails
# Now: Where did it fail? Why? What's the root cause?

# After: Instant AI analysis
result-companion analyze -o output.xml   # Get answers in seconds

Your enhanced log.html now includes:

  • Root Cause Analysis: Pinpoints the exact keyword and reason for failure
  • Test Flow Summary: Understand what happened at a glance
  • Actionable Fixes: Specific suggestions to resolve the issue

For CI logs and pipelines, use text output. An overall failure synthesis is run by default and added to both rc_log.html and text output. Disable with --no-overall-summary:

result-companion analyze -o output.xml --text-report rc_summary.txt
result-companion analyze -o output.xml --print-text-report
result-companion analyze -o output.xml --no-overall-summary

Copilot Review Agent

Replaces the manual "which commit broke this test?" investigation. AI cross-references Robot Framework failures with PR code changes via GitHub Copilot and posts the verdict as a PR comment:

result-companion analyze -o output.xml --json-report rc_summary.json
result-companion review -s rc_summary.json --repo owner/repo --pr 65

# Save to file for review/editing before posting
result-companion review -s rc_summary.json --repo owner/repo --pr 65 --preview -o review.md

See examples/PR_REVIEW.md for setup, flow diagram, flags, and GitHub Actions usage.
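As a rough sketch of the CI wiring, the two commands above could run as GitHub Actions steps like the following. The step names, job layout, and token wiring here are assumptions for illustration — treat examples/PR_REVIEW.md as the source of truth for the supported workflow:

```yaml
# Hypothetical workflow fragment -- verify against examples/PR_REVIEW.md
- name: Analyze Robot Framework results
  run: result-companion analyze -o output.xml --json-report rc_summary.json

- name: Post AI review to the PR
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    result-companion review -s rc_summary.json \
      --repo ${{ github.repository }} \
      --pr ${{ github.event.pull_request.number }}
```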

Example generated PR comment — PR #65

🔍 result-companion: Test Failure Analysis

Root cause: unclear — investigate further

  • Location: poc_pr_review.py:6 — file docstring and example usage reference interactive gh auth login which, if executed in CI without a token, can trigger GitHub 403/forbidden responses
  • Location: poc_pr_review.py:35 — the prompt/action builder constructs shell commands that would run gh pr comment without using a non-interactive token, risking authentication failures in CI

💡 Suggested Fix

Replace interactive GH auth and posting with a token-based non-interactive command:

action = (
    "Print the review comment body only — do NOT run gh pr comment."
    if preview
    else (
        f'echo "$GITHUB_TOKEN" | gh auth login --with-token && '
        f'gh pr comment {pr_number} --repo {repo_name} --body "<review text>"'
    )
)

Ensure CI provides GITHUB_TOKEN secret and keep preview=True by default in CI invocation.

Quick Start

Option 1: GitHub Copilot (Easiest if You Already Have Copilot)

Already have GitHub Copilot? Use it directly—no API keys needed.

pip install result-companion

# One-time setup
brew install copilot-cli   # or: npm install -g @github/copilot
copilot -i "/login"            # Login when prompted, then /exit

# Analyze your tests
result-companion analyze -o output.xml -c examples/configs/copilot_config.yaml

See Copilot setup guide.

Option 2: Local AI (Free, Private)

pip install result-companion

# Auto-setup local AI model
result-companion setup ollama
result-companion setup model deepseek-r1:1.5b

# Analyze your tests
result-companion analyze -o output.xml -c examples/configs/ollama_config.yaml

Option 3: Cloud AI (OpenAI, Azure, Google)

pip install result-companion

# Configure and run
export OPENAI_API_KEY="your-key"
result-companion analyze -o output.xml -c examples/configs/openai_config.yaml

Supports 100+ LLM providers via LiteLLM.
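The provider configs under examples/configs/ share one shape. As a sketch only — the `model` key name here is an assumption, not the shipped schema, so copy from the actual files in examples/configs/ — an OpenAI-style config might look like:

```yaml
# Hypothetical sketch -- key names may differ from examples/configs/openai_config.yaml
llm_config:
  model: gpt-4o-mini        # any LiteLLM-supported model string
  question_prompt: |
    Analyze the failed Robot Framework tests and explain
    the root cause and a suggested fix.
```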

Real Example

Your test fails with:

Login Test Suite
└── Login With Valid Credentials [FAIL]

Result Companion tells you:

**Flow**
- Open browser to login page ✓
- Enter username "testuser" ✓
- Enter password ✓
- Click login button ✓
- Wait for dashboard [FAILED after 10s timeout]

**Failure Root Cause**
The keyword "Wait Until Page Contains Element" failed because
element 'id=dashboard' was not found. Server returned 503 error
in network logs at timestamp 14:23:45.

**Potential Fixes**
- Check if backend service is running and healthy
- Verify dashboard element selector hasn't changed
- Increase timeout if service startup is slow

Beyond Error Analysis

Customize prompts for any use case:

# security_audit.yaml
llm_config:
  question_prompt: |
    Find security issues: hardcoded passwords,
    exposed tokens, insecure configurations...
# performance_review.yaml
llm_config:
  question_prompt: |
    Identify slow operations, unnecessary waits,
    inefficient loops...

See Custom Analysis examples for security audits, performance reviews, and more. The llm_config section also supports chunking prompts for large test suites.

Configuration Examples

Check examples/configs/ for ready-to-use configs:

  • GitHub Copilot (easiest if you already have Copilot)
  • Local Ollama setup
  • OpenAI, Azure, Google Cloud
  • Custom endpoints (Databricks, self-hosted)
  • Prompt customization for security, performance, quality reviews

Filter Tests by Tags

Analyze only the tests you care about:

# Analyze smoke tests only
result-companion analyze -o output.xml --include "smoke*"

# Exclude work-in-progress tests
result-companion analyze -o output.xml --exclude "wip,draft*"

# Analyze critical tests (including passes)
result-companion analyze -o output.xml --include "critical*" -i

Or use config file:

test_filter:
  include_tags: ["smoke", "critical*"]
  exclude_tags: ["wip", "flaky"]
  include_passing: false  # Analyze failures only

See tag_filtering_config.yaml for details.

Limitations

  • Text-only analysis (no screenshots/videos)
  • Large test suites processed in chunks
  • Local models: Need 4-8GB RAM + GPU/NPU for good performance (Apple Silicon, NVIDIA, AMD)

Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.

For bugs or feature requests, open an issue on GitHub.

Development Setup

make install                # install with dev dependencies
poetry run pre-commit install  # one-time: install pre-commit hooks

make test-unit              # unit tests only
make test-integration       # integration tests (e2e skipped automatically)
make test-e2e               # e2e only (requires Copilot CLI / Ollama locally)
make test-integration-all   # all integration tests including e2e

License

Apache 2.0 - See LICENSE

Disclaimer

Cloud AI providers may process your test data. Local models (Ollama) keep everything private on your machine.

You are responsible for data privacy. The creator takes no responsibility for data exposure, intellectual property leakage, or security issues. By using Result Companion, you accept all risks and ensure compliance with your organization's data policies.
