Autotester - E2E testing for your codebase

Autotester is an open-source tool that automates end-to-end (E2E) testing.

  • 🤖 Run end-to-end tests using natural language descriptions
  • 🐛 Detect potential bugs and provide detailed fix explanations
  • Reduce testing overhead while improving code quality

Currently supporting Python and TypeScript, with more languages coming soon.

Quickstart

Install the package

pip install autotester

Add a config file named autotester.yml to your project. It tells Autotester what to test and how.

e2e:
  login-test: # Name of the test. You can add more
    url: "yourwebsite.com" # Starting URL of your app. It can be a local server or a remote server
    steps:
      - Login with Github
      - Go to the team page
      - Change the team name to "e2e"
      - Click on the "Save" button
      - Check that the team name is "e2e" # use words like "Check that" to assert the results of the test

If your environment is protected by HTTP Basic Auth, add an auth block:

e2e:
  auth:
    type: basic
    username: "dev"
    password: "dev123"
  login-test:
    url: "https://staging.example.com"
    steps:
      - Check the homepage loads

You can also provide auth credentials via environment variables (these take precedence over YAML):

export AUTOTESTER_AUTH_USERNAME="dev"
export AUTOTESTER_AUTH_PASSWORD="dev123"
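As a rough sketch of how this precedence and the resulting header fit together (illustrative only; `resolve_credentials` and `basic_auth_header` are hypothetical names, not part of Autotester's API):

```python
import base64
import os

def resolve_credentials(yaml_auth: dict) -> tuple[str, str]:
    """Environment variables take precedence over YAML values."""
    username = os.environ.get("AUTOTESTER_AUTH_USERNAME", yaml_auth.get("username"))
    password = os.environ.get("AUTOTESTER_AUTH_PASSWORD", yaml_auth.get("password"))
    return username, password

def basic_auth_header(username: str, password: str) -> str:
    """Build the standard HTTP Basic Auth header value (RFC 7617)."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

user, pwd = resolve_credentials({"username": "dev", "password": "dev123"})
print(basic_auth_header(user, pwd))  # "Basic ZGV2OmRldjEyMw==" when no env overrides are set
```

The header value is simply `Basic ` followed by the base64 encoding of `username:password`, which is what the browser sends for HTTP Basic Auth.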

Posthog Session Replay (Optional)

If your website uses Posthog with session replay enabled, Autotester can capture a recording of each failed test and include a direct link in the report. This makes it easy for QA engineers to watch exactly what happened during a failure.

Add a posthog block to your autotester.yml:

e2e:
  posthog:
    project_id: "12345" # Your Posthog project ID (found in Project Settings)
    host: "https://us.posthog.com" # Optional, defaults to https://us.posthog.com
  login-test:
    url: "localhost:3000"
    steps:
      - Login with Github
      - Check that the dashboard loads

Then set the POSTHOG_PERSONAL_API_KEY environment variable. The key needs the session_recording:read and sharing_configuration:write scopes. You can create one in your Posthog personal API keys settings.

export POSTHOG_PERSONAL_API_KEY="phx_your_personal_api_key"

When a test fails, the report will include a link to the Posthog recording:

login-test: Failed!
  Comment: The dashboard did not load after login
  Recording: https://us.posthog.com/shared/abc123token

How it works: When Autotester's browser navigates your website, the Posthog JS SDK (already running in the page) records the session automatically. After each test, Autotester reads the session ID from the page via posthog.getSessionId(). If the test failed, it calls the Posthog API to enable sharing for that recording and includes the resulting link in the report. No extra code or instrumentation is needed on your website beyond having Posthog installed.

Self-hosted / EU Cloud: Set host to your Posthog instance URL (e.g. https://eu.posthog.com or https://posthog.yourcompany.com).

Finding your project ID: Go to your Posthog project settings -- the project ID is shown at the top of the page.

Base URL (Optional)

If you use the same autotester.yml across different environments (e.g. staging, production) that share the same relative paths but have different hostnames, you can set a base URL. Test URLs that are relative (no http:// or https:// scheme) will be combined with the base URL automatically. Absolute test URLs are always used as-is.

Add a base_url to your autotester.yml:

e2e:
  base_url: "https://staging.example.com"
  login-test:
    url: "/login"       # resolved to https://staging.example.com/login
    steps:
      - Check the login page loads
  dashboard-test:
    url: "/dashboard"   # resolved to https://staging.example.com/dashboard
    steps:
      - Check the dashboard loads

You can also (or instead) set the base URL via an environment variable, which takes precedence over the YAML value:

export AUTOTESTER_BASE_URL="https://production.example.com"

This makes it easy to reuse the same config file across environments:

# staging
AUTOTESTER_BASE_URL="https://staging.example.com" autotester

# production
AUTOTESTER_BASE_URL="https://production.example.com" autotester

Tests with absolute URLs (e.g. url: "https://other-service.com/health") are never modified, regardless of the base URL setting.

Step Limits & Timeouts (Optional)

By default, Autotester automatically calculates a sensible step limit and wall-clock timeout for each test based on its number of steps. This prevents the browser agent from running indefinitely if it gets stuck in a loop.

The defaults are:

  • max_steps: number_of_steps * 5 (minimum 20) -- the maximum number of agent steps (LLM calls) allowed per test
  • timeout: number_of_steps * 60 (minimum 180) -- the maximum wall-clock time in seconds per test

For example, a test with 6 steps gets a budget of 30 agent steps and a timeout of 360 seconds (6 minutes).
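The defaults amount to a direct transcription of the formulas above (`default_limits` is an illustrative name, not Autotester's API):

```python
def default_limits(num_steps: int) -> tuple[int, int]:
    """Compute the default (max agent steps, timeout in seconds) for a test."""
    max_steps = max(num_steps * 5, 20)   # minimum 20 agent steps
    timeout = max(num_steps * 60, 180)   # minimum 180 seconds
    return max_steps, timeout

print(default_limits(6))  # → (30, 360)
print(default_limits(2))  # → (20, 180), the minimums kick in
```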

You can override these globally or per test in your autotester.yml:

e2e:
  max_steps: 40       # global default for all tests
  timeout: 300        # global timeout in seconds
  login-test:
    url: "localhost:3000"
    max_steps: 25     # override for this test only
    timeout: 180      # override for this test only
    steps:
      - Login with Github
      - Check that the dashboard loads

When a test hits either limit, it is marked as failed with a descriptive message (e.g. "Test timed out after 300s").


That's it for configuration. To run Autotester, you need an OpenAI API key and Chrome or Chromium installed.

If you don't already have Chrome installed, you can use the browser-use CLI to install Chromium:

browser-use install

Then export your OpenAI key and run:

export OPENAI_API_KEY="your-openai-api-key"
autotester

You will get a summary report like the following:

🖥️ 1/1 E2E tests

login-test: Success!

GitHub Action

Autotester can be used in a GitHub Action to run E2E tests after you release a new version.

Check out the action's README for more information, but here's a quick example:

name: Run Autotester

on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  after-deployment:
    runs-on: ubuntu-latest
    steps:
      - uses: cyberwave-os/autotester-action@v0.1.0
        with:
          action-type: "e2e"
          # Optional: use a custom config file (defaults to autotester.yml)
          # config-file: "tests/e2e.yml"
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          STARTING_URL: "http://yourstaging.yourwebsite.com"
          # Optional: override base URL for this environment
          # AUTOTESTER_BASE_URL: "https://staging.yourwebsite.com"
          # Optional: config file via env var (alternative to config-file input)
          # AUTOTESTER_CONFIG: "tests/e2e.yml"
          # Optional: for Basic Auth protected environments
          # AUTOTESTER_AUTH_USERNAME: ${{ secrets.AUTOTESTER_AUTH_USERNAME }}
          # AUTOTESTER_AUTH_PASSWORD: ${{ secrets.AUTOTESTER_AUTH_PASSWORD }}

With Posthog Session Replay

If your website uses Posthog, you can get recording links for failed tests. Add a posthog block to your autotester.yml (see the Posthog section above) and pass the API key:

name: Autotester E2E with Posthog Replay

on:
  pull_request:
    branches: [main]

jobs:
  e2e-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Run Autotester E2E Tests
        uses: cyberwave-os/autotester-action@v0.1.0
        with:
          action-type: "e2e"
          verbose: "true"
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          POSTHOG_PERSONAL_API_KEY: ${{ secrets.POSTHOG_PERSONAL_API_KEY }}

The recording URL appears in the console output and is also available in .autotester/e2e.json (as recording_url on each test) for use in downstream steps like Slack notifications or PR comments.
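A downstream step could collect the recording links from that report, for example to post in a Slack message or PR comment. The exact JSON schema is not documented here, so the `tests`, `name`, and `status` keys below are assumptions; only the `recording_url` field on each test is confirmed above:

```python
def failed_recordings(report: dict) -> list[tuple[str, str]]:
    """Collect (test_name, recording_url) pairs from a parsed e2e.json report.
    The report shape used here is an assumption for illustration."""
    return [
        (t["name"], t["recording_url"])
        for t in report.get("tests", [])
        if t.get("recording_url")
    ]

# In a real workflow step: report = json.loads(Path(".autotester/e2e.json").read_text())
sample = {"tests": [
    {"name": "login-test", "status": "failed",
     "recording_url": "https://us.posthog.com/shared/abc123token"},
    {"name": "dashboard-test", "status": "passed"},
]}
print(failed_recordings(sample))
# → [('login-test', 'https://us.posthog.com/shared/abc123token')]
```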

CLI Reference

Commands

  • autotester: Without any command, runs E2E tests if defined in the config file
  • autotester e2e: Runs end-to-end tests defined in the config file

Command Options

Global Options

  • --config: Path to the YAML configuration file. Resolution order: CLI flag > AUTOTESTER_CONFIG env var > autotester.yml
  • -v, --verbose: Enable verbose logging output
  • --version: Display Autotester version number

E2E Test Command

autotester e2e [--config <config_file>] [--verbose]
autotester --config my-tests.yml
  • --config: (Optional) Path to the YAML configuration file (defaults to autotester.yml)
  • -v, --verbose: (Optional) Enable verbose logging output

Environment Variables

  • OPENAI_API_KEY: (Required) Your OpenAI API key
  • AUTOTESTER_CONFIG: Path to the YAML configuration file (overridden by --config CLI flag, defaults to autotester.yml)
  • CHROME_INSTANCE_PATH: Path to your Chrome instance. Defaults to /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
  • AUTOTESTER_BASE_URL: Base URL to combine with relative test URLs (overrides base_url in YAML)
  • AUTOTESTER_AUTH_USERNAME: Username for HTTP Basic Auth (overrides auth.username in YAML)
  • AUTOTESTER_AUTH_PASSWORD: Password for HTTP Basic Auth (overrides auth.password in YAML)
  • POSTHOG_PERSONAL_API_KEY: Personal API key for Posthog session replay integration (optional, requires session_recording:read and sharing_configuration:write scopes)

Run Tests with Docker

Contributors who want a reproducible test environment can run the test suite in Docker.

From the repository root:

make test-docker

This command mounts your local project into the container and runs pytest. If you want to build manually first:

make test-docker-build
docker compose -f tests/docker-compose.yml run --rm test

Roadmap

Credits
