
AI Testing Swarm

AI Testing Swarm is a mutation-driven API testing framework, with optional OpenAPI and OpenAI augmentation, built on top of pytest.

It generates a large set of deterministic negative/edge/security test cases for an API request, executes them (optionally in parallel, with retries/throttling), and produces a report (JSON/Markdown/HTML) with summaries.

Notes:

  • UI testing is not the focus of the current releases.
  • OpenAI features are optional and disabled by default.

Installation

pip install ai-testing-swarm

Optional (OpenAPI JSON schema validation for responses):

pip install "ai-testing-swarm[openapi]"

CLI entrypoint:

ai-test --help

Quick start (cURL input)

Create request.json:

{
  "curl": "curl --location https://postman-echo.com/post --header \"Content-Type: application/json\" --data \"{\\\"hello\\\":\\\"world\\\",\\\"count\\\":1}\""
}

Run:

ai-test --input request.json

Choose a report format:

ai-test --input request.json --report-format html

A report is written under:

  • ./ai_swarm_reports/<METHOD>_<endpoint>/<METHOD>_<endpoint>_<timestamp>.<json|md|html>

Reports include:

  • per-test results (including deterministic risk_score 0..100)
  • endpoint-level risk gate (PASS/WARN/BLOCK)
  • trend vs previous run for the same endpoint (risk delta + regressions)
  • summary counts by status code / failure type
  • optional AI summary (if enabled)

Input formats

1) Raw cURL

{ "curl": "curl ..." }

2) Normalized request

{
  "method": "POST",
  "url": "https://example.com/api/login",
  "headers": {"content-type": "application/json"},
  "params": {"a": "b"},
  "body": {"username": "u", "password": "p"}
}

3) OpenAPI-driven (optional)

{
  "openapi": "./openapi.json",
  "path": "/pets",
  "method": "get",
  "headers": {"accept": "application/json"},
  "path_params": {"petId": "123"},
  "query_params": {"limit": 10},
  "body": null
}
  • OpenAPI JSON works by default.
  • OpenAPI YAML requires PyYAML to be installed.
  • The base URL is read from spec.servers[0].url; override it with AI_SWARM_OPENAPI_BASE_URL if your spec doesn’t include servers.
  • When using OpenAPI input, the swarm can also validate response status codes against operation.responses.
  • If jsonschema is installed (via ai-testing-swarm[openapi]) and the response is JSON, response bodies are validated against the OpenAPI application/json schema.
  • If an OpenAPI requestBody schema exists for the operation, the planner can also generate schema-based request-body fuzz cases:
    • schema-valid edge cases (boundary values that should still validate)
    • schema-invalid variants (missing required fields, wrong types, out-of-range values, etc.)
    • Fuzzing is controlled via CLI flags such as --no-openapi-fuzz / --openapi-fuzz-max-*.
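For reference, a minimal openapi.json that would satisfy the example above could look like the following. This is an illustrative spec, not something shipped with the package; the server URL, parameter, and response schema are placeholders:

{
  "openapi": "3.0.3",
  "info": {"title": "Pets API", "version": "1.0.0"},
  "servers": [{"url": "https://example.com/api"}],
  "paths": {
    "/pets": {
      "get": {
        "parameters": [
          {"name": "limit", "in": "query", "schema": {"type": "integer"}}
        ],
        "responses": {
          "200": {
            "description": "A list of pets",
            "content": {
              "application/json": {
                "schema": {"type": "array", "items": {"type": "object"}}
              }
            }
          }
        }
      }
    }
  }
}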

What test cases are generated?

The swarm always includes:

  • happy_path (baseline)

Then generates broad coverage across:

  • Method misuse: same path with wrong HTTP methods (GET/PUT/PATCH/DELETE etc.)
  • Headers: missing/invalid Content-Type, accept variations, and other header tampering
  • Auth (if Authorization header exists): missing/invalid token tests
  • Body/query mutations (per field):
    • missing / null / empty / whitespace
    • type probes (int/bool/float/array/object)
    • boundary inputs (very long strings, huge ints, negative values)
    • unicode + special character payloads
  • Security payload probes (per field): SQLi/XSS/path traversal/log4j patterns
  • Whole-body mutations: null body, empty object, extra unexpected field

Output is deterministic unless OpenAI augmentation is enabled.
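For example, with the quick-start body {"hello": "world", "count": 1}, the per-field and whole-body mutations described above translate into request bodies along these lines (the exact case names, payload values, and counts are implementation details and may change between releases):

{"hello": null, "count": 1}                          missing/null probe on "hello"
{"hello": "", "count": 1}                            empty string
{"hello": "world", "count": "1"}                     type probe: int sent as string
{"hello": "world", "count": 99999999999999}          boundary: huge int
{"hello": "AAAA...AAAA", "count": 1}                 boundary: very long string
{"hello": "' OR '1'='1", "count": 1}                 SQLi-style security payload
{"hello": "world", "count": 1, "unexpected": "x"}    whole-body: extra unexpected field
null                                                 whole-body: null body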


Auth matrix runner (multiple tokens/headers)

To run the same request under multiple auth contexts (e.g., user/admin tokens), create auth_matrix.yaml:

cases:
  - name: user
    headers:
      Authorization: "Bearer USER_TOKEN"
  - name: admin
    headers:
      Authorization: "Bearer ADMIN_TOKEN"

Run:

ai-test --input request.json --auth-matrix auth_matrix.yaml

Each auth case is written as a separate report using a run_label suffix (e.g. __auth-user).

Safety mode (recommended for CI/demos)

Mutation testing can be noisy and may accidentally stress a real environment. To force safe demo runs only against public test hosts:

ai-test --input request.json --public-only

Or via env:

export AI_SWARM_PUBLIC_ONLY=1

Allowed hosts in public-only mode:

  • httpbin.org
  • postman-echo.com
  • reqres.in

Performance features

Parallel execution

  • Enabled by default via thread pool.
  • Control with:
    • AI_SWARM_WORKERS (default: 5)

Retry + backoff (flaky endpoints)

  • Retries on transient errors and status codes (408/429/5xx etc.)
  • Control with:
    • AI_SWARM_RETRY_COUNT (default: 1)
    • AI_SWARM_RETRY_BACKOFF_MS (default: 250)

Throttling (RPS)

  • Global throttle to avoid hammering a target:
    • AI_SWARM_RPS (default: 0 = disabled)

Max test cap

  • Avoids accidental DoS / CI timeouts:
    • AI_SWARM_MAX_TESTS (default: 80)
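
For example, a conservative CI configuration might combine these settings like this (the values are illustrative, not built-in recommendations):

export AI_SWARM_WORKERS=2              # fewer parallel threads
export AI_SWARM_RETRY_COUNT=2          # retry transient failures twice
export AI_SWARM_RETRY_BACKOFF_MS=500   # backoff between retries, in milliseconds
export AI_SWARM_RPS=5                  # cap at roughly 5 requests/second
export AI_SWARM_MAX_TESTS=50           # hard cap on generated cases
ai-test --input request.json --public-only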

Reporting

Reports include:

  • summary.counts_by_failure_type
  • summary.counts_by_status_code
  • summary.slow_tests (based on SLA)
  • meta.endpoint_risk_score + meta.gate_status
  • trend.* (previous comparison if a prior report exists)

A static dashboard index is generated at:

  • ./ai_swarm_reports/index.html (latest JSON report per endpoint, sorted by regressions/risk)
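
An abridged JSON report might look roughly like this. Only keys listed above are shown; the numbers and the failure-type label are invented for illustration, and the exact shape can vary between releases:

{
  "meta": {
    "endpoint_risk_score": 12,
    "gate_status": "PASS"
  },
  "summary": {
    "counts_by_status_code": {"200": 41, "400": 30, "405": 4, "500": 1},
    "counts_by_failure_type": {"unexpected_5xx": 1},
    "slow_tests": []
  },
  "trend": {
    "regression_count": 0
  }
}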

Regression gate (CI)

To fail CI when the run regresses vs the previous JSON report for the same endpoint:

ai-test --input request.json --fail-on-regression

This checks report.trend.regression_count and exits with a non-zero code if regressions are detected.
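One way to wire this into a pipeline is a CI job that installs the package, runs the swarm, and lets the non-zero exit code fail the build. A minimal GitHub Actions sketch (the workflow is not part of this package; it assumes request.json is committed to the repository):

name: api-regression-gate
on: [push]
jobs:
  swarm:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install ai-testing-swarm
      - run: ai-test --input request.json --fail-on-regression
        env:
          AI_SWARM_PUBLIC_ONLY: "1"

Because the trend comparison needs the previous JSON report for the same endpoint, you will typically want to cache or otherwise persist ./ai_swarm_reports between runs.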

SLA threshold:

  • AI_SWARM_SLA_MS (default: 2000)

Security:

  • Sensitive headers (Authorization, Cookie, API tokens, etc.) are redacted in the report.

Optional OpenAI augmentation (advanced)

A) Generate additional test cases (planner augmentation)

Enable:

export AI_SWARM_USE_OPENAI=1
export OPENAI_API_KEY=... 
export AI_SWARM_MAX_AI_TESTS=30

B) Human-readable AI summary in report

Enable:

export AI_SWARM_USE_OPENAI=1
export AI_SWARM_AI_SUMMARY=1
export OPENAI_API_KEY=...

Model selection:

  • AI_SWARM_OPENAI_MODEL (default: gpt-4.1-mini)
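
Putting both features together, a full invocation might look like this (the key value is a placeholder):

export AI_SWARM_USE_OPENAI=1
export AI_SWARM_AI_SUMMARY=1
export AI_SWARM_MAX_AI_TESTS=30
export AI_SWARM_OPENAI_MODEL=gpt-4.1-mini
export OPENAI_API_KEY=...
ai-test --input request.json --report-format html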

CLI help

ai-test --help

Release decisions

The swarm produces a release decision:

  • APPROVE_RELEASE
  • APPROVE_RELEASE_WITH_RISKS
  • REJECT_RELEASE

The decision is derived from deterministic rules (not an LLM).
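
If you want to act on the decision from a script, you can read it back out of the newest JSON report. The sketch below is assumption-heavy: the field name release_decision and its location under meta are guesses, so inspect a generated report to confirm where your version stores the decision:

# Hypothetical: read the release decision from the most recent JSON report.
latest=$(ls -t ai_swarm_reports/*/*.json | head -n 1)
decision=$(jq -r '.meta.release_decision // "UNKNOWN"' "$latest")
echo "Release decision: $decision"
if [ "$decision" = "REJECT_RELEASE" ]; then
  exit 1
fi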


License

MIT
