
AI Testing Swarm

AI Testing Swarm is a mutation-driven API testing framework built on top of pytest, with optional OpenAPI and OpenAI augmentation.

It generates a large set of deterministic negative/edge/security test cases for an API request, executes them (optionally in parallel, with retries/throttling), and produces a report (JSON/Markdown/HTML) with summaries.

Notes:

  • UI testing is not the focus of the current releases.
  • OpenAI features are optional and disabled by default.

Installation

pip install ai-testing-swarm

Optional (OpenAPI JSON schema validation for responses):

pip install "ai-testing-swarm[openapi]"

CLI entrypoint:

ai-test --help

Quick start (cURL input)

Create request.json:

{
  "curl": "curl --location https://postman-echo.com/post --header \"Content-Type: application/json\" --data \"{\\\"hello\\\":\\\"world\\\",\\\"count\\\":1}\""
}

Run:

ai-test --input request.json

Choose a report format:

ai-test --input request.json --report-format html

A report is written under:

  • ./ai_swarm_reports/<METHOD>_<endpoint>/<METHOD>_<endpoint>_<timestamp>.<json|md|html>

Reports include:

  • per-test results (including deterministic risk_score 0..100)
  • endpoint-level risk gate (PASS/WARN/BLOCK)
  • trend vs previous run for the same endpoint (risk delta + regressions)
  • summary counts by status code / failure type
  • optional AI summary (if enabled)
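As an illustration, a CI step could parse the latest JSON report and fail the build on a BLOCK gate. The field names below (`meta.gate_status`, `meta.endpoint_risk_score`) follow the report fields described in this README, but the exact JSON layout is an assumption; check a real report before relying on it.

```python
import json
from pathlib import Path

def check_gate(report_path: str) -> bool:
    """Return True if the endpoint gate allows the build to pass.

    Assumes the report exposes meta.gate_status (PASS / WARN / BLOCK)
    and meta.endpoint_risk_score as described above; verify against a
    real report first.
    """
    report = json.loads(Path(report_path).read_text())
    meta = report.get("meta", {})
    gate = meta.get("gate_status", "PASS")
    risk = meta.get("endpoint_risk_score", 0)
    print(f"gate={gate} risk={risk}")
    return gate != "BLOCK"
```

A CI job could call this on the newest file under ./ai_swarm_reports/ and exit non-zero when it returns False.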

Input formats

1) Raw cURL

{ "curl": "curl ..." }

2) Normalized request

{
  "method": "POST",
  "url": "https://example.com/api/login",
  "headers": {"content-type": "application/json"},
  "params": {"a": "b"},
  "body": {"username": "u", "password": "p"}
}

3) OpenAPI-driven (optional)

{
  "openapi": "./openapi.json",
  "path": "/pets",
  "method": "get",
  "headers": {"accept": "application/json"},
  "path_params": {"petId": "123"},
  "query_params": {"limit": 10},
  "body": null
}
  • OpenAPI JSON works by default.
  • OpenAPI YAML requires PyYAML to be installed.
  • Base URL is read from spec.servers[0].url; override it with AI_SWARM_OPENAPI_BASE_URL if your spec doesn't include servers.
  • When using OpenAPI input, the swarm will also optionally validate response status codes against operation.responses.
  • If jsonschema is installed (via ai-testing-swarm[openapi]) and the response is JSON, response bodies are validated against the OpenAPI application/json schema.
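With the jsonschema extra installed, the response-body check is conceptually equivalent to the sketch below. The schema here is a made-up example for a GET /pets response, not taken from a real spec, and the helper is illustrative rather than the framework's actual code.

```python
# pip install "ai-testing-swarm[openapi]"  (pulls in jsonschema)
from jsonschema import ValidationError, validate

# Hypothetical application/json schema for a GET /pets response.
pet_list_schema = {
    "type": "array",
    "items": {
        "type": "object",
        "required": ["id", "name"],
        "properties": {
            "id": {"type": "integer"},
            "name": {"type": "string"},
        },
    },
}

def response_matches_schema(body, schema) -> bool:
    """Return True if the decoded JSON body validates against the schema."""
    try:
        validate(instance=body, schema=schema)
        return True
    except ValidationError:
        return False
```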

What test cases are generated?

The swarm always includes:

  • happy_path (baseline)

Then generates broad coverage across:

  • Method misuse: same path with wrong HTTP methods (GET/PUT/PATCH/DELETE etc.)
  • Headers: missing/invalid Content-Type, accept variations, and other header tampering
  • Auth (if Authorization header exists): missing/invalid token tests
  • Body/query mutations (per field):
    • missing / null / empty / whitespace
    • type probes (int/bool/float/array/object)
    • boundary inputs (very long strings, huge ints, negative values)
    • unicode + special character payloads
  • Security payload probes (per field): SQLi/XSS/path traversal/log4j patterns
  • Whole-body mutations: null body, empty object, extra unexpected field

Output is deterministic unless OpenAI augmentation is enabled.
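The per-field mutation families listed above can be pictured as a deterministic generator. The concrete payload values below are illustrative choices, not the framework's exact set.

```python
def field_mutations(field: str):
    """Yield (case_name, mutated_value) pairs for one body/query field,
    mirroring the mutation families listed above (values are examples)."""
    yield (f"{field}_null", None)
    yield (f"{field}_empty", "")
    yield (f"{field}_whitespace", "   ")
    yield (f"{field}_type_int", 0)            # type probe
    yield (f"{field}_type_bool", True)        # type probe
    yield (f"{field}_long_string", "A" * 10_000)  # boundary input
    yield (f"{field}_huge_int", 2**63)        # boundary input
    yield (f"{field}_negative", -1)           # boundary input
    yield (f"{field}_unicode", "名前 \u202e test")  # unicode/special chars
    yield (f"{field}_sqli", "' OR '1'='1")    # security payload probe
```

Because the generator is a pure function of the field name, the resulting test set is reproducible run to run.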


Auth matrix runner (multiple tokens/headers)

To run the same request under multiple auth contexts (e.g., user/admin tokens), create auth_matrix.yaml:

cases:
  - name: user
    headers:
      Authorization: "Bearer USER_TOKEN"
  - name: admin
    headers:
      Authorization: "Bearer ADMIN_TOKEN"

Run:

ai-test --input request.json --auth-matrix auth_matrix.yaml

Each auth case is written as a separate report using a run_label suffix (e.g. __auth-user).

Safety mode (recommended for CI/demos)

Mutation testing can be noisy and may accidentally stress a real environment. To force safe demo runs only against public test hosts:

ai-test --input request.json --public-only

Or via env:

export AI_SWARM_PUBLIC_ONLY=1

Allowed hosts in public-only mode:

  • httpbin.org
  • postman-echo.com
  • reqres.in

Performance features

Parallel execution

  • Enabled by default via thread pool.
  • Control with:
    • AI_SWARM_WORKERS (default: 5)

Retry + backoff (flaky endpoints)

  • Retries on transient errors and status codes (408/429/5xx etc.)
  • Control with:
    • AI_SWARM_RETRY_COUNT (default: 1)
    • AI_SWARM_RETRY_BACKOFF_MS (default: 250)

Throttling (RPS)

  • Global throttle to avoid hammering a target:
    • AI_SWARM_RPS (default: 0 = disabled)

Max test cap

  • Avoids accidental DoS / CI timeouts:
    • AI_SWARM_MAX_TESTS (default: 80)
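The performance knobs above can be read with their documented defaults; this sketch shows how a runner might interpret them (illustrative, not the framework's actual code):

```python
import os

def perf_settings() -> dict:
    """Read the performance env vars with their documented defaults."""
    return {
        "workers": int(os.getenv("AI_SWARM_WORKERS", "5")),
        "retry_count": int(os.getenv("AI_SWARM_RETRY_COUNT", "1")),
        "retry_backoff_ms": int(os.getenv("AI_SWARM_RETRY_BACKOFF_MS", "250")),
        "rps": float(os.getenv("AI_SWARM_RPS", "0")),  # 0 = throttling disabled
        "max_tests": int(os.getenv("AI_SWARM_MAX_TESTS", "80")),
    }
```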

Reporting

Reports include:

  • summary.counts_by_failure_type
  • summary.counts_by_status_code
  • summary.slow_tests (based on SLA)
  • meta.endpoint_risk_score + meta.gate_status
  • trend.* (previous comparison if a prior report exists)

A static dashboard index is generated at:

  • ./ai_swarm_reports/index.html (latest JSON report per endpoint, sorted by regressions/risk)

SLA threshold:

  • AI_SWARM_SLA_MS (default: 2000)
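summary.slow_tests can be thought of as filtering results whose latency exceeds AI_SWARM_SLA_MS. In the sketch below, `duration_ms` is an assumed per-test field name used for illustration:

```python
import os

def slow_tests(results):
    """Return results whose duration exceeds the SLA threshold.

    `duration_ms` is an assumed field name; the real report field may differ.
    """
    sla_ms = int(os.getenv("AI_SWARM_SLA_MS", "2000"))
    return [r for r in results if r.get("duration_ms", 0) > sla_ms]
```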

Security:

  • Sensitive headers (Authorization, Cookie, API tokens, etc.) are redacted in the report.

Optional OpenAI augmentation (advanced)

A) Generate additional test cases (planner augmentation)

Enable:

export AI_SWARM_USE_OPENAI=1
export OPENAI_API_KEY=... 
export AI_SWARM_MAX_AI_TESTS=30

B) Human-readable AI summary in report

Enable:

export AI_SWARM_USE_OPENAI=1
export AI_SWARM_AI_SUMMARY=1
export OPENAI_API_KEY=...

Model selection:

  • AI_SWARM_OPENAI_MODEL (default: gpt-4.1-mini)

CLI help

ai-test --help

Release decisions

The swarm produces a release decision:

  • APPROVE_RELEASE
  • APPROVE_RELEASE_WITH_RISKS
  • REJECT_RELEASE

The decision is derived from deterministic rules (not an LLM).
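Deterministic rules of this kind can be expressed as a pure function of the gate status. The PASS/WARN/BLOCK-to-decision mapping below is an assumption for illustration, not the framework's actual thresholds:

```python
def release_decision(gate_status: str) -> str:
    """Map an endpoint gate status to a release decision.

    The mapping (and the conservative fallback) is illustrative.
    """
    return {
        "PASS": "APPROVE_RELEASE",
        "WARN": "APPROVE_RELEASE_WITH_RISKS",
        "BLOCK": "REJECT_RELEASE",
    }.get(gate_status, "REJECT_RELEASE")
```

Because the mapping is a lookup table with a fixed fallback, the same inputs always produce the same decision, with no LLM involved.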


License

MIT
