
agent-ci-verify

CI/CD verification pipeline for AI agent outputs.
Don't trust your agent's output — verify it.



Why agent-ci-verify?

AI agents are entering production, but no one can answer "can I trust this output?"

Existing tools are all "eval libraries" — you import them and write tests yourself. That's self-review, not independent verification.

agent-ci-verify is your agent's CI/CD pipeline — plug it in, and every agent output goes through an independent verification layer before it reaches your users.

Quick Start

pip install agent-ci-verify
agent-ci ./agent-output/
agent-ci-verify v1.1.0
Output dir: ./agent-output/
Checkers: schema, fact, diff

                               📋 Schema Checker
┏━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ✅   │ json_valid           │                                                ┃
┃ ✅   │ yaml_valid           │                                                ┃
┃ ✅   │ security_scan        │ No secrets detected                            ┃
┗━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

                               🔍 Fact Checker
┏━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ✅   │ fact:file_count      │ 1 files for '*.json'                           ┃
┃ ✅   │ fact:content_contains│ 'success' found in result.json                 ┃
┗━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

╭────────────────────────────────── Verdict ────────────────────────────────╮
│   ✅  PASS                                                                 │
╰───────────────────────────────────────────────────────────────────────────╯

Three Verification Layers

Layer   What it checks                                    Example
Schema  Format, structure, security                       Valid JSON? API key leaked? Required files present?
Fact    File existence, API reconciliation, LLM judging   Agent claimed result.json exists — does it? API returned 200?
Diff    Regression detection, semantic drift              Output changed vs baseline? Similarity below threshold?

Configuration

Drop .agent-ci.yaml in your agent project root:

pipeline:
  enabled_checkers: [schema, fact, diff]
  fail_fast: false

schema:
  security:
    enabled: true
  required_files:
    - "output/result.json"
  json_schemas:
    schemas/output.schema.json: "output/**/*.json"

fact:
  files:
    - pattern: "output/**/*.json"
      expected_count: 1
      min_size_bytes: 10
      content_checks:
        - type: contains
          value: "success"
        - type: not_contains
          value: "error"
  api:
    - endpoint: "https://api.example.com/health"
      expected_status: 200
  llm_judge:
    - file: "output/answer.md"
      rubric: "Is the answer factually correct?"
      model: "deepseek-v4-flash"

diff:
  baseline: "./baseline-output/"
  semantic_threshold: 0.7
  max_changed_files: 5
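
The diff layer compares current output against the baseline directory and fails when similarity drops below semantic_threshold. As a rough sketch of that threshold semantics (the real checker's scoring method isn't specified here, so difflib's character-level ratio stands in for the semantic score):

```python
from difflib import SequenceMatcher

SEMANTIC_THRESHOLD = 0.7  # mirrors diff.semantic_threshold in the config above

def similarity(baseline: str, current: str) -> float:
    """Similarity in [0, 1]; a character-level stand-in for semantic scoring."""
    return SequenceMatcher(None, baseline, current).ratio()

def drift_verdict(baseline: str, current: str) -> str:
    """FAIL when the output has drifted below the configured threshold."""
    return "PASS" if similarity(baseline, current) >= SEMANTIC_THRESHOLD else "FAIL"

print(drift_verdict("Deploy finished successfully.", "Deploy finished successfully."))  # PASS
```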

Security Scanning

Built-in patterns detect:

  • AWS Access Keys (AKIA...)
  • GitHub Tokens (ghp_...)
  • OpenAI API Keys (sk-proj-...)
  • JWT Tokens
  • Private Keys (RSA, EC, DSA, OpenSSH)
  • Password/Secret assignments
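
As a sketch of how regex-based secret detection works (the patterns below are simplified approximations for illustration, not the scanner's shipped regexes):

```python
import re

# Illustrative approximations of the pattern classes listed above.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "openai_api_key": re.compile(r"sk-proj-[A-Za-z0-9_\-]{20,}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC |DSA |OPENSSH )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of all secret patterns found in the text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

print(scan_text("aws_key = 'AKIAIOSFODNN7EXAMPLE'"))  # ['aws_access_key']
```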

CI Integration

# JSON output for programmatic parsing
agent-ci --json ./output/ | jq .verdict
# "PASS"

agent-ci --json ./output/ | jq .summary
# {"total_checks": 6, "passed": 5, "warnings": 1, "failed": 0}

# .github/workflows/agent-check.yml
- name: Verify agent output
  run: |
    pip install agent-ci-verify
    agent-ci --json ./output/ | tee result.json
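
The workflow step above only records the result; to gate the build on it, a small script can turn the JSON verdict into an exit code (a sketch assuming the result.json shape shown in the examples above):

```python
import json
import sys

def gate(result_path: str = "result.json") -> int:
    """Return 0 when the verdict is PASS, 1 otherwise (usable as an exit code)."""
    with open(result_path) as f:
        result = json.load(f)
    summary = result.get("summary", {})
    print(f"verdict={result['verdict']} passed={summary.get('passed', 0)} "
          f"failed={summary.get('failed', 0)}")
    return 0 if result["verdict"] == "PASS" else 1

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "result.json"))
```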

Audit Reports & History

# Generate a self-contained HTML audit report
agent-ci --report ./output/
# ✅ Report saved: ./output/agent-ci-report-20260507-120000.html

# View verification history
agent-ci --history
# 📋 Verification History (42 runs)
#   PASS                 20260507-120000  5✅ 0⚠️  0❌  → ./output/prod/
#   REJECT               20260507-115500  2✅ 1⚠️  2❌  → ./output/staging/

Reports are self-contained HTML with a dark theme, suitable for auditors and compliance reviews.

Plugins

Write custom checkers in any .py file:

from agent_ci.checkers import BaseChecker
from agent_ci.types import CheckResult, CheckerReport, Severity

class SizeChecker(BaseChecker):
    name = "size"

    async def verify(self, output_dir):
        report = CheckerReport(checker_name=self.name)
        total = sum(f.stat().st_size for f in output_dir.rglob("*") if f.is_file())
        limit = self.config.get("size", {}).get("max_bytes", 10_000_000)
        severity = Severity.FAIL if total > limit else Severity.PASS
        report.checks.append(CheckResult(
            checker=self.name, check_name="size_limit",
            severity=severity,
            message=f"Output size: {total:,} bytes (limit: {limit:,})",
        ))
        return report

Configure in .agent-ci.yaml:

plugins:
  paths:
    - ./checks/

pipeline:
  enabled_checkers: [schema, fact, size]
  parallel: true  # Run all checkers concurrently

size:
  max_bytes: 5000000

Docker

# Clone and build
git clone https://github.com/Lewis-404/agent-ci-verify.git
cd agent-ci-verify

# Generate API key
export AGENT_CI_API_KEY=$(openssl rand -hex 32)

# Start with docker-compose
docker compose up -d

# Verify it's running
curl http://localhost:8899/health

Or build manually:

docker build -t agent-ci-verify .
docker run -p 8899:8899 \
  -e AGENT_CI_API_KEY="$AGENT_CI_API_KEY" \
  -e AGENT_CI_ALLOWED_ROOTS="/data" \
  -v ./data:/data:ro \
  agent-ci-verify

Development

git clone https://github.com/Lewis-404/agent-ci-verify.git
cd agent-ci-verify
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
pytest tests/ -v

If the virtual environment is not activated in your shell, run local verification commands through ./.venv/bin/... directly.

Service Mode (v1.0.5+)

Run as a persistent HTTP API for CI/CD integration. Server mode uses structured logging (structlog), falling back to standard logging if structlog is unavailable.

# Install with server dependencies
pip install 'agent-ci-verify[server]'

# Start the API server
agent-ci serve

# Health check
curl http://127.0.0.1:8899/health
# {"status":"ok","version":"1.0.5","checkers":{"schema":"healthy","fact":"healthy","diff":"healthy"}}

# Verify agent output via API (API key REQUIRED)
curl -X POST http://127.0.0.1:8899/verify \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{"output_directory": "/path/to/agent/output"}'

Note: API key authentication is REQUIRED for POST /verify. Generate a key with openssl rand -hex 32 and set the AGENT_CI_API_KEY environment variable before starting the server.

Customize host/port: agent-ci serve --host 0.0.0.0 --port 8080.
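
For programmatic access from Python, a minimal client sketch using only the standard library (it assumes the request shape shown above: the /verify endpoint, a JSON body with output_directory, and the X-API-Key header):

```python
import json
import urllib.request

def build_verify_request(base_url: str, api_key: str, output_dir: str) -> urllib.request.Request:
    """Build the authenticated POST /verify request; send it with urllib.request.urlopen(req)."""
    body = json.dumps({"output_directory": output_dir}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/verify",
        data=body,
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )

req = build_verify_request("http://127.0.0.1:8899", "your-api-key", "/path/to/agent/output")
print(req.get_method(), req.full_url)  # POST http://127.0.0.1:8899/verify
```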

Environment Variables

Variable              Default     Description
AGENT_CI_API_KEY      (required)  API key for POST /verify authentication
AGENT_CI_RATE_LIMIT   10          Max requests per window, per IP+key
AGENT_CI_RATE_WINDOW  60          Rate-limit window in seconds

Design Rationale

This project started from a deep-dive report: after scanning 25+ Moltbook posts, 40+ HN discussions, and 10+ GitHub repositories, the conclusion was the same: everyone is building eval libraries, but almost no one is building verification infrastructure.

  • Most competing tools follow the library pattern: import tool → write tests → run tests
  • Enterprises hesitate to put agents into production not because the agents are incapable, but because no one can answer whether the output is trustworthy
  • The more agents teams deploy, the more verification demand grows — this is an infrastructure opportunity

See the deep-dive report for more context.

License

MIT — see LICENSE
