
agent-ci-verify

CI/CD verification pipeline for AI agent outputs.
Don't trust your agent's output — verify it.




Why agent-ci-verify?

AI agents are entering production, but teams still cannot answer the basic question: "can I trust this output?"

Existing tools are all "eval libraries" — you import them and write tests yourself. That's self-review, not independent verification.

agent-ci-verify is your agent's CI/CD pipeline — plug it in, and every agent output goes through an independent verification layer before it reaches your users.

Quick Start

pip install agent-ci-verify
agent-ci ./agent-output/

agent-ci-verify v1.0.0
Output dir: ./agent-output/
Checkers: schema, fact, diff

                               📋 Schema Checker
┏━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ✅   │ json_valid           │                                                ┃
┃ ✅   │ yaml_valid           │                                                ┃
┃ ✅   │ security_scan        │ No secrets detected                            ┃
┗━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

                               🔍 Fact Checker
┏━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ✅   │ fact:file_count      │ 1 files for '*.json'                           ┃
┃ ✅   │ fact:content_contains│ 'success' found in result.json                 ┃
┗━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

╭────────────────────────────────── Verdict ────────────────────────────────╮
│   ✅  PASS                                                                 │
╰───────────────────────────────────────────────────────────────────────────╯

Three Verification Layers

Layer    What it checks                                    Example
Schema   Format, structure, security                       Valid JSON? API key leaked? Required files present?
Fact     File existence, API reconciliation, LLM judging   Agent claimed result.json exists — does it? API returned 200?
Diff     Regression detection, semantic drift              Output changed vs baseline? Similarity below threshold?
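
To make the schema layer concrete, here is an illustrative sketch of its two simplest checks, JSON validity and required-file presence. This is not the library's actual implementation, and the function name is hypothetical:

```python
import json
from pathlib import Path

def schema_layer_check(output_dir, required_files):
    """Toy version of the schema layer: every .json file must parse,
    and every required file must exist. Returns a list of failure messages."""
    root = Path(output_dir)
    failures = []
    # Check that every JSON file in the output tree is syntactically valid.
    for f in root.rglob("*.json"):
        try:
            json.loads(f.read_text())
        except json.JSONDecodeError as e:
            failures.append(f"{f}: invalid JSON ({e.msg})")
    # Check that every declared required file is actually present.
    for rel in required_files:
        if not (root / rel).is_file():
            failures.append(f"missing required file: {rel}")
    return failures
```

An empty return value means the toy schema layer passed.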

Configuration

Drop .agent-ci.yaml in your agent project root:

pipeline:
  enabled_checkers: [schema, fact, diff]
  fail_fast: false

schema:
  security:
    enabled: true
  required_files:
    - "output/result.json"
  json_schemas:
    schemas/output.schema.json: "output/**/*.json"

fact:
  files:
    - pattern: "output/**/*.json"
      expected_count: 1
      min_size_bytes: 10
      content_checks:
        - type: contains
          value: "success"
        - type: not_contains
          value: "error"
  api:
    - endpoint: "https://api.example.com/health"
      expected_status: 200
  llm_judge:
    - file: "output/answer.md"
      rubric: "Is the answer factually correct?"
      model: "deepseek-v4-flash"

diff:
  baseline: "./baseline-output/"
  semantic_threshold: 0.7
  max_changed_files: 5
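
The semantic_threshold above can be pictured as a per-file similarity gate against the baseline. A toy sketch, using a character-level difflib ratio as a stand-in for whatever similarity metric the real diff checker applies (the function name is hypothetical):

```python
import difflib
from pathlib import Path

def semantic_drift_check(baseline_dir, output_dir, threshold=0.7):
    """Compare each baseline file with the corresponding new output file
    and flag (path, ratio) pairs whose similarity falls below threshold."""
    drifted = []
    for base_file in Path(baseline_dir).rglob("*"):
        if not base_file.is_file():
            continue
        rel = base_file.relative_to(baseline_dir)
        new_file = Path(output_dir) / rel
        if not new_file.is_file():
            drifted.append((str(rel), 0.0))  # file disappeared entirely
            continue
        ratio = difflib.SequenceMatcher(
            None, base_file.read_text(), new_file.read_text()).ratio()
        if ratio < threshold:
            drifted.append((str(rel), round(ratio, 2)))
    return drifted
```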

Security Scanning

Built-in patterns detect:

  • AWS Access Keys (AKIA...)
  • GitHub Tokens (ghp_...)
  • OpenAI API Keys (sk-proj-...)
  • JWT Tokens
  • Private Keys (RSA, EC, DSA, OpenSSH)
  • Password/Secret assignments
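
Detection of this kind generally boils down to regex matching over output files. An illustrative sketch (the patterns below are simplified stand-ins, not the library's actual rules):

```python
import re

# Simplified illustrative patterns -- real rules cover more token formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA|EC|DSA|OPENSSH) PRIVATE KEY-----"),
}

def scan_for_secrets(text):
    """Return the names of any secret patterns found in the given text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```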

CI Integration

# JSON output for programmatic parsing
agent-ci --json ./output/ | jq .verdict
# "PASS"

agent-ci --json ./output/ | jq .summary
# {"total_checks": 6, "passed": 5, "warnings": 1, "failed": 0}

# .github/workflows/agent-check.yml
- name: Verify agent output
  run: |
    pip install agent-ci-verify
    agent-ci --json ./output/ | tee result.json
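
To gate a pipeline on the result, the JSON output can be parsed in a small helper. A sketch assuming the top-level shape shown above ({"verdict": ..., "summary": ...}); the gate function itself is hypothetical, not part of the package:

```python
import json

def gate(result_json):
    """Return 0 (success) if the verdict is PASS, else 1 to fail the CI job."""
    result = json.loads(result_json)
    if result.get("verdict") != "PASS":
        print(f"agent-ci verdict: {result.get('verdict')} -- blocking deploy")
        return 1
    return 0
```

In a workflow step this would be invoked as something like `agent-ci --json ./output/ | python gate.py`, exiting nonzero on rejection.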

Audit Reports & History

# Generate a self-contained HTML audit report
agent-ci --report ./output/
# ✅ Report saved: ./output/agent-ci-report-20260507-120000.html

# View verification history
agent-ci --history
# 📋 Verification History (42 runs)
#   PASS                 20260507-120000  5✅ 0⚠️  0❌  → ./output/prod/
#   REJECT               20260507-115500  2✅ 1⚠️  2❌  → ./output/staging/

Reports are self-contained HTML files with a dark theme, suitable for auditors and compliance reviews.

Plugins

Write custom checkers in any .py file:

from agent_ci.checkers import BaseChecker
from agent_ci.types import CheckResult, CheckerReport, Severity

class SizeChecker(BaseChecker):
    name = "size"

    async def verify(self, output_dir):
        report = CheckerReport(checker_name=self.name)
        total = sum(f.stat().st_size for f in output_dir.rglob("*") if f.is_file())
        limit = self.config.get("size", {}).get("max_bytes", 10_000_000)
        severity = Severity.FAIL if total > limit else Severity.PASS
        report.checks.append(CheckResult(
            checker=self.name, check_name="size_limit",
            severity=severity,
            message=f"Output size: {total:,} bytes (limit: {limit:,})",
        ))
        return report

Configure in .agent-ci.yaml:

plugins:
  paths:
    - ./checks/

pipeline:
  enabled_checkers: [schema, fact, size]
  parallel: true  # Run all checkers concurrently

size:
  max_bytes: 5000000

Development

git clone https://github.com/Lewis-404/agent-ci-verify.git
cd agent-ci-verify
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
pytest tests/ -v

If the virtual environment is not activated, run local verification commands through ./.venv/bin/... instead.

Service Mode (v1.0+)

Run as a persistent HTTP API for CI/CD integration:

# Install with server dependencies
pip install 'agent-ci-verify[server]'

# Start the API server
agent-ci serve

# Health check
curl http://127.0.0.1:8899/health
# {"status":"ok","version":"1.0.0"}

# Verify agent output via API
curl -X POST http://127.0.0.1:8899/verify \
  -H "Content-Type: application/json" \
  -d '{"output_directory": "/path/to/agent/output"}'

Customize host/port: agent-ci serve --host 0.0.0.0 --port 8080.
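
From scripts, the same endpoint can be called with the standard library. A minimal client sketch; the request shape mirrors the curl example above, and build_verify_request is a hypothetical helper, not part of the package:

```python
import json
import urllib.request

def build_verify_request(output_dir, base_url="http://127.0.0.1:8899"):
    """Build the POST /verify request; send it with urllib.request.urlopen()."""
    payload = json.dumps({"output_directory": output_dir}).encode()
    return urllib.request.Request(
        f"{base_url}/verify",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the server running:
# result = json.load(urllib.request.urlopen(build_verify_request("/path/to/agent/output")))
```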

Design Rationale

This project grew out of a deep-dive report: a scan of 25+ Moltbook posts, 40+ HN discussions, and 10+ GitHub repositories kept pointing to the same conclusion: everyone is building eval libraries, but almost no one is building verification infrastructure.

  • Most competing tools follow the library pattern: import tool → write tests → run tests
  • Enterprises hesitate to put agents into production not because the agents are too weak, but because they cannot answer whether the output is trustworthy
  • The more agents teams deploy, the more verification demand grows — this is an infrastructure opportunity

See the deep-dive report for more context.

License

MIT — see LICENSE
