agent-ci-verify
CI/CD verification pipeline for AI agent outputs.
Don't trust your agent's output — verify it.
Why agent-ci-verify?
AI agents are entering production, but no one can answer "can I trust this output?"
Existing tools are all "eval libraries" — you import them and write tests yourself. That's self-review, not independent verification.
agent-ci-verify is your agent's CI/CD pipeline — plug it in, and every agent output goes through an independent verification layer before it reaches your users.
Quick Start
```shell
pip install agent-ci-verify
agent-ci ./agent-output/
```

```text
agent-ci-verify v1.0.3
Output dir: ./agent-output/
Checkers: schema, fact, diff

📋 Schema Checker
┏━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ✅   │ json_valid           │                                              ┃
┃ ✅   │ yaml_valid           │                                              ┃
┃ ✅   │ security_scan        │ No secrets detected                          ┃
┗━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

🔍 Fact Checker
┏━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ✅   │ fact:file_count      │ 1 file for '*.json'                          ┃
┃ ✅   │ fact:content_contains│ 'success' found in result.json               ┃
┗━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

╭────────────────────────────────── Verdict ────────────────────────────────╮
│ ✅ PASS                                                                   │
╰───────────────────────────────────────────────────────────────────────────╯
```
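The final verdict is an aggregate of the individual check results. As a rough sketch of the aggregation rule implied by the summary output (the `Severity` names here are modeled on the plugin API; the exact internal logic is an assumption):

```python
from enum import Enum

class Severity(Enum):
    PASS = "pass"
    WARN = "warn"
    FAIL = "fail"

def verdict(results: list[Severity]) -> str:
    """Aggregate per-check severities into one verdict.

    Any failing check rejects the whole output; warnings alone
    still pass, mirroring the summary counts shown above.
    """
    if any(r is Severity.FAIL for r in results):
        return "REJECT"
    return "PASS"

print(verdict([Severity.PASS, Severity.WARN]))  # PASS
print(verdict([Severity.PASS, Severity.FAIL]))  # REJECT
```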
Three Verification Layers
| Layer | What it checks | Example |
|---|---|---|
| Schema | Format, structure, security | Valid JSON? API key leaked? Required files present? |
| Fact | File existence, API reconciliation, LLM judging | Agent claimed result.json exists — does it? API returned 200? |
| Diff | Regression detection, semantic drift | Output changed vs baseline? Similarity below threshold? |
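For instance, the Diff layer's similarity gate can be approximated with nothing more than the standard library. This sketch uses `difflib` as a stand-in for whatever similarity measure the tool actually uses (an assumption on my part); the 0.7 default mirrors the `semantic_threshold` shown in the configuration below:

```python
from difflib import SequenceMatcher

def semantic_drift(baseline: str, current: str, threshold: float = 0.7) -> bool:
    """Return True when the new output has drifted below the similarity threshold.

    SequenceMatcher.ratio() yields a 0..1 similarity score; a real checker
    might use embeddings instead, but the pass/fail shape is the same.
    """
    similarity = SequenceMatcher(None, baseline, current).ratio()
    return similarity < threshold

print(semantic_drift("status: success", "status: success"))         # False
print(semantic_drift("status: success", "totally different output"))  # True
```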
Configuration
Drop a `.agent-ci.yaml` file in your agent project root:
```yaml
pipeline:
  enabled_checkers: [schema, fact, diff]
  fail_fast: false

schema:
  security:
    enabled: true
  required_files:
    - "output/result.json"
  json_schemas:
    schemas/output.schema.json: "output/**/*.json"

fact:
  files:
    - pattern: "output/**/*.json"
      expected_count: 1
      min_size_bytes: 10
      content_checks:
        - type: contains
          value: "success"
        - type: not_contains
          value: "error"
  api:
    - endpoint: "https://api.example.com/health"
      expected_status: 200
  llm_judge:
    - file: "output/answer.md"
      rubric: "Is the answer factually correct?"
      model: "deepseek-v4-flash"

diff:
  baseline: "./baseline-output/"
  semantic_threshold: 0.7
  max_changed_files: 5
```
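To make the `fact.files` semantics concrete, here is a minimal Python sketch of how one such rule could be evaluated. The `check_files` helper is illustrative only, not part of the package's API:

```python
from pathlib import Path

def check_files(output_dir: Path, pattern: str, expected_count: int,
                min_size_bytes: int, must_contain: str) -> list[str]:
    """Evaluate one `fact.files`-style rule; return a list of failure messages.

    An empty list means the rule passed.
    """
    failures = []
    matches = sorted(output_dir.glob(pattern))
    if len(matches) != expected_count:
        failures.append(f"{pattern}: expected {expected_count} files, got {len(matches)}")
    for path in matches:
        if path.stat().st_size < min_size_bytes:
            failures.append(f"{path}: smaller than {min_size_bytes} bytes")
        if must_contain not in path.read_text():
            failures.append(f"{path}: missing '{must_contain}'")
    return failures
```

The key property, which the tool's README stresses, is that these checks assert against the filesystem itself rather than trusting the agent's own claims about what it produced.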
Security Scanning
Built-in patterns detect:
- AWS Access Keys (`AKIA...`)
- GitHub Tokens (`ghp_...`)
- OpenAI API Keys (`sk-proj-...`)
- JWT Tokens
- Private Keys (RSA, EC, DSA, OpenSSH)
- Password/Secret assignments
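A couple of these detectors can be sketched as plain regular expressions. The patterns below are illustrative approximations, not the package's shipped patterns, which may be stricter:

```python
import re

# Illustrative patterns for two of the credential types listed above.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of any secret patterns found in the text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

print(scan_text("aws_key = 'AKIAIOSFODNN7EXAMPLE'"))  # ['aws_access_key']
print(scan_text("nothing sensitive here"))            # []
```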
CI Integration
```shell
# JSON output for programmatic parsing
agent-ci --json ./output/ | jq .verdict
# "PASS"

agent-ci --json ./output/ | jq .summary
# {"total_checks": 6, "passed": 5, "warnings": 1, "failed": 0}
```

```yaml
# .github/workflows/agent-check.yml
- name: Verify agent output
  run: |
    pip install agent-ci-verify
    agent-ci --json ./output/ | tee result.json
```
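To actually gate the job on the saved report, a follow-up step can parse `result.json` and convert the verdict into an exit code. A sketch, where the `verdict` and `summary.failed` field names follow the `jq` examples above and the `gate` helper itself is hypothetical:

```python
import json

def gate(report_path: str) -> int:
    """Turn an agent-ci JSON report into a CI exit code (0 = ok, 1 = reject)."""
    with open(report_path) as f:
        report = json.load(f)
    failed = report.get("summary", {}).get("failed", 0)
    if report.get("verdict") != "PASS" or failed > 0:
        print(f"Agent output rejected: verdict={report.get('verdict')}, failed={failed}")
        return 1
    return 0
```

In a workflow this would run right after the `tee result.json` step, so a rejected output fails the build before any deploy step runs.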
Audit Reports & History
```shell
# Generate a self-contained HTML audit report
agent-ci --report ./output/
# ✅ Report saved: ./output/agent-ci-report-20260507-120000.html

# View verification history
agent-ci --history
# 📋 Verification History (42 runs)
#   PASS    20260507-120000  5✅ 0⚠️ 0❌ → ./output/prod/
#   REJECT  20260507-115500  2✅ 1⚠️ 2❌ → ./output/staging/
```
Reports are self-contained HTML files with a dark theme, suitable for handing to auditors and compliance reviewers.
Plugins
Write custom checkers in any `.py` file:
```python
from agent_ci.checkers import BaseChecker
from agent_ci.types import CheckResult, CheckerReport, Severity

class SizeChecker(BaseChecker):
    name = "size"

    async def verify(self, output_dir):
        report = CheckerReport(checker_name=self.name)
        # Sum the sizes of every file under the output directory.
        total = sum(f.stat().st_size for f in output_dir.rglob("*") if f.is_file())
        limit = self.config.get("size", {}).get("max_bytes", 10_000_000)
        severity = Severity.FAIL if total > limit else Severity.PASS
        report.checks.append(CheckResult(
            checker=self.name, check_name="size_limit",
            severity=severity,
            message=f"Output size: {total:,} bytes (limit: {limit:,})",
        ))
        return report
```
Configure in `.agent-ci.yaml`:

```yaml
plugins:
  paths:
    - ./checks/

pipeline:
  enabled_checkers: [schema, fact, size]
  parallel: true  # Run all checkers concurrently

size:
  max_bytes: 5000000
```
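Plugin discovery of this kind is typically a small `importlib` loop over the configured paths. The loader below is a guess at the mechanism, using duck typing and a hypothetical `discover_checkers` helper rather than the package's real internals:

```python
import importlib.util
import inspect
from pathlib import Path

def discover_checkers(plugin_dir: Path) -> dict[str, type]:
    """Import each .py file in plugin_dir and collect checker-shaped classes.

    Duck-typed: anything with a `name` attribute and a `verify` method
    counts, so this sketch does not need the real BaseChecker import.
    """
    found = {}
    for path in sorted(plugin_dir.glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for _, cls in inspect.getmembers(module, inspect.isclass):
            if hasattr(cls, "verify") and getattr(cls, "name", None):
                found[cls.name] = cls
    return found
```

The dict keys are checker names, which is what lets `enabled_checkers: [schema, fact, size]` refer to a plugin by the same short name as the built-ins.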
Development
```shell
git clone https://github.com/Lewis-404/agent-ci-verify.git
cd agent-ci-verify
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
pytest tests/ -v
```
If the virtual environment is not activated, run local verification commands through `./.venv/bin/...`.
Service Mode (v1.0+)
Run as a persistent HTTP API for CI/CD integration:
```shell
# Install with server dependencies
pip install 'agent-ci-verify[server]'

# Start the API server
agent-ci serve

# Health check
curl http://127.0.0.1:8899/health
# {"status":"ok","version":"1.0.0"}

# Verify agent output via API
curl -X POST http://127.0.0.1:8899/verify \
  -H "Content-Type: application/json" \
  -d '{"output_directory": "/path/to/agent/output"}'
```
Customize host and port with `agent-ci serve --host 0.0.0.0 --port 8080`.
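From CI, the `/verify` endpoint can be called without any extra dependencies. A minimal standard-library client sketch, where the request and response shapes follow the `curl` example above and anything beyond that is an assumption:

```python
import json
import urllib.request

def verify_remote(output_directory: str,
                  base_url: str = "http://127.0.0.1:8899") -> dict:
    """POST an output directory to a running agent-ci server and
    return the parsed JSON response."""
    payload = json.dumps({"output_directory": output_directory}).encode()
    req = urllib.request.Request(
        f"{base_url}/verify",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```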
Design Rationale
This project started from a deep-dive report: after scanning 25+ Moltbook posts, 40+ HN discussions, and 10+ GitHub repositories, the conclusion was the same: everyone is building eval libraries, but almost no one is building verification infrastructure.
- Most competing tools follow the library pattern: import tool → write tests → run tests
- Enterprises hesitate to put agents into production not because agents are always weak, but because they cannot answer whether the output is trustworthy
- The more agents teams deploy, the more verification demand grows — this is an infrastructure opportunity
See the deep-dive report for more context.
License
MIT — see LICENSE
File details
Details for the file `agent_ci_verify-1.0.4.tar.gz`.

File metadata
- Download URL: agent_ci_verify-1.0.4.tar.gz
- Upload date:
- Size: 29.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `2cb13d51b42cbe6301ae75720a639ab2fe30512273e4ff0c9cfb99606b4c1d5f` |
| MD5 | `5282af32b4b25d9904f06ad4996d2a4c` |
| BLAKE2b-256 | `a2037f1d4c5d218f299327c01148c6d14f3e4215b1715c6b45a8fc6515710618` |
File details
Details for the file `agent_ci_verify-1.0.4-py3-none-any.whl`.

File metadata
- Download URL: agent_ci_verify-1.0.4-py3-none-any.whl
- Upload date:
- Size: 27.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `87763c79f30fd5686f495a65db2f8a892b06f8e3e5caf2a8f93146b295fbb440` |
| MD5 | `13787e2d90cb79b8fe258045a63e09ef` |
| BLAKE2b-256 | `36e979665419bc9a3cbd4dce5c861a5e2e1c5ee97629818e57c046245d0c351a` |