
AICertify — Compliance-as-code for AI systems

English | 简体中文 | 日本語 | 한국어 | हिन्दी

Audit your AI against the EU AI Act, NIST AI RMF, and 13 more frameworks — one contract, one command, one report.

PyPI · CI · Stars · Python 3.12+ · Apache 2.0 · Built on OPA · 94 Rego Policies · PRs Welcome

From AI app to audit-ready report: AI Application -> AICertify Contract -> OPA Policy Evaluation -> Compliance Report


Regulators are moving faster than your governance docs. The EU AI Act is in force. NIST AI RMF is the de facto US standard. India, Brazil, and Singapore are next. AICertify lets you encode those obligations as executable Open Policy Agent policies, run them against captured AI interactions, and produce audit-ready reports in PDF, Markdown, JSON, or HTML.

It's the missing link between "we have a responsible-AI policy" and "we can prove it."

Use it when you need to:

  • turn AI governance policies into executable checks
  • produce audit-ready compliance evidence on every release
  • evaluate AI interactions against named regulatory frameworks (EU AI Act, NIST AI RMF, FERPA, fair-lending, FAA/EASA aviation, …)
  • generate Markdown, JSON, HTML, or PDF reports your auditor can read
  • integrate AI compliance checks into CI/CD

AICertify is part of the Open Policy Agent ecosystem — built on the same policy engine that powers Kubernetes admission, microservice authorisation, and infrastructure governance at scale.

If AICertify helps you, please star the repo. It helps AI governance and policy-as-code practitioners discover the project.


Quick Start

# 1. Install AICertify (~3–5 min on first install; pulls langchain + transformers)
pip install aicertify

# 2. Install the OPA binary, one-time (~80 MB; Linux amd64 shown, see OPA docs for other platforms)
sudo curl -L https://openpolicyagent.org/downloads/latest/opa_linux_amd64 -o /usr/local/bin/opa && sudo chmod +x /usr/local/bin/opa

# 3. Run the bundled demo — no contract file, no API keys, ~10 seconds
aicertify demo

aicertify demo loads a bundled sample contract, evaluates it against the EU AI Act policy set via OPA, and writes aicertify_demo_report.md to the current directory. Open the report — that's what your audit deliverable looks like.

aicertify demo recording — banner, spinners, evaluation progress, generated report path

For richer evaluations (LangFair fairness metrics, DeepEval content-safety scoring, PDF reports), see examples/quickstart.py and the forkable example bots — each ships an input_contract.json, a policy_config.yaml, and a run.py.

For development

git clone https://github.com/Principled-Evolution/aicertify.git
cd aicertify
pip install -e .

Minimal Python usage

from aicertify import regulations, application

# 1. Pick the regulations you want to certify against
regs = regulations.create("my_regulations")
regs.add("eu_ai_act")

# 2. Wrap your AI app
app = application.create(
    name="customer-support-bot",
    model_name="gpt-4o",
    model_version="2024-08-06",
)

# 3. Feed it real interactions
app.add_interaction(
    input_text="I want a refund for my order",
    output_text="I can help with that. Could you share your order number?",
)

# 4. Evaluate and get reports back (evaluate() is a coroutine,
#    so call it from async code or drive it with asyncio.run)
await app.evaluate(regulations=regs, report_format="pdf", output_dir="reports")

That's the whole loop. Contract → interactions → evaluate → report.


Why AICertify

Most AI-governance tooling is either:

  • A vendor SaaS that locks your audit trail behind a login (Credo AI, Holistic AI), or
  • A research toolkit focused on a single dimension — fairness metrics (Fairlearn, AI Fairness 360) or explainability (Microsoft RAI Toolbox).

Neither produces the document a regulator actually asks for: evidence that you tested this AI system against a named regulation, with reproducible policies and a dated report.

AICertify is built for that artifact.

| | AICertify | Fairlearn / AIF360 | MS RAI Toolbox | Credo AI |
|---|---|---|---|---|
| Open source | ✅ Apache 2.0 | ✅ MIT | ✅ MIT | ❌ Closed |
| On-prem / air-gapped | | | | |
| Named regulatory frameworks | EU AI Act, NIST RMF, Brazil AI Bill, India DPDP, +11 more | ❌ (fairness only) | ❌ (toolkit) | |
| Policy-as-code (auditable, diff-able) | ✅ OPA / Rego | | | |
| Industry verticals out of the box | Aviation, Banking, Healthcare, Automotive, Education | | | Partial |
| Generates audit-ready reports | ✅ PDF / MD / JSON / HTML | | | Partial |
| Custom policies | ✅ Drop a .rego file | N/A | | ✅ (paid) |

How It Works

AICertify architecture: Your AI App feeds a Contract, which flows through Evaluators (Fairness, ContentSafety, RiskManagement, Compliance) into the OPA Engine with 94 Rego policies, producing an audit deliverable via the Report Generator

  1. Contract — A JSON description of your AI application: model, version, captured interactions, metadata.
  2. Evaluators — Pluggable Python evaluators (Fairness, ContentSafety, RiskManagement, Compliance) extract metrics from your interactions.
  3. OPA policies — The metrics get evaluated against the regulation's Rego policies (sourced from the gopal policy library).
  4. Report — A formatted, dated artifact you can hand to legal, an auditor, or your AI risk committee.

Because the policies are declarative Rego, they version, diff, and review like any other code. When a regulation changes, you bump the policy — not your evaluation harness.
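As a sketch of step 1, the contract is just structured JSON describing the application, model, and captured interactions. The exact schema is defined by AICertify, so the field names below are illustrative assumptions, not the authoritative contract format:

```python
import json

# Illustrative contract structure. Field names here are assumptions
# for illustration, not AICertify's authoritative schema.
contract = {
    "application": {"name": "customer-support-bot"},
    "model": {"name": "gpt-4o", "version": "2024-08-06"},
    "interactions": [
        {
            "input_text": "I want a refund for my order",
            "output_text": "I can help with that. Could you share your order number?",
        }
    ],
    "metadata": {"captured_at": "2025-01-15"},
}

# Serialized, this is the kind of artifact that flows into the
# evaluators and, ultimately, the OPA policy evaluation.
contract_json = json.dumps(contract, indent=2)
```

Because the contract is plain JSON, it can be committed alongside the policies and diffed release-to-release, just like the Rego.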


Regulatory Coverage

Regulatory coverage: 94 policies across 15+ frameworks and 5 industries -- EU AI Act, NIST AI RMF, India DPDP, Brazil AI Bill, RTCA DO-365/366, FAA Part 107, EASA SORA, ICAO Doc 10019, Healthcare, Banking and Financial Services, Automotive, Education, Global, Aviation, AIOps, Corporate

AICertify runs against the gopal policy library — 94 production OPA policies across these frameworks:

International

  • EU AI Act — 29 policies covering prohibited practices, biometric ID, manipulation, transparency, technical documentation, human oversight, GPAI obligations
  • NIST AI RMF — Govern, Map, Measure, Manage + AI 600-1
  • India Digital Policy — DPDP-aligned obligations
  • Brazil AI Governance Bill — Algorithmic governance requirements
  • Aviation standards — ICAO Doc 10019, RTCA DO-365/366, ASTM F3442, ISO 21384, FAA Part 107, EASA SORA

Industry-specific

  • Aviation (17 policies) — Detect-and-avoid, certification, design, integration validation
  • Education (12 policies) — FERPA, COPPA, proctoring, human-in-the-loop grading
  • Banking & Financial Services — Model risk, fair lending
  • Healthcare — Patient safety, diagnostic safety
  • Automotive — Vehicle safety integration

Global & Operational

  • Global — Accountability, fairness, transparency, explainability, content safety, risk management, security
  • Corporate — InfoSec, governance
  • AIOps & Cost — Scalability, resource efficiency

Don't see your regulation? Add a Rego file. The library is designed to be extended.


CLI

python -m aicertify.cli \
  --contract path/to/contract.json \
  --policy aicertify/opa_policies/international/eu_ai_act/v1 \
  --report-format pdf \
  --output-dir reports/

Useful flags:

| Flag | Purpose |
|---|---|
| `--contract` | Path to the AI application contract JSON |
| `--policy` | Path to the OPA policy folder to evaluate against |
| `--report-format` | `pdf`, `markdown`, `json`, `html` (default: `pdf`) |
| `--evaluators` | Restrict to specific evaluators (e.g. `Fairness ContentSafety`) |
| `--output-dir` | Where reports land (default: `./reports`) |
| `--verbose` | Verbose logging |

See examples/quickstart.py for the full Python API.
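For CI/CD, one minimal sketch is to compose the same CLI invocation programmatically and gate the build on its exit code. The flag names below come from the table above; the `build_cli_cmd` helper itself is hypothetical, not part of AICertify:

```python
import subprocess
import sys

def build_cli_cmd(contract: str, policy: str, fmt: str = "json",
                  output_dir: str = "reports/") -> list[str]:
    """Compose the aicertify CLI invocation from the documented flags."""
    return [
        sys.executable, "-m", "aicertify.cli",
        "--contract", contract,
        "--policy", policy,
        "--report-format", fmt,
        "--output-dir", output_dir,
    ]

cmd = build_cli_cmd(
    "contract.json",
    "aicertify/opa_policies/international/eu_ai_act/v1",
)

# In a CI job you would then run it and fail the pipeline on error:
# result = subprocess.run(cmd)
# sys.exit(result.returncode)
```

Emitting JSON in CI keeps the result machine-checkable, while the PDF report remains the human-facing audit deliverable.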


See the output

You don't have to install anything to see what AICertify produces. Pre-generated reports are committed to the repo:

Anatomy of an audit-ready report: header with framework name, application, model and date; executive summary; policy results table; risk assessment bar chart; remediation guidance; footer attributing AICertify v0.7.0

Open the PDFs. That's what your auditor wants.


Status

AICertify is in beta (v0.7.0) — the API may evolve before the 1.0 release. Production-ready frameworks today:

  • ✅ EU AI Act
  • ✅ Global evaluators (fairness, content safety, transparency)
  • ✅ Healthcare, BFS, Automotive industry policies
  • ✅ Aviation policy set (RTCA, ASTM, FAA, EASA)
  • 🚧 NIST AI RMF — partial coverage
  • 🚧 India Digital Policy — early stage

Track progress in the policy library roadmap.


For OPA / Rego users

If you already use OPA for Kubernetes admission, microservice authorisation, or infrastructure governance, AICertify is the AI-system slot in your existing policy strategy.

  • Bring your own Rego policies. Drop a .rego file into the policy folder and it evaluates alongside the bundled set.
  • Evaluate AI interactions through OPA. Captured inputs, outputs, and metrics flow into your policies via the standard OPA input document.
  • Generate audit-ready evidence. PDF / Markdown / JSON / HTML, one command.
  • Use gopal as the policy library underneath. 94 production Rego policies covering EU AI Act, NIST AI RMF, aviation safety, FERPA, fair lending, and more.
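The second bullet can be sketched as follows. OPA conventionally receives the data to evaluate under an `input` envelope; the inner key names here are illustrative assumptions, not AICertify's internal schema:

```python
import json

def to_opa_input(interaction: dict, metrics: dict) -> dict:
    """Wrap a captured interaction and its evaluator metrics in the
    standard OPA input envelope: {"input": {...}}.
    (Inner key names are illustrative, not AICertify's actual schema.)"""
    return {
        "input": {
            "interaction": interaction,
            "metrics": metrics,
        }
    }

doc = to_opa_input(
    {"input_text": "I want a refund",
     "output_text": "Sure, could you share your order number?"},
    {"fairness_score": 0.97, "toxicity": 0.01},
)

# A Rego rule would read this document as, e.g., input.metrics.toxicity.
opa_input_json = json.dumps(doc)
```

This is the same input-document convention your existing Kubernetes or microservice policies already consume, which is what makes AI interactions evaluable by ordinary Rego.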

AICertify is listed in the Open Policy Agent ecosystem as the AI-governance entry alongside Gopal.


Why AICertify?

Most AI governance programs live in PDFs, spreadsheets, and policy documents. They describe what should happen but do not prove what did.

AICertify turns governance rules into executable policy checks.

Instead of saying:

"Our chatbot follows our responsible AI policy."

You can produce:

"Here is the captured interaction, the policy version, the OPA evaluation result, and the generated audit report."

AICertify is for AI teams, governance teams, auditors, and platform engineers who need AI compliance evidence that can be read, run, reviewed, and repeated.

See the full positioning in docs/why-aicertify.md.


Who should contribute?

AICertify is especially useful for:

  • AI engineers building regulated AI systems
  • Governance, risk, and compliance (GRC) teams producing audit evidence
  • Auditors and model risk professionals evaluating third-party AI
  • OPA / Rego users interested in AI-specific policy authoring
  • Responsible AI researchers wanting reproducible benchmarks
  • Python developers interested in compliance automation

Non-code contributions are welcome: examples, policy mappings, docs, tests, report templates, and regulatory notes.

A good place to start is the good first issue and help wanted labels.


Contributing

We welcome:

  • New regulatory frameworks (open an issue first to align scope)
  • Industry-specific policies you've battle-tested
  • New evaluators (fairness, safety, robustness — see aicertify/evaluators/)
  • Bug reports with a minimal reproducing contract
  • Documentation, examples, and tutorials

Start with CONTRIBUTING.md, the Code of Conduct, and the open contributor issues.

For security issues, please follow the Security Policy — report privately to security@principledevolution.ai, not via public issue.


Related Projects

  • gopal — The OPA policy library AICertify uses under the hood. Use it standalone with the OPA CLI if you don't need the Python framework.
  • Open Policy Agent — The policy engine.
  • Regal — Rego linter used to keep policies clean.

License

Apache License 2.0 — see LICENSE.


⭐ If AICertify is useful to you, please star the repo and share it with one colleague.
Every star helps AI governance and policy-as-code practitioners discover the project.

Built by Principled Evolution · Policies you can read, run, and prove.
