
AICertify — Compliance-as-code for AI systems

Audit your AI against the EU AI Act, NIST AI RMF, and 13 more frameworks — one contract, one command, one report.

Python 3.12+ · Apache 2.0 · Built on OPA · 94 Rego Policies

From AI app to audit-ready report: AI Application -> AICertify Contract -> OPA Policy Evaluation -> Compliance Report


📦 Full documentation, examples, contributing guide, translations (zh-CN / ja-JP / ko-KR / hi-IN), and 94 Rego policies live in the GitHub repository.

Regulators are moving faster than your governance docs. The EU AI Act is in force. NIST AI RMF is the de-facto US standard. India, Brazil, and Singapore are next. AICertify lets you encode those obligations as executable Open Policy Agent policies, run them against captured AI interactions, and produce audit-ready reports in PDF, Markdown, JSON, or HTML.

It's the missing link between "we have a responsible-AI policy" and "we can prove it."

Use it when you need to:

  • turn AI governance policies into executable checks
  • produce audit-ready compliance evidence on every release
  • evaluate AI interactions against named regulatory frameworks (EU AI Act, NIST AI RMF, FERPA, fair-lending, FAA/EASA aviation, …)
  • generate Markdown, JSON, HTML, or PDF reports your auditor can read
  • integrate AI compliance checks into CI/CD
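In CI/CD, the integration can be as small as a gate script that parses a generated JSON report and fails the build on any violation. A minimal sketch; the `results`/`passed` field names here are illustrative placeholders, not AICertify's actual report schema:

```python
import json
import sys
from pathlib import Path


def compliance_gate(report_path: str) -> int:
    """Exit-code gate for CI: 0 if every policy passed, 1 otherwise.

    Assumes an illustrative report shape (adapt to the real schema):
    {"results": [{"policy": "...", "passed": true}, ...]}
    """
    report = json.loads(Path(report_path).read_text())
    failures = [r["policy"] for r in report.get("results", []) if not r.get("passed")]
    for policy in failures:
        print(f"FAIL: {policy}", file=sys.stderr)
    return 1 if failures else 0
```

A CI job would run the evaluation, then call this gate on the JSON report and propagate its return value as the process exit code.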

AICertify is part of the Open Policy Agent ecosystem — built on the same policy engine that powers Kubernetes admission, microservice authorisation, and infrastructure governance at scale.

If AICertify helps you, please star the repo. It helps AI governance and policy-as-code practitioners discover the project.


Quick Start

# 1. Install AICertify (~3–5 min on first install; pulls langchain + transformers)
pip install aicertify

# 2. Install the OPA binary, one-time (~80 MB)
sudo curl -L https://openpolicyagent.org/downloads/latest/opa_linux_amd64 -o /usr/local/bin/opa && sudo chmod +x /usr/local/bin/opa

# 3. Run the bundled demo — no contract file, no API keys, ~10 seconds
aicertify demo

aicertify demo loads a bundled sample contract, evaluates it against the EU AI Act policy set via OPA, and writes aicertify_demo_report.md to the current directory. Open the report — that's what your audit deliverable looks like.

aicertify demo recording — banner, spinners, evaluation progress, generated report path

For richer evaluations (LangFair fairness metrics, DeepEval content-safety scoring, PDF reports), see examples/quickstart.py and the forkable example bots — each ships an input_contract.json, a policy_config.yaml, and a run.py.

Minimal Python usage

from aicertify import regulations, application

# 1. Pick the regulations you want to certify against
regs = regulations.create("my_regulations")
regs.add("eu_ai_act")

# 2. Wrap your AI app
app = application.create(
    name="customer-support-bot",
    model_name="gpt-4o",
    model_version="2024-08-06",
)

# 3. Feed it real interactions
app.add_interaction(
    input_text="I want a refund for my order",
    output_text="I can help with that. Could you share your order number?",
)

# 4. Evaluate and get reports back (evaluate() is a coroutine; call it
#    from an async context, e.g. via asyncio.run)
await app.evaluate(regulations=regs, report_format="pdf", output_dir="reports")

That's the whole loop. Contract → interactions → evaluate → report.
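The `add_interaction` step extends naturally to captured traffic. A sketch of bulk-loading logged prompt/response pairs; the JSON-lines layout and the "prompt"/"response" key names are assumptions about your capture pipeline, not an AICertify requirement:

```python
import json
from typing import Iterable


def load_interactions(app, log_lines: Iterable[str]) -> int:
    """Feed captured prompt/response pairs into an AICertify application.

    Each non-blank line is one JSON object. The "prompt"/"response"
    keys are a hypothetical capture format; adapt to what your app logs.
    Returns the number of interactions added.
    """
    count = 0
    for line in log_lines:
        if not line.strip():
            continue  # tolerate blank lines in the log
        record = json.loads(line)
        app.add_interaction(
            input_text=record["prompt"],
            output_text=record["response"],
        )
        count += 1
    return count
```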


Why AICertify?

Most AI governance programs live in PDFs, spreadsheets, and policy documents. They describe what should happen but do not prove what did.

AICertify turns governance rules into executable policy checks.

Instead of saying:

"Our chatbot follows our responsible AI policy."

You can produce:

"Here is the captured interaction, the policy version, the OPA evaluation result, and the generated audit report."

AICertify is for AI teams, governance teams, auditors, and platform engineers who need AI compliance evidence that can be read, run, reviewed, and repeated.

See the full positioning in docs/why-aicertify.md on GitHub.


Compared with alternatives

Most AI-governance tooling is either:

  • A vendor SaaS that locks your audit trail behind a login (Credo AI, Holistic AI), or
  • A research toolkit focused on a single dimension — fairness metrics (Fairlearn, AI Fairness 360) or explainability (Microsoft RAI Toolbox).

Neither produces the document a regulator actually asks for: evidence that you tested this AI system against a named regulation, with reproducible policies and a dated report.

| | AICertify | Fairlearn / AIF360 | MS RAI Toolbox | Credo AI |
|---|---|---|---|---|
| Open source | ✅ Apache 2.0 | ✅ MIT | ✅ MIT | ❌ Closed |
| On-prem / air-gapped | ✅ | | | |
| Named regulatory frameworks | EU AI Act, NIST RMF, Brazil AI Bill, India DPDP, +11 more | ❌ (fairness only) | ❌ (toolkit) | |
| Policy-as-code (auditable, diff-able) | ✅ OPA / Rego | | | |
| Industry verticals out of the box | Aviation, Banking, Healthcare, Automotive, Education | | | Partial |
| Generates audit-ready reports | ✅ PDF / MD / JSON / HTML | | | Partial |
| Custom policies | ✅ Drop a .rego file | N/A | | ✅ (paid) |

For OPA / Rego users

If you already use OPA, AICertify gives you the AI-application context layer OPA was missing. You bring your AI app; AICertify captures the interactions, feeds them through the OPA engine against AI-specific Rego policies sourced from gopal, and emits audit-ready evidence.

The whole stack is policy-as-code — same workflow you already use for Kubernetes admission, microservice authorisation, and infrastructure governance.
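If you want to drive the engine by hand, the same binary the Quick Start installs can be invoked directly with the standard `opa eval` CLI. A sketch of shelling out to it from Python; this is a plain OPA invocation, not AICertify's internal call path, and `data.example.allow` is a placeholder query:

```python
import json
import subprocess


def opa_eval_cmd(query: str, input_file: str, policy_file: str) -> list:
    """Build an `opa eval` argv using the standard OPA CLI flags."""
    return [
        "opa", "eval",
        "--format", "json",      # machine-readable result
        "--input", input_file,   # captured interaction(s) as JSON
        "--data", policy_file,   # your .rego policy
        query,                   # e.g. "data.example.allow"
    ]


def opa_eval(query: str, input_file: str, policy_file: str) -> dict:
    """Run the opa binary (must be on PATH) and return the parsed result."""
    proc = subprocess.run(
        opa_eval_cmd(query, input_file, policy_file),
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)
```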


Forkable examples

Copy any of the forkable example bots in the GitHub repository and substitute your own contract. Each example ships an input_contract.json, a policy_config.yaml, a sample_interactions.json, an expected_report.md, and a run.py you can execute directly.


See the output

You don't have to install anything to see what AICertify produces. A sample pre-generated PDF is in the repo.


More on GitHub


License

Apache 2.0 — see the LICENSE file.



Download files

Download the file for your platform.

Source Distribution

aicertify-0.7.3.tar.gz (197.0 kB)

Built Distribution

aicertify-0.7.3-py3-none-any.whl (269.8 kB)

File details

Details for the file aicertify-0.7.3.tar.gz.

File metadata

  • Download URL: aicertify-0.7.3.tar.gz
  • Upload date:
  • Size: 197.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for aicertify-0.7.3.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | 96116caf7c8e18f90aa4bbc0e2418848a016d2b0df7a6c614fe63565b362ef0f |
| MD5 | 9429ca7a609d23fb05e2cb2c6cb3b67e |
| BLAKE2b-256 | f8797209541b4dcbbee988df91133a9a5ff28b5a2b20af866c07343df40a6e22 |


File details

Details for the file aicertify-0.7.3-py3-none-any.whl.

File metadata

  • Download URL: aicertify-0.7.3-py3-none-any.whl
  • Upload date:
  • Size: 269.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for aicertify-0.7.3-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9dd54bcabddbc746b7f35096ffb423c17dd46ea07ff42540f7dbeb3d40552d9e |
| MD5 | 2d1463e4ba03e60eaed9f67e8a4e3f1b |
| BLAKE2b-256 | 18745e41c9fa566985c27ef858586ae8a326f20caac04a29a3e6a699ccee0f34 |

