AIR Trust Layer for OpenAI Python SDK — audit trails, PII detection, injection scanning, and HMAC-SHA256 compliance chains for EU AI Act
Project description
air-openai-trust
AIR Trust Layer for OpenAI Python SDK — EU AI Act compliance infrastructure for every OpenAI API call.
Wraps your existing OpenAI client to automatically add:
- HMAC-SHA256 tamper-evident audit chains for every API call (Art. 12)
- PII detection in prompts and responses (Art. 10)
- Prompt injection scanning (Art. 15)
- Token usage and latency tracking
- Human delegation verification (Art. 14)
- Output validation for robustness (Art. 15)
Part of the AIR Blackbox ecosystem — open-source EU AI Act compliance tooling for Python AI frameworks.
Install
pip install air-openai-trust
Quick Start
Option 1: Wrap an existing client
from openai import OpenAI
from air_openai_trust import attach_trust
client = OpenAI()
client = attach_trust(client)
# Use normally — every call is now audit-logged
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello"}],
)
Option 2: Create a pre-configured client
from air_openai_trust import air_openai_client
client = air_openai_client()
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello"}],
)
Both approaches produce .air.json audit records in the ./runs/ directory (configurable).
What It Does
Audit Trail (Art. 12 — Record-Keeping)
Every API call generates a .air.json file with:
- Timestamp, model, provider
- Token usage (prompt, completion, total)
- Latency in milliseconds
- PII alerts (if detected)
- Injection alerts (if detected)
- HMAC-SHA256 chain hash linking to the previous record
The chain hashes create a tamper-evident sequence — modifying any record invalidates all subsequent hashes. This gives auditors cryptographic proof that logs haven't been altered.
PII Detection (Art. 10 — Data Governance)
Automatically scans prompts and responses for:
- Email addresses
- Social Security Numbers
- Phone numbers
- Credit card numbers
Detected PII is logged as alerts (with values redacted) in the audit record. Your API calls still work normally.
Prompt Injection Scanning (Art. 15 — Robustness)
Scans for common injection patterns like:
- "ignore previous instructions"
- "you are now"
- "system prompt:"
- "new instructions:"
- "override:"
Detected injections are logged as alerts in the audit record.
Output Validation
from air_openai_trust import validate_output
result = validate_output("Some LLM response text")
# Returns: {"safe": True, "pii_alerts": [], "injection_alerts": []}
Human Delegation (Art. 14 — Human Oversight)
from air_openai_trust import check_delegation
# Verify human authorization before agent actions
if check_delegation(authorized_by="jason@example.com", action="send_email"):
# proceed with action
pass
Configuration
Audit directory
# Via argument
client = attach_trust(client, runs_dir="./my-audit-logs")
# Via environment variable
# export AIR_RUNS_DIR=./my-audit-logs
client = attach_trust(client)
Signing key
# Via environment variable (recommended)
# export TRUST_SIGNING_KEY=my-secret-key
# Default key is used if not set
Non-Blocking
All audit logging is non-blocking. If logging fails for any reason, your OpenAI API calls still work normally. The trust layer never crashes your application.
Supported APIs
| API | Audit Logged |
|---|---|
chat.completions.create() |
Yes — full scanning + audit chain |
embeddings.create() |
Yes — usage tracking + audit chain |
| All other endpoints | Passed through to OpenAI client |
Audit Record Format
{
"version": "1.0.0",
"run_id": "uuid",
"timestamp": "2026-03-30T12:00:00+00:00",
"type": "llm_call",
"provider": "openai",
"model": "gpt-4o-mini",
"tokens": {
"prompt": 10,
"completion": 25,
"total": 35
},
"duration_ms": 450,
"status": "success",
"message_count": 1,
"pii_alerts": [],
"injection_alerts": [],
"chain_hash": "a1b2c3..."
}
Full Ecosystem
air-openai-trust is one of 9 PyPI packages in the AIR Blackbox ecosystem:
| Package | Purpose |
|---|---|
air-compliance |
CLI scanner — air-compliance scan . |
air-blackbox |
Governance control plane |
air-blackbox-mcp |
MCP server for AI editors |
air-blackbox-sdk |
Python SDK |
air-langchain-trust |
Trust layer for LangChain |
air-crewai-trust |
Trust layer for CrewAI |
air-anthropic-trust |
Trust layer for Anthropic Claude SDK |
air-adk-trust |
Trust layer for Google ADK |
air-openai-trust |
Trust layer for OpenAI SDK (this package) |
Links
- Website: airblackbox.ai
- GitHub: github.com/airblackbox/air-openai-trust
- Demo: airblackbox.ai/demo
License
Apache 2.0
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Filter files by name, interpreter, ABI, and platform.
If you're not sure about the file name format, learn more about wheel file names.
Copy a direct link to the current filters
File details
Details for the file air_openai_trust-0.1.0.tar.gz.
File metadata
- Download URL: air_openai_trust-0.1.0.tar.gz
- Upload date:
- Size: 6.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.2
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
a0cefe5748871f5ee8ac4806b5eb7423f366c77774bb76612f4808dd0a1c123a
|
|
| MD5 |
81180c0918b6ae7ddacb54ff5bec371a
|
|
| BLAKE2b-256 |
923cc29f2c1cf55377cc50615a2af430382bf2b7435d92963e4d97ecbc91c4b3
|
File details
Details for the file air_openai_trust-0.1.0-py3-none-any.whl.
File metadata
- Download URL: air_openai_trust-0.1.0-py3-none-any.whl
- Upload date:
- Size: 7.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.2
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
012851e4e8382e7d02fac22d98996f41aaa8d832ec741f2a3fc0397db9612e80
|
|
| MD5 |
759bf7fbe3440965cd75996554fd9436
|
|
| BLAKE2b-256 |
1662fe4d0b061e8ef54604b66f7912f3faa4e4c5aa3729f6f5da1ce94c6f7c01
|