Cryptographic audit infrastructure for AI inference — hash, sign, and blockchain-anchor every LLM call
Project description
model-witness
Cryptographic audit infrastructure for AI inference. Every prompt and response is hashed, signed, and anchored to an immutable ledger — giving you a tamper-evident chain of evidence for every AI call your application makes.
Documentation · Keys · Pricing · Changelog
For the most current reference — including new providers, framework integrations, and API changes — see modelwitness.io/docs.
Quick start
Install
```shell
pip install model-witness
```
Get an API key
Sign in and go to Keys — issue a key and attach it to a ledger. Each key is scoped to one ledger.
Log your first inference
Wrap your existing AI client with ModelWitness — a one-line change, with the same API you already know.
The record is logged asynchronously in the background.
OpenAI
```python
from model_witness import ModelWitness
import openai

client = ModelWitness(openai.OpenAI())

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)
```
Anthropic
```python
from model_witness import ModelWitness
import anthropic

client = ModelWitness(anthropic.Anthropic())

response = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.content[0].text)
```
Azure OpenAI
```python
from model_witness import ModelWitness
from openai import AzureOpenAI

client = ModelWitness(
    AzureOpenAI(
        azure_endpoint="https://<resource>.openai.azure.com/",
        api_version="2024-10-21",
    )
)

response = client.chat.completions.create(
    model="gpt-4o",  # your deployment name
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)
```
Google Gemini
```python
from model_witness import ModelWitness
from google import genai

client = ModelWitness(genai.Client())

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Hello"
)
print(response.text)
```
Capture inferences
ModelWitness client
- `client` — An `openai.OpenAI`, `openai.AzureOpenAI`, `anthropic.Anthropic`, or `google.genai.Client` instance.
- `api_key` — Your MW API key. Pass directly or set the `MW_API_KEY` env var.
- `base_url` — Optional. MW server URL. Defaults to the `MW_API_URL` env var or `https://api.modelwitness.io`.
Accessing the record ID
Every inference call produces a record_id you can store alongside your own data to link back to
the audit trail. The ID is available synchronously — no added latency to the LLM call.
Pattern A — last_record_id()
Simple. Use this when you're making one call at a time.
```python
client = ModelWitness(openai.OpenAI())

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": user_input}]
)

record_id = client.last_record_id()
db.save({"application_id": app_id, "mw_record_id": record_id})
```
last_record_id() is not thread-safe for concurrent calls. Use the callback below if you're
making simultaneous requests.
Pattern B — on_record callback
Use this for concurrent inference. Each invocation receives its own record, isolated to that call.
```python
def handle_record(record):
    db.save({"application_id": app_id, "mw_record_id": record.record_id})

client = ModelWitness(openai.OpenAI(), on_record=handle_record)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": user_input}]
)
```
The callback receives the full InferenceRecord — record_id, model_id, input_hash,
output_hash, token counts, and latency.
Environment variables
- `MW_API_KEY` — Your API key. Recommended over passing `api_key=` directly.
- `MW_API_URL` — Server base URL. Defaults to `https://api.modelwitness.io`.
- `MW_KEY_CACHE_TTL` — Seconds a successful key validation is trusted before re-checking. Defaults to `1800` (30 min). Set lower for tighter revocation windows.
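For reference, the precedence described above (explicit argument first, then environment variable, then the built-in default) can be sketched with a small standalone helper. `resolve_config` is illustrative only, not part of the SDK:

```python
import os

DEFAULT_URL = "https://api.modelwitness.io"

def resolve_config(api_key=None, base_url=None):
    """Mirror the documented precedence: explicit args win,
    then environment variables, then defaults."""
    key = api_key or os.environ.get("MW_API_KEY")
    url = base_url or os.environ.get("MW_API_URL", DEFAULT_URL)
    ttl = int(os.environ.get("MW_KEY_CACHE_TTL", "1800"))
    return key, url, ttl

# Simulate a typical deployment: only the key is set in the environment.
os.environ["MW_API_KEY"] = "mw_test_key"
os.environ.pop("MW_API_URL", None)
os.environ.pop("MW_KEY_CACHE_TTL", None)
print(resolve_config())  # ('mw_test_key', 'https://api.modelwitness.io', 1800)
```

Setting `MW_KEY_CACHE_TTL` below the default trades a little extra validation traffic for a tighter revocation window.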
Agent framework support
Plan requirements and supported frameworks may change. See modelwitness.io/docs for the current list.
Model Witness integrates natively with major agent frameworks via callbacks. Every LLM call made by your agents is captured as a separate cryptographically signed record — giving you a complete audit trail of every autonomous AI decision.
Agent framework support requires a Growth or Enterprise plan. Starter plan accounts will receive a clear error at initialization time with a link to upgrade.
Install
```shell
# Install the integration you need
pip install model-witness[langchain]    # LangChain and LangGraph
pip install model-witness[crewai]       # CrewAI
pip install model-witness[llamaindex]   # LlamaIndex

# Or install all framework integrations at once
pip install model-witness[all-frameworks]
```
LangChain
Attach `ModelWitnessCallbackHandler` to any LLM. The handler reads `MW_API_KEY` and `MW_API_URL` from the environment automatically.
```python
from model_witness.callbacks.langchain import ModelWitnessCallbackHandler
from langchain_openai import ChatOpenAI

# Reads MW_API_KEY and MW_API_URL from environment automatically
handler = ModelWitnessCallbackHandler()

llm = ChatOpenAI(callbacks=[handler])
```
LangGraph
Uses the same handler as LangChain. Attach to the LLM before passing to your StateGraph nodes.
```python
from model_witness.callbacks.langchain import ModelWitnessCallbackHandler
from langchain_openai import ChatOpenAI

# Reads MW_API_KEY and MW_API_URL from environment automatically
handler = ModelWitnessCallbackHandler()

# Attach to the LLM before passing to your StateGraph nodes
llm = ChatOpenAI(callbacks=[handler])
```
CrewAI
Instantiate `ModelWitnessEventListener`; it self-registers on the CrewAI event bus, so no changes to your Crew are needed.
```python
from model_witness.callbacks.crewai import ModelWitnessEventListener
from crewai import Agent, Task, Crew

# Reads MW_API_KEY and MW_API_URL from environment automatically
# Just instantiating is enough — self-registers on the CrewAI event bus
listener = ModelWitnessEventListener()

crew = Crew(
    agents=[...],
    tasks=[...],
)
```
LlamaIndex
Attach ModelWitnessCallbackHandler via Settings.callback_manager.
```python
from model_witness.callbacks.llamaindex import ModelWitnessCallbackHandler
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager

# Reads MW_API_KEY and MW_API_URL from environment automatically
handler = ModelWitnessCallbackHandler()

Settings.callback_manager = CallbackManager([handler])
```
Verify records
Every record goes through three independent checks. All three can be run without relying on Model Witness infrastructure.
How verification works
Signature — Each record is signed with ECDSA (secp256k1). The verifier recomputes the canonical hash of the record fields and confirms it against the stored signature and public key.
Merkle proof — Records are leaves in a Merkle tree. The verifier walks the sibling path from the leaf hash up to the root, confirming the record is part of the committed tree.
Blockchain anchor — The Merkle root is written to a smart contract on Polygon. The verifier
queries the transaction receipt directly and confirms the root appears in an on-chain
RootAnchored event — independent of Model Witness infrastructure.
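As a rough illustration of the Merkle step, the sibling-path walk can be sketched with `hashlib`. The pairing rule assumed here (plain SHA-256 over concatenated digests, with a left/right flag per sibling) is an illustration only; the actual leaf encoding and ordering are defined by the proof bundle:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_path(leaf_hash: bytes, path, root: bytes) -> bool:
    """Walk sibling hashes from the leaf up to the root.
    `path` is a list of (sibling_hash, sibling_is_left) pairs."""
    node = leaf_hash
    for sibling, sibling_is_left in path:
        # Concatenation order depends on which side the sibling sits on.
        node = sha256(sibling + node) if sibling_is_left else sha256(node + sibling)
    return node == root

# Build a tiny two-leaf tree and check membership of leaf "a".
leaf_a, leaf_b = sha256(b"a"), sha256(b"b")
root = sha256(leaf_a + leaf_b)
print(verify_merkle_path(leaf_a, [(leaf_b, False)], root))  # True
```

Because only hashes are compared, this check needs nothing but the proof bundle itself.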
CLI
The mw command is included with the package.
mw verify — Verify a record by ID. Checks signature, Merkle proof, and confirms the anchor on Polygon.
```shell
mw verify --record 833d3199-0a16-44c4-85a2-a46bf01a7031

✓ Signature
✓ Merkle proof
✓ Blockchain anchor
  └─ Polygon block #35,938,705

✓ VERIFIED
```
You can also verify a locally saved proof bundle without hitting the API:
```shell
mw verify --file proof.json
```
mw records — List recent records from your ledger.
```shell
mw records --limit 10
```
mw status — Show current ledger state — Merkle root, leaf count, and latest anchor transaction.
```shell
mw status
```
SDK
Use ModelWitnessClient for programmatic access — read records, verify, and fetch proof bundles
without wrapping inference calls.
```python
from model_witness.client import ModelWitnessClient

client = ModelWitnessClient()

# List recent records for this ledger
records = client.get_records(limit=20)  # → list of record dicts

# Ledger status (anchor state, record count, etc.)
status = client.get_status()

# Verify a record — checks signature, Merkle proof, and blockchain anchor
result = client.verify_record("833d3199-0a16-44c4-85a2-a46bf01a7031")
result.valid                    # True
result.signature_valid          # True
result.merkle_proof_valid       # True
result.blockchain_anchor_valid  # True
result.block_number             # 35938705
result.anchored_at              # "2026-03-28T14:22:01Z"
result.errors                   # []

# Fetch a raw proof bundle (signature, Merkle path, anchor details)
bundle = client.get_proof_bundle("833d3199-0a16-44c4-85a2-a46bf01a7031")
# → dict with record, merkle_path, merkle_root, anchor_tx_hash, block_number
```
Compliance reports
Supported compliance frameworks and report fields may change. See modelwitness.io/docs for the current list.
Generate structured audit reports directly from your signed, blockchain-anchored inference ledger. Every figure is a direct computation from verified records — no AI, no estimates. Reports are stored server-side and retrievable at any time.
Supported frameworks:
- `eu_ai_act` — EU AI Act, Article 13 transparency and Article 9 risk management
- `hipaa` — §164.312(b) Audit Controls
- `finra` — Rule 3110 Supervisory Systems
- `soc2` — CC7 System Operations
Generate a report
Pass the framework constant, an ISO 8601 date range, and an optional title. An unrecognised
framework raises ValueError before any network call.
```python
from model_witness.client import ModelWitnessClient
from dotenv import load_dotenv

load_dotenv()
client = ModelWitnessClient()

report = client.generate_compliance_report(
    framework="eu_ai_act",
    from_ts="2026-01-01T00:00:00+00:00",
    to_ts="2026-03-31T23:59:59+00:00",
    report_title="Q1 2026 EU AI Act Transparency Report",
)
print(report["report_id"])
print(report["summary"]["total_inferences"])
print(report["summary"]["anchor_rate_pct"])
```
Customer attestations
Some compliance fields — organization name, use case description, oversight policies — cannot be
derived from inference records. Supply them via customer_attestations. All five fields are
optional.
```python
report = client.generate_compliance_report(
    framework="hipaa",
    from_ts="2026-01-01T00:00:00+00:00",
    to_ts="2026-03-31T23:59:59+00:00",
    customer_attestations={
        "organization_name": "Acme Health Systems",
        "prepared_by": "Jane Smith, Compliance Officer",
        "use_case_description": "Clinical decision support for triage workflows",
        "human_oversight_policy": "All AI outputs reviewed by licensed clinician before action",
        "data_handling_practices": "PHI de-identified before prompt construction per §164.514(b)",
    },
)
```
List and fetch saved reports
Every generated report is persisted to the ledger. Use list_compliance_reports to see what has
been generated, and get_compliance_report to retrieve the full body of any saved report.
```python
# List report summaries for this key's ledger
reports = client.list_compliance_reports()
for r in reports:
    print(r["report_id"], r["framework"], r["generated_at"])

# Fetch a specific report in full
report = client.get_compliance_report(report_id="a3f2c1d8-...")
```
Security and privacy
Raw prompts and responses never leave your environment. The SDK computes SHA-256 hashes locally and transmits only the hashes. Anyone with the original data can verify the record matches — no data exposure required.
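A minimal sketch of that property, assuming the digest is plain SHA-256 over the UTF-8 text (the SDK's exact canonicalization of record fields may differ):

```python
import hashlib

def canonical_hash(text: str) -> str:
    # Hash computed locally; only this digest ever leaves your environment.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

prompt = "Hello"
stored_input_hash = canonical_hash(prompt)  # what the ledger records

# Later, anyone holding the original prompt can re-hash and compare:
print(canonical_hash("Hello") == stored_input_hash)  # True
```

A mismatch on re-hashing means the data, not just the record, has changed.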
Merkle roots are anchored on Polygon (PoS). Records are batched per epoch and share a single transaction, keeping anchor costs low. Polygon checkpoints to Ethereum mainnet provide long-term finality.
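The per-epoch batching can be pictured as folding every leaf hash in the epoch into one root, so a single Polygon transaction covers all of them. This sketch assumes one common convention (SHA-256 over concatenated pairs, odd node promoted unchanged); the ledger's real tree layout is defined by its proof bundles:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes into a single root hash."""
    level = list(leaves)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(sha256(level[i] + level[i + 1]))
            else:
                nxt.append(level[i])  # odd node promoted to the next level
        level = nxt
    return level[0]

# Three records in an epoch collapse to one 32-byte root.
batch = [sha256(s.encode()) for s in ("rec-1", "rec-2", "rec-3")]
root = merkle_root(batch)
```

However many records an epoch holds, only the 32-byte root goes on-chain, which is what keeps anchoring costs flat.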
ECDSA signing keys are generated and held server-side. For high-assurance deployments requiring HSM or bring-your-own-key, contact admin@modelwitness.io.
Contact
- General support: admin@modelwitness.io
- Sales and enterprise: sales@modelwitness.io
- Security disclosures: security@modelwitness.io
- Privacy and data: privacy@modelwitness.io
- Legal and terms: legal@modelwitness.io
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file model_witness-0.4.1.tar.gz.
File metadata
- Download URL: model_witness-0.4.1.tar.gz
- Upload date:
- Size: 38.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `cd48a9fb436dc362438edd041843b9cacaecb7a93bc30b8015538fd45d62d747` |
| MD5 | `d36a2cd6bbdaaf1be11aa974e89935ce` |
| BLAKE2b-256 | `2bb0b815438616984290269f75db710f6fd9a35a9fbfae735cbb713a217e00d6` |
File details
Details for the file model_witness-0.4.1-py3-none-any.whl.
File metadata
- Download URL: model_witness-0.4.1-py3-none-any.whl
- Upload date:
- Size: 28.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `66d9951e50b5a86a98bae05a829eea8dc1ca12a8c529ba62abd2bc86d8f22bf4` |
| MD5 | `fca9536691069ec8be685a0b3ee60fef` |
| BLAKE2b-256 | `6821f3a94a8ee81e01cfe38180d41a895e1f853d4ada80ebf606f169b16aa6b7` |