
Cryptographic audit infrastructure for AI inference

Project description

model-witness

Model Witness provides cryptographic audit infrastructure for AI inference. Every prompt and response is hashed, signed, and anchored to an immutable ledger — giving you a tamper-evident chain of evidence for every AI call your application makes.
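The hash-chain idea behind tamper evidence can be illustrated in a few lines. This is a conceptual sketch only, not Model Witness's actual scheme (the library also signs records and anchors them to a ledger, both omitted here, and `chain_entry` is a hypothetical helper):

```python
import hashlib
import json

def chain_entry(prev_hash, prompt, response):
    """Hash one prompt/response pair and link it to the previous entry."""
    input_hash = hashlib.sha256(prompt.encode()).hexdigest()
    output_hash = hashlib.sha256(response.encode()).hexdigest()
    # Canonical JSON so the same fields always produce the same digest.
    payload = json.dumps(
        {"prev": prev_hash, "input": input_hash, "output": output_hash},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

# Each entry's hash commits to everything before it, so editing any
# earlier prompt or response changes every later hash in the chain.
h0 = chain_entry("0" * 64, "prompt 1", "response 1")
h1 = chain_entry(h0, "prompt 2", "response 2")
```

Because `h1` depends on `h0`, a verifier who trusts the latest anchored hash can detect tampering anywhere earlier in the chain.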

This package is currently in pre-alpha. Full SDK documentation and installation instructions will be published with the v0.1.0 release.

For early access enquiries contact neilwl022@gmail.com.


Accessing the record ID

Every inference call produces a record_id you can store alongside your own application data to link back to the audit trail.

Pattern A — last_record_id() (simple, single-threaded)

from model_witness import ModelWitness
import openai

# Wrap the OpenAI client; calls keep their usual signature.
client = ModelWitness(openai.OpenAI())

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": user_input}]
)

# Persist the record_id next to your own data. db, app_id, and
# user_input are placeholders for your application's own objects.
record_id = client.last_record_id()
db.save({"application_id": app_id, "mw_record_id": record_id})

The record_id is set synchronously before create() returns — there is no added latency to the LLM call.

Warning: last_record_id() is not thread-safe for concurrent inference calls. If you are making multiple simultaneous LLM calls, use the on_record callback instead — each invocation receives its own record with the correct record_id isolated to that call.

Pattern B — on_record callback (concurrent usage)

from model_witness import ModelWitness
import openai

def handle_record(record):
    # Called once per inference with that call's own record.
    # db and app_id are placeholders for your application's objects.
    db.save({"application_id": app_id, "mw_record_id": record.record_id})

client = ModelWitness(
    openai.OpenAI(),
    on_record=handle_record,
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": user_input}]
)

The on_record callback is called synchronously on the same thread as the LLM response, before the record is shipped to the server. It receives the full InferenceRecord object including record_id, input_hash, output_hash, and token counts.
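Because the callback runs on the same thread that made the LLM call, a contextvars.ContextVar set before the call is still visible inside it, which makes it easy to correlate records with your own requests under concurrency. A self-contained sketch of that pattern (the LLM call is simulated, and the callback receives a plain dict standing in for InferenceRecord):

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

request_id = contextvars.ContextVar("request_id")
saved = []  # stands in for your database

def handle_record(record):
    # on_record fires on the thread that made the LLM call, so the
    # ContextVar still identifies the originating request.
    saved.append((request_id.get(), record["record_id"]))

def do_call(req):
    request_id.set(req)
    # client.chat.completions.create(...) would run here; we simulate
    # the library invoking on_record with the finished record.
    handle_record({"record_id": f"rec-{req}"})

with ThreadPoolExecutor(max_workers=4) as ex:
    list(ex.map(do_call, ["a", "b", "c"]))
```

Each saved row pairs a request with the record produced by that request, even when calls overlap across threads.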

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

model_witness-0.4.0.tar.gz (34.7 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

model_witness-0.4.0-py3-none-any.whl (24.5 kB)

Uploaded Python 3

File details

Details for the file model_witness-0.4.0.tar.gz.

File metadata

  • Download URL: model_witness-0.4.0.tar.gz
  • Size: 34.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for model_witness-0.4.0.tar.gz

  • SHA256: 9f8c32ff4c78908930cbc88ac2aa9a6d20e50a911d6be25ef5b02b88398b7c22
  • MD5: 48403dcd4f6a7929998b1e5573131e97
  • BLAKE2b-256: 5a7df69b92155c9c3fa6cdd16131c04f9ee85f442bc913060a0cb4c1ed6130dd

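To check a downloaded file against the digests above, compute its SHA256 locally and compare. A minimal helper (the path is wherever you saved the archive):

```python
import hashlib

def sha256_hex(path):
    """Stream the file in chunks so large archives never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the SHA256 digest listed above, e.g.:
# sha256_hex("model_witness-0.4.0.tar.gz") == "9f8c32ff4c78908930cbc88ac2aa9a6d20e50a911d6be25ef5b02b88398b7c22"
```

pip can also enforce this automatically with `--require-hashes` in a requirements file.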

File details

Details for the file model_witness-0.4.0-py3-none-any.whl.

File metadata

  • Download URL: model_witness-0.4.0-py3-none-any.whl
  • Size: 24.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for model_witness-0.4.0-py3-none-any.whl

  • SHA256: 53430b64d121021a1cb7201e7a4c69815d96615afda3431cc351b996f1baa183
  • MD5: 1687d1213118d9c574f183b2715af761
  • BLAKE2b-256: a4eeba3b768b6aeb5e85ffc19569f3c75968f0c45276f8dbcc00ce0e6b5e4035

