The AI memory that argues with itself. Local-first. Model-agnostic. Every answer with a receipt.
Vera
AI with clerk discipline.
Reads instructions literally. Verifies names before writing them into documents. Tags claims by confidence so you see guesses on sight. Corrections stick. Silent failures surface. Audits itself on command.
Works in Claude, ChatGPT, Gemini, any AI chat. No install to try.
Try it in 30 seconds → · On your phone →
What a real audit looks like
One run on a payments-service conversation (full artifacts):
What the user didn't push back on but should have
The user said "400 ms of the p99 is JSON serialisation of a 50 kB payload" and neither party questioned how unusual that number is. 400 ms to serialise 50 kB of JSON with the Python stdlib is extremely slow, suspiciously so: roughly 0.1 MB/s of serialisation throughput. Even stdlib `json` should handle 50 kB in single-digit milliseconds on modern hardware. Vera should have flagged it as implausible. This could indicate a measurement error, repeated serialisation, or a much larger effective payload.
The first model didn't catch the implausible number. The auditor caught it, worked out the implied throughput, and pointed at three plausible root causes. That's the point — a second pass, different loyalty.
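The auditor's arithmetic is easy to reproduce. A quick sanity check in plain Python (the payload below is synthetic; its item count and shape are assumptions chosen only to land near 50 kB, since the real payload is not shown):

```python
import json
import time

# Synthetic payload of roughly 50 kB (illustrative; the real payload is unknown).
payload = {"items": [{"id": i, "name": f"row-{i}", "ok": True} for i in range(1200)]}
size_kb = len(json.dumps(payload).encode()) / 1000

# Throughput implied by the reported measurement: 50 kB in 400 ms.
implied_mb_per_s = (50_000 / 1e6) / 0.400  # 0.125 MB/s

# Time what stdlib json actually takes on a comparable payload.
start = time.perf_counter()
for _ in range(100):
    json.dumps(payload)
per_call_ms = (time.perf_counter() - start) / 100 * 1000

print(f"payload size: about {size_kb:.0f} kB")
print(f"implied throughput of the report: {implied_mb_per_s:.3f} MB/s")
print(f"measured json.dumps time: {per_call_ms:.2f} ms per call")
```

On any recent machine the measured time comes out three orders of magnitude under 400 ms, which is exactly the gap the auditor flagged.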
What changes
Ask a regular AI a hard factual question: one paragraph of smooth prose, every claim evenly confident. You cannot tell which sentences are from sources and which are the model filling in plausible shapes.
Ask Vera the same question: every factual line tagged. [CITED: FOMC minutes Oct 2024] on what it actually sourced. [INFERRED low] on what it is extrapolating. [ASSUMED] on what it guessed. You see which lines to trust on sight.
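Because the tags are plain text, they are also machine-checkable. A minimal sketch of a tag scanner (illustrative only, not part of Vera; the sample answer is invented):

```python
import re

# Matches tags like [CITED: source], [INFERRED low], [ASSUMED].
TAG = re.compile(r"\[(CITED|INFERRED|ASSUMED)(?:[:\s]\s*([^\]]*))?\]")

answer = """\
The Fed held rates steady. [CITED: FOMC minutes Oct 2024]
Markets expect one cut next quarter. [INFERRED low]
The next statement will use similar language. [ASSUMED]
Rates matter for mortgages."""

for line in answer.splitlines():
    m = TAG.search(line)
    label = m.group(1) if m else "UNTAGGED"
    detail = (m.group(2) or "") if m else ""
    print(f"{label:9}| {detail:22}| {line}")
```

An untagged factual line shows up immediately, which is the property the prompt is built around.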
Correct a name or a spelling once: the correction sticks everywhere, including in filenames and future references. Ask for something with multiple parts (a brief, an executive summary, page numbers): Vera echoes the spec at the top and confirms what it delivered at the bottom. Negations in your instructions ("did not", "without") stay negated. Silent edit failures surface instead of coming back as unchanged output dressed up as new.
Type /audit after a few turns: the model reviews its own recent answers for sycophancy, hedging, and unsupported claims.
Why this exists
- OpenAI rolled back a ChatGPT update in 2025 for being too sycophantic.
- Lawyers have been fined for fake AI citations. Anthropic's own lawyer had to apologise for a Claude hallucination in an Anthropic filing.
- A KPMG study found 57% of employees hide their AI use from managers, partly from fear of being caught with hallucinated output.
- Research cited in Fortune shows AI validates users 49% more often than humans do.
Vera does not solve hallucination. It makes it visible, so you know which lines need checking. The cost of getting burned is real and recurring. The cost of Vera is thirty seconds of pasting.
Honest about the limits
- The source inside a [CITED: ...] tag can itself be fabricated. Verify high-stakes citations yourself.
- Enforcement in the paste-in version is self-policed, not blocking. The model can drift, especially on long chats. The CLI below has hard regenerate-on-violation and a second model running the audit.
- Nothing leaves your existing AI chat. The prompt is extra instructions for the model you are already using.
Keep it on permanently
To avoid re-pasting every chat, put the prompt into Custom Instructions for a Claude Project, a ChatGPT GPT, or a Gemini Gem. Every new chat in that container inherits it.
CLI install
For users who want hard rule-blocking, a second independent auditor model, and markdown memory on disk.
pipx install git+https://github.com/iamitp/vera
export ANTHROPIC_API_KEY=sk-ant-... # or OPENAI_API_KEY
vera init && vera chat
Or the one-liner:
curl -sSf https://raw.githubusercontent.com/iamitp/vera/main/install.sh | bash
vera init # create ~/vera/ with starter rules
vera chat # interactive chat; rules enforced, captures written to memory
vera audit # second model audits recent transcripts, writes findings
vera audit --share # also emits an anonymized, copy-pasteable snippet
vera rules # print your active rules
vera status # show paths + counts
See what the audit catches in practice: examples/ (didactic) or examples/live-demo/ (a real run against the Anthropic API, with a genuine methodological catch).
MIT licensed. Python 3.10+. Works with any LLM API key you have.
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file vera_clerk-0.1.0.tar.gz.
File metadata
- Download URL: vera_clerk-0.1.0.tar.gz
- Upload date:
- Size: 15.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c0490afaf351d66b680fb4f087301833e812dc7e9f7dc457d1b53ba5681d97d7 |
| MD5 | 18c8d6c7aabd20b835edf6cdc89489f1 |
| BLAKE2b-256 | c517edb6691b8f0f2e4721b9666110c60a83fd5c04f71c1af40914d44194e46e |
File details
Details for the file vera_clerk-0.1.0-py3-none-any.whl.
File metadata
- Download URL: vera_clerk-0.1.0-py3-none-any.whl
- Upload date:
- Size: 13.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0e432aa4e92c9418c2439d537f81615556d5810d181269eae9da92d0d4c6421e |
| MD5 | ae47a199aac44d0344cf3730d98c3873 |
| BLAKE2b-256 | c97e0302a1041f01d6be9f0e56b13105da2da602934e191fe5c42e9375ba9bb9 |