The AI memory that argues with itself. Local-first. Model-agnostic. Every answer with a receipt.

Vera


AI with clerk discipline.

Reads instructions literally. Verifies names before writing them into documents. Tags claims by confidence so you see guesses on sight. Corrections stick. Silent failures surface. Audits itself on command.

Works in Claude, ChatGPT, Gemini, any AI chat. No install to try.

Try it in 30 seconds → · On your phone →


What a real audit looks like

One run on a payments-service conversation (full artifacts):

What the user didn't push back on but should have

The user said "400ms of the p99 is JSON serialisation of a 50 kB payload" and neither party questioned how unusual that number is. 400 ms to serialize 50 kB of JSON in Python stdlib is extremely slow — suspiciously so. That's roughly 0.1 MB/s serialization throughput. Even stdlib json should handle 50 kB in single-digit milliseconds on modern hardware. Vera should have flagged it as implausible. This could indicate a measurement error, repeated serialization, or a much larger effective payload.

The first model didn't catch the implausible number. The auditor caught it, worked out the implied throughput, and pointed at three plausible root causes. That's the point — a second pass, different loyalty.
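The auditor's arithmetic is easy to reproduce. A minimal Python sketch of the same plausibility check (the payload shape below is invented for illustration; only its size matters):

```python
import json
import time

# The quoted numbers: a 50 kB payload, 400 ms of p99 spent serializing.
payload_kb = 50
reported_ms = 400
implied_mb_per_s = (payload_kb / 1000) / (reported_ms / 1000)
print(f"implied throughput: {implied_mb_per_s:.3f} MB/s")  # 0.125 MB/s

# Sanity check: time stdlib json on a comparable payload.
doc = {"rows": [{"id": i, "name": "x" * 24} for i in range(1000)]}
size_kb = len(json.dumps(doc)) / 1000
start = time.perf_counter()
for _ in range(100):
    json.dumps(doc)
elapsed_ms = (time.perf_counter() - start) / 100 * 1000
print(f"json.dumps of a {size_kb:.0f} kB payload: {elapsed_ms:.2f} ms")
```

On any modern machine the measured time comes in orders of magnitude under 400 ms, which is exactly why the reported number deserved a flag.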


What changes

Ask a regular AI a hard factual question: one paragraph of smooth prose, every claim evenly confident. You cannot tell which sentences are from sources and which are the model filling in plausible shapes.

Ask Vera the same question: every factual line tagged. [CITED: FOMC minutes Oct 2024] on what it actually sourced. [INFERRED low] on what it is extrapolating. [ASSUMED] on what it guessed. You see which lines to trust on sight.
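Those tags are machine-checkable as well as human-readable. A sketch of classifying each line by its tag (the tag grammar is inferred from the examples above; the regex and helper are illustrative, not Vera's code):

```python
import re

# Hypothetical helper: bucket each line of a Vera-style answer by its
# confidence tag. Untagged lines surface as UNTAGGED.
TAG_RE = re.compile(r"\[(CITED:[^\]]+|INFERRED(?:\s+\w+)?|ASSUMED)\]")

def classify(answer: str) -> list[tuple[str, str]]:
    out = []
    for line in answer.splitlines():
        m = TAG_RE.search(line)
        tag = m.group(1).split(":")[0].split()[0] if m else "UNTAGGED"
        out.append((tag, line))
    return out

answer = (
    "Rates held at 4.75-5.00%. [CITED: FOMC minutes Oct 2024]\n"
    "A December cut is likely. [INFERRED low]\n"
    "Markets have priced this in. [ASSUMED]"
)
for tag, line in classify(answer):
    print(f"{tag:>9}  {line}")
```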

Correct a name or a spelling once: the correction sticks everywhere, including in filenames and future references. Ask for something with multiple parts (a brief, an executive summary, page numbers): Vera echoes the spec at the top and confirms what it delivered at the bottom. Negations in your instructions ("did not", "without") stay negated. Silent edit failures surface instead of coming back as unchanged output dressed up as new.

Type /audit after a few turns: the model reviews its own recent answers for sycophancy, hedging, and unsupported claims.

Why this exists

  • OpenAI rolled back a ChatGPT update in 2025 for being too sycophantic.
  • Lawyers have been fined for fake AI citations. Anthropic's own lawyer had to apologise for a Claude hallucination in an Anthropic filing.
  • A KPMG study found 57% of employees hide their AI use from managers, partly from fear of being caught with hallucinated output.
  • Research cited in Fortune shows AI validates users 49% more often than humans do.

Vera does not solve hallucination. It makes it visible, so you know which lines need checking. The cost of getting burned is real and recurring. The cost of Vera is thirty seconds of pasting.

Honest about the limits

  • The source inside a [CITED: ...] tag can itself be fabricated. Verify high-stakes citations yourself.
  • Enforcement in the paste-in version is self-policed, not blocking. The model can drift, especially on long chats. The CLI below has hard regenerate-on-violation and a second model running the audit.
  • Nothing leaves your existing AI chat. The prompt is extra instructions for the model you are already using.
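The gap between self-policed and hard enforcement can be sketched as a regenerate loop. This is a hypothetical outline: `call_model`, the single rule, and the retry message are stand-ins, not the CLI's actual internals.

```python
import re

# One illustrative rule: a reply must carry at least one confidence tag.
RULES = {"tag-every-claim": re.compile(r"\[(CITED|INFERRED|ASSUMED)")}

def violations(reply: str) -> list[str]:
    return [name for name, pattern in RULES.items() if not pattern.search(reply)]

def enforced_reply(call_model, prompt: str, max_retries: int = 3) -> str:
    """Hard enforcement: regenerate until the reply passes, or fail loudly."""
    reply = call_model(prompt)
    for _ in range(max_retries):
        broken = violations(reply)
        if not broken:
            return reply
        reply = call_model(f"{prompt}\n\nRegenerate; violated rules: {broken}")
    if violations(reply):
        raise RuntimeError("reply still violates rules after retries")
    return reply
```

The key design point: a violating reply is never shown to the user dressed up as a good one; it either gets regenerated or the failure surfaces as an error.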

Keep it on permanently

To avoid re-pasting every chat, put the prompt into Custom Instructions for a Claude Project, a ChatGPT GPT, or a Gemini Gem. Every new chat in that container inherits it.

CLI install

For users who want hard rule-blocking, a second independent auditor model, and markdown memory on disk.

pipx install git+https://github.com/iamitp/vera
export ANTHROPIC_API_KEY=sk-ant-...   # or OPENAI_API_KEY
vera init && vera chat

Or the one-liner:

curl -sSf https://raw.githubusercontent.com/iamitp/vera/main/install.sh | bash
vera init             # create ~/vera/ with starter rules
vera chat             # interactive chat; rules enforced, captures written to memory
vera audit            # second model audits recent transcripts, writes findings
vera audit --share    # also emits an anonymized, copy-pasteable snippet
vera rules            # print your active rules
vera status           # show paths + counts

See what the audit catches in practice: examples/ (didactic) or examples/live-demo/ (a real run against the Anthropic API, with a genuine methodological catch).

MIT licensed. Python 3.10+. Works with any LLM API key you have.


Download files

Download the file for your platform.

Source Distribution

vera_clerk-0.1.0.tar.gz (15.5 kB)

Built Distribution

vera_clerk-0.1.0-py3-none-any.whl (13.0 kB)

File details

Details for the file vera_clerk-0.1.0.tar.gz.

File metadata

  • Download URL: vera_clerk-0.1.0.tar.gz
  • Size: 15.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.13

File hashes

Hashes for vera_clerk-0.1.0.tar.gz
Algorithm Hash digest
SHA256 c0490afaf351d66b680fb4f087301833e812dc7e9f7dc457d1b53ba5681d97d7
MD5 18c8d6c7aabd20b835edf6cdc89489f1
BLAKE2b-256 c517edb6691b8f0f2e4721b9666110c60a83fd5c04f71c1af40914d44194e46e

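To check a download against the SHA256 digest above, the standard library is enough (the local filename assumes you saved the sdist under its published name):

```python
import hashlib

# SHA256 digest published for the sdist above.
EXPECTED = "c0490afaf351d66b680fb4f087301833e812dc7e9f7dc457d1b53ba5681d97d7"

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so large files don't load into memory at once.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Uncomment after downloading the sdist:
# assert sha256_of("vera_clerk-0.1.0.tar.gz") == EXPECTED
```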

File details

Details for the file vera_clerk-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: vera_clerk-0.1.0-py3-none-any.whl
  • Size: 13.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.13

File hashes

Hashes for vera_clerk-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 0e432aa4e92c9418c2439d537f81615556d5810d181269eae9da92d0d4c6421e
MD5 ae47a199aac44d0344cf3730d98c3873
BLAKE2b-256 c97e0302a1041f01d6be9f0e56b13105da2da602934e191fe5c42e9375ba9bb9
