
Decision memory for AI coding agents — semantic search across Claude Code, Gemini CLI, and Codex CLI conversations

Project description

WWT — One Brain for All Your AI Agents

Your agents share one brain. Stop re-explaining yourself and writing so many .md files.

PyPI version · License: Apache-2.0 · Python 3.10+

Supported: Claude Code · Gemini CLI · Codex CLI · Korean README (한국어)


What it does

Three coding agents. Three log formats. Three sets of memory that vanish at session end. WWT collapses them into one searchable brain every agent can read from.

Claude Code ─┐
Gemini CLI  ─┼──→  one local index  ──→  any agent can recall
Codex CLI   ─┘

No more re-explaining context. No more CLAUDE.md graveyards. No more "wait, why did I choose Postgres again?"

Quick Start

pip install whatwasthat              # or: uv tool install whatwasthat
wwt setup                            # DB + hooks + MCP for every installed agent

That's it. Existing logs are auto-ingested. Future sessions auto-capture on session end.

How it works

When a session ends, the agent's hook fires. WWT parses the log, extracts code, chunks the conversation, embeds the search text locally (no API), stores the search index in ChromaDB, and preserves full raw spans in SQLite.

When you ask "how did I do X last time?" — any agent calls search_memory over MCP, gets a compact preview, and can expand the exact chunk with recall_chunk. Including the why, not just the what.

session ends → hook → parse → chunk → embed → ChromaDB + raw SQLite
question     → MCP  → search → score → preview → optional full recall
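The capture path above can be sketched in a few lines. This is an illustrative sketch only, with hypothetical function names (WWT's real parser, embedder, and ChromaDB/SQLite stores are internal):

```python
# Hypothetical sketch of WWT's capture path: parse -> chunk -> index.
# Function names here are illustrative, not WWT's actual API.

def parse_log(raw: str) -> list[dict]:
    """Split a raw session log into role-tagged turns."""
    turns = []
    for line in raw.strip().splitlines():
        role, _, text = line.partition(": ")
        turns.append({"role": role, "text": text})
    return turns

def chunk(turns: list[dict], max_chars: int = 200) -> list[str]:
    """Group consecutive turns into chunks under a size budget."""
    chunks, current = [], ""
    for turn in turns:
        piece = f'{turn["role"]}: {turn["text"]}\n'
        if current and len(current) + len(piece) > max_chars:
            chunks.append(current)
            current = ""
        current += piece
    if current:
        chunks.append(current)
    return chunks

def ingest(raw: str) -> dict:
    """Index chunks for search; keep the full raw span for later recall."""
    chunks = chunk(parse_log(raw))
    # Real WWT embeds each chunk locally and writes the index to ChromaDB
    # while preserving the raw span in SQLite; here we just return both shapes.
    return {"chunks": chunks, "raw": raw}

log = "user: why postgres?\nassistant: row-level security and pgvector."
index = ingest(log)
```

The key design point survives even in miniature: the search index holds compact chunks, while the untouched raw span stays available for full recall.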

Upgrading to v1.0.12

v1.0.12 changes the storage shape to preserve full raw conversations and code snippets. Reingest once after upgrading:

wwt reset --force
wwt setup

Why one brain matters

Without WWT                                   With WWT
Each agent forgets after every session        Permanent memory across all agents
You re-explain context every session          Agent recalls the why automatically
.md files pile up unread                      Conversations themselves are the source of truth
Claude can't see what Gemini did yesterday    Any agent reads any other's history

Search modes

MCP tool          When the agent calls it
search_memory     "How did I configure Redis last time?"
search_decision   "Why Redis instead of Memcached?"
search_all        Cross-project, cross-agent recall
recall_chunk      Expand a search result's chunk_id into full raw text and code snippets

search_memory auto-routes — if your project filter returns nothing useful, it expands to all projects automatically (Self-ROUTE, EMNLP 2024). One call, no retries.
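The widening behavior can be sketched as a simple fallback. This is an assumption-laden toy (the store layout and `min_score` threshold are invented; WWT's internals differ), but it shows the one-call shape:

```python
# Toy sketch of Self-ROUTE-style widening: try the project-scoped search
# first, then fall back to all projects if nothing useful comes back.
# The store layout and min_score threshold are illustrative, not WWT's.

def search_memory(store: dict, query: str, project: str, min_score: float = 0.5) -> list[dict]:
    # `query` is shown for shape; hit scores are assumed precomputed here.
    hits = [h for h in store.get(project, []) if h["score"] >= min_score]
    if hits:
        return hits
    # Project filter found nothing useful: widen to every project, one call.
    all_hits = [h for hits in store.values() for h in hits]
    return [h for h in all_hits if h["score"] >= min_score]

store = {
    "frontend": [],  # no local match for this query
    "infra-gateway": [{"text": "mTLS cert chain fix", "score": 0.9}],
}
results = search_memory(store, "mTLS cert chain", project="frontend")
```

Because the fallback happens inside the tool, the agent never has to decide whether to retry with a broader filter.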

Three ways to recall

1. Cross-platform: Claude reads what Codex did yesterday

You (in Claude Code):  "How did I set up the JWT refresh token last night?"
WWT:                   Found in [codex-cli] backend-api @ 2026-04-07 23:40
                       → Claude reads the original Codex conversation and answers.

2. Cross-project: Reuse a fix from another project

You (in project frontend):  "How did I solve that mTLS cert chain in another project?"
WWT:                        Found in [claude-code] infra-gateway (main) @ 2026-03-22
                            → Same fix, different repo. Recalled in seconds.

3. Both at once: Cross-platform AND cross-project

You (in project ml-pipeline, Gemini CLI):  "Why did we drop Kafka for NATS last month?"
WWT search_decision:                       Found in [claude-code] data-platform @ 2026-03-15
                                           → Decision made by Claude in another project,
                                             now answerable from Gemini in this project.

Memory that strengthens itself

Inspired by human spaced repetition: chunks you retrieve often decay slower. Decisions you actually re-use stay sharp; one-off chats fade.

On top of that, scoring is 3-axis (Generative Agents, Stanford 2023):

final = relevance × (recency + importance)

Old critical decisions beat recent chatter. Because that's how memory should work.
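A minimal sketch of how these two ideas compose, assuming an exponential recency decay whose half-life stretches with each retrieval (the constants and the decay curve are illustrative, not WWT's actual tuning):

```python
import math

# Illustrative 3-axis scoring with retrieval-strengthened decay.
# Constants and the decay shape are assumptions, not WWT's real values.

def recency(age_days: float, retrievals: int, half_life: float = 7.0) -> float:
    # Each past retrieval stretches the half-life: often-used chunks fade slower.
    effective = half_life * (1 + retrievals)
    return math.exp(-age_days / effective)

def score(relevance: float, age_days: float, importance: float, retrievals: int) -> float:
    # final = relevance × (recency + importance)
    return relevance * (recency(age_days, retrievals) + importance)

# An old, important, often-recalled decision vs. fresh low-stakes chatter:
old_decision = score(relevance=0.8, age_days=60, importance=0.9, retrievals=5)
recent_chat  = score(relevance=0.8, age_days=1,  importance=0.1, retrievals=0)
```

Under these toy numbers the 60-day-old decision outranks yesterday's chatter, which is exactly the behavior the formula is built for.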

Install

pip install whatwasthat              # pip
uv tool install whatwasthat          # uv (recommended)

Then run wwt setup once. It registers the MCP server and installs the auto-capture hook for every agent already on your machine — Claude Code, Gemini CLI, Codex CLI. Re-runnable, idempotent.

Requirements

  • Python: 3.10+
  • OS: macOS, Linux (Windows untested)
  • Disk: ~200 MB install + ~470 MB embedding model
  • Network: 100% local after model download. No API keys. No telemetry.

Documentation

Contributing

uv run pytest tests/ -v
uv run ruff check src/

License

Apache License 2.0

Download files

Download the file for your platform.

Source Distribution

whatwasthat-1.0.12.tar.gz (58.9 kB)


Built Distribution


whatwasthat-1.0.12-py3-none-any.whl (42.4 kB)


File details

Details for the file whatwasthat-1.0.12.tar.gz.

File metadata

  • Download URL: whatwasthat-1.0.12.tar.gz
  • Upload date:
  • Size: 58.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for whatwasthat-1.0.12.tar.gz

Algorithm      Hash digest
SHA256         5f2a04254d72b23d861323c182bc05fb17a7cea1a2dac1bbc5b8569036ae86ee
MD5            00111e4a73e4342859e405d5f54b6a25
BLAKE2b-256    cb6671622091e7150fc58a6b2b00cff9203f5b3da0c82079b17d560ab966193c


Provenance

The following attestation bundles were made for whatwasthat-1.0.12.tar.gz:

Publisher: publish.yml on Hyuk0816/whatwasthat

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file whatwasthat-1.0.12-py3-none-any.whl.

File metadata

  • Download URL: whatwasthat-1.0.12-py3-none-any.whl
  • Upload date:
  • Size: 42.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for whatwasthat-1.0.12-py3-none-any.whl

Algorithm      Hash digest
SHA256         e6bfaea8058e2582b2a993f0b774d269415e9a5a33b02337b8b9bfee5924cf7f
MD5            e62c93bc932ce7605fb068b8caf8dd16
BLAKE2b-256    0a67b797b4a21be0f56f1d6b1c5a5392499d51cdb39ecf040f03af251e761de8


Provenance

The following attestation bundles were made for whatwasthat-1.0.12-py3-none-any.whl:

Publisher: publish.yml on Hyuk0816/whatwasthat

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
