
Decision memory for AI coding agents — semantic search across Claude Code, Gemini CLI, and Codex CLI conversations


WWT — One Brain for All Your AI Agents

Your agents share one brain. Stop re-explaining context and maintaining piles of .md files.


Supported: Claude Code · Gemini CLI · Codex CLI · Korean README


What it does

Three coding agents. Three log formats. Three sets of memory that vanish at session end. WWT collapses them into one searchable brain every agent can read from.

Claude Code ─┐
Gemini CLI  ─┼──→  one local index  ──→  any agent can recall
Codex CLI   ─┘

No more re-explaining context. No more CLAUDE.md graveyards. No more "wait, why did I choose Postgres again?"

Quick Start

pip install whatwasthat              # or: uv tool install whatwasthat
wwt setup                            # DB + hooks + MCP for every installed agent

That's it. Existing logs are auto-ingested. Future sessions auto-capture on session end.

How it works

When a session ends, the agent's hook fires. WWT parses the log, extracts code, chunks the conversation, embeds the search text locally (no API), stores the search index in ChromaDB, and preserves full raw spans in SQLite.

When you ask "how did I do X last time?" — any agent calls search_memory over MCP, gets a compact preview, and can expand the exact chunk with recall_chunk. Including the why, not just the what.

session ends → hook → parse → chunk → embed → ChromaDB + raw SQLite
question     → MCP  → search → score → preview → optional full recall
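The ingestion side of this flow can be sketched in a few lines. This is a hedged illustration, not WWT's actual code: `embed`, `chunk`, and `ingest` are made-up names, a plain dict stands in for the ChromaDB collection, and a hash stands in for the local embedding model.

```python
import hashlib
import sqlite3

def embed(text):
    # Stand-in for the local embedding model: hash the text into a tiny vector.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:8]]

def chunk(turns, size=2):
    # Fixed-size chunking of conversation turns (the real chunker is smarter).
    return [turns[i:i + size] for i in range(0, len(turns), size)]

def ingest(turns, db):
    index = {}  # stands in for the ChromaDB collection
    for i, piece in enumerate(chunk(turns)):
        text = "\n".join(piece)
        # Full raw span is preserved in SQLite...
        db.execute("INSERT INTO raw_chunks VALUES (?, ?)", (i, text))
        # ...while only the search vector goes into the index.
        index[i] = embed(text)
    return index

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raw_chunks (chunk_id INTEGER PRIMARY KEY, raw TEXT)")
index = ingest(["user: why Postgres?", "agent: JSONB support", "user: ok"], db)
```

The split matters: the vector index answers "which chunk?", while SQLite keeps the lossless original for full recall.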

Upgrading to v1.0.12

v1.0.12 changes the storage shape to preserve full raw conversations and code snippets. Re-ingest once after upgrading:

wwt reset --force
wwt setup

Why one brain matters

Without WWT → With WWT

  • Each agent forgets after every session → permanent memory across all agents
  • You re-explain context every session → the agent recalls the why automatically
  • .md files pile up unread → conversations themselves are the source of truth
  • Claude can't see what Gemini did yesterday → any agent reads any other's history

Search modes

MCP tool           When the agent calls it
search_memory      "How did I configure Redis last time?"
search_decision    "Why Redis instead of Memcached?"
search_all         Cross-project, cross-agent recall
recall_chunk       Expand a search result's chunk_id into full raw text and code snippets
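The search-then-expand pattern behind these tools can be sketched as follows. Everything here is illustrative: a dict stands in for the store, and a keyword match stands in for WWT's embedding-based ranking.

```python
# Hypothetical in-memory store: chunk_id → full raw span (normally SQLite).
MEMORY = {
    7: "user: configure Redis?\nagent: maxmemory 2gb, allkeys-lru eviction, AOF everysec",
}

def search_memory(query):
    # Real WWT ranks by embedding similarity; a keyword match stands in here.
    # Returns compact previews, not full text, to keep the agent's context small.
    hits = [cid for cid, raw in MEMORY.items()
            if any(word.lower() in raw.lower() for word in query.split())]
    return [{"chunk_id": cid, "preview": MEMORY[cid][:40]} for cid in hits]

def recall_chunk(chunk_id):
    # Expand one preview into the full raw span on demand.
    return MEMORY[chunk_id]

hits = search_memory("Redis maxmemory")
full = recall_chunk(hits[0]["chunk_id"])
```

The two-step shape is the point: cheap previews first, full raw text only for the chunk the agent actually needs.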

search_memory auto-routes — if your project filter returns nothing useful, it expands to all projects automatically (Self-ROUTE, EMNLP 2024). One call, no retries.
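The fallback behaviour can be expressed as a tiny wrapper. This is a sketch of the Self-ROUTE idea, not WWT's implementation; the function names and the toy corpus are invented.

```python
def search_with_fallback(query, project, search_fn):
    # Self-ROUTE-style fallback: try the project-scoped search first; if it
    # returns nothing useful, widen to all projects within the same call.
    hits = search_fn(query, scope=project)
    return hits if hits else search_fn(query, scope=None)

# Toy corpus: two projects, each with one remembered fix.
corpus = {"frontend": ["css grid overflow fix"], "infra-gateway": ["mTLS cert chain fix"]}

def search_fn(query, scope):
    pools = [corpus[scope]] if scope else list(corpus.values())
    return [doc for pool in pools for doc in pool if query in doc]

# Asked from "frontend", the mTLS fix is still found in "infra-gateway".
result = search_with_fallback("mTLS", "frontend", search_fn)
```

One call either way: the caller never has to notice that the scope was widened.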

Three ways to recall

1. Cross-platform: Claude reads what Codex did yesterday

You (in Claude Code):  "How did I set up the JWT refresh token last night?"
WWT:                   Found in [codex-cli] backend-api @ 2026-04-07 23:40
                       → Claude reads the original Codex conversation and answers.

2. Cross-project: Reuse a fix from another project

You (in project frontend):  "How did I solve that mTLS cert chain in another project?"
WWT:                        Found in [claude-code] infra-gateway (main) @ 2026-03-22
                            → Same fix, different repo. Recalled in seconds.

3. Both at once: Cross-platform AND cross-project

You (in project ml-pipeline, Gemini CLI):  "Why did we drop Kafka for NATS last month?"
WWT search_decision:                       Found in [claude-code] data-platform @ 2026-03-15
                                           → Decision made by Claude in another project,
                                             now answerable from Gemini in this project.

Memory that strengthens itself

Inspired by human spaced repetition: chunks you retrieve frequently decay more slowly. Decisions you actually reuse stay sharp; one-off chats fade.

On top of that, scoring is 3-axis (Generative Agents, Stanford 2023):

final = relevance × (recency + importance)

Old critical decisions beat recent chatter. Because that's how memory should work.
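One way to combine the formula above with retrieval-strengthened decay is sketched below. The half-life mechanics and all parameter values are illustrative assumptions, not WWT's actual numbers.

```python
import math
import time

def score(relevance, importance, last_access_ts, retrieval_count,
          now=None, half_life_hours=24.0):
    # Sketch of: final = relevance * (recency + importance).
    # Spaced-repetition twist (assumed, not WWT's real rule): every retrieval
    # stretches the recency half-life, so often-recalled chunks decay slower.
    now = time.time() if now is None else now
    hours_since = (now - last_access_ts) / 3600
    effective_half_life = half_life_hours * (1 + retrieval_count)
    recency = math.exp(-math.log(2) * hours_since / effective_half_life)
    return relevance * (recency + importance)

now = time.time()
# A month-old critical decision, recalled five times since...
old_decision = score(0.9, importance=1.0, last_access_ts=now - 30 * 24 * 3600,
                     retrieval_count=5, now=now)
# ...versus hour-old low-importance chatter, never recalled.
recent_chat = score(0.6, importance=0.0, last_access_ts=now - 3600,
                    retrieval_count=0, now=now)
```

Under these toy parameters the old decision outscores the fresh chatter, which is exactly the behaviour the section claims.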

Install

pip install whatwasthat              # pip
uv tool install whatwasthat          # uv (recommended)

Then run wwt setup once. It registers the MCP server and installs the auto-capture hook for every agent already on your machine — Claude Code, Gemini CLI, Codex CLI. Re-runnable, idempotent.

Requirements

  • Python 3.10+
  • OS: macOS, Linux (Windows untested)
  • Disk: ~200 MB install + ~470 MB embedding model
  • Network: 100% local after the model download. No API keys. No telemetry.

Documentation

Contributing

uv run pytest tests/ -v
uv run ruff check src/

License

Apache License 2.0

