
Institutional memory for AI agent teams: multi-model, cross-project, token-budgeted.

Project description

Memee


One lesson. Every agent. Every team. Every model.

Your agents forget. Every session, every project, every vendor swap: gone. Memee writes it down once. Claude proves it. GPT confirms it. Tomorrow Gemini opens a different repo on a different team and starts already knowing.

pipx install memee         # recommended for a CLI tool
# already installed? pipx upgrade memee

For teams and companies. This OSS release is single-user and self-hosted. If you want the same memory shared across a whole team — with cross-developer, cross-agent, cross-project, cross-model canon building into company-wide institutional knowledge — there's a paid Team edition at memee.eu. Same engine, plus SSO, audit log, and shared scope. Flat $49 / month for up to fifteen seats, or $12k / year Enterprise with SOC 2 and air-gap.


What Memee actually does

Three jobs. Executed relentlessly.

Records.

Every pattern, every decision, every near-miss. One turn at a time, across every agent on every project.

7-task A/B: time −71 %, iterations −65 %, mistakes 0.

Routes.

Not a dump. A briefing. At task start, the router picks the 5–7 memories the agent actually needs — inside a hard 500-token budget. Your CLAUDE.md grows forever. Memee doesn't.

Measured ~40 tokens per task against a ~2,160-token median baseline.
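The budget-capped selection described above can be sketched in a few lines. This is an illustrative sketch only, not memee's actual router; the names `Memory`, `route_briefing`, and the 4-chars-per-token heuristic are assumptions for the example.

```python
# Illustrative sketch of a token-budgeted memory router.
# Names (Memory, route_briefing) are hypothetical, not memee's API.
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    relevance: float  # score against the current task; higher is better

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

def route_briefing(memories: list[Memory], budget: int = 500,
                   max_items: int = 7) -> list[Memory]:
    """Pick the most relevant memories that fit inside a hard token budget."""
    picked, spent = [], 0
    for m in sorted(memories, key=lambda m: m.relevance, reverse=True):
        cost = estimate_tokens(m.text)
        if spent + cost > budget or len(picked) >= max_items:
            continue
        picked.append(m)
        spent += cost
    return picked
```

The key property is the hard cap: however large the library grows, the briefing never exceeds the budget or the item limit.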

Scores.

A lesson earns trust by surviving. A second model family agrees: confidence ×1.3. A second project re-uses it: ×1.5. Earn both and it climbs the ladder — hypothesis, tested, validated, canon.
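The multiplier stacking works out as plain arithmetic: ×1.3 for cross-model agreement and ×1.5 for cross-project re-use compound to ×1.95. A minimal sketch, assuming hypothetical names and stage thresholds (memee's actual cutoffs are not published here):

```python
# Illustrative sketch of stacking confidence multipliers.
# Function names, field names, and stage thresholds are hypothetical.
CROSS_MODEL = 1.3    # a second model family agrees
CROSS_PROJECT = 1.5  # a second project re-uses the lesson

def score(base: float, cross_model: bool, cross_project: bool) -> float:
    c = base
    if cross_model:
        c *= CROSS_MODEL
    if cross_project:
        c *= CROSS_PROJECT
    return round(min(c, 1.0), 3)  # cap confidence at 1.0

def stage(confidence: float) -> str:
    """Map a confidence value onto the lifecycle ladder."""
    if confidence >= 0.9:
        return "canon"
    if confidence >= 0.7:
        return "validated"
    if confidence >= 0.5:
        return "tested"
    return "hypothesis"
```

With these assumed thresholds, a 0.5-confidence lesson that earns both multipliers lands at 0.975 and climbs straight to canon.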

One canon. Four model families. Seventeen engines.


Install and first use — sixty seconds

pipx install memee
memee setup

# Record something you just learned.
memee record pattern "retry with jitter" \
  --tags reliability,http \
  -c "Exponential backoff, capped at 30s, idempotent verbs only."

# Find it again.
memee search "retry"

# Wire Claude Code / MCP, run a health check.
memee doctor

That's it. Memory lives in ~/.memee/memee.db. No account. Core read/write is fully local. Vector embeddings are optional — on by default via sentence-transformers, which fetches an ~80 MB model on first use. Set TRANSFORMERS_OFFLINE=1 to skip.


The architecture, on one page

Small engines on SQLite + FTS5 + a 384-dim embedding space.

  • Router: task-aware briefing, capped at the token budget.
  • Quality gate: validates, deduplicates, and rates every incoming memory before it earns a row.
  • Confidence scoring: adaptive. Cross-project ×1.5, cross-model ×1.3, both stacked ×1.95.
  • Lifecycle: hypothesis → tested → validated → canon → deprecated. Old advice ages out; good advice gets promoted.
  • Dream mode: nightly. Connects related memories, surfaces contradictions, elevates canon.
  • Propagation: a validated pattern auto-pushes to projects with a matching stack or tags. Fix once, benefit everywhere.
  • Review: git diff | memee review - scans a changeset against known anti-patterns. Institutional memory enters code review.
  • CMAM bridge: pushes canon to Anthropic's Managed Agents Memory at /mnt/memory/, so Claude sees canon on turn one with no MCP round-trip.
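The propagation rule — a validated pattern reaches every project sharing its stack or tags — reduces to a set intersection. A minimal sketch, with hypothetical names (`Pattern`, `Project`, `propagation_targets` are not memee's internals):

```python
# Illustrative sketch of tag-based propagation targeting.
# All names here are hypothetical, not memee's internal API.
from dataclasses import dataclass

@dataclass
class Pattern:
    name: str
    tags: set[str]
    status: str  # hypothesis / tested / validated / canon / deprecated

@dataclass
class Project:
    name: str
    stack: set[str]

def propagation_targets(pattern: Pattern, projects: list[Project]) -> list[str]:
    """Only validated-or-better patterns propagate, and only to
    projects whose stack shares at least one tag with the pattern."""
    if pattern.status not in ("validated", "canon"):
        return []
    return [p.name for p in projects if pattern.tags & p.stack]
```

The status gate matters: a hypothesis never leaves its home project, no matter how well its tags match.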

Deeper notes: CLAUDE.md. CMAM spec: docs/cmam.md. Review engine: docs/review-fixes.md.


The token math

Numbers below are internal simulations and measured benchmarks, not independent third-party evaluations. Treat them as suggestive, not conclusive.

The thing Memee saves isn't the first page. It's the slope.

  • Without Memee, median: ~2,160 tokens per turn. That's a CLAUDE.md / AGENTS.md across 27 popular OSS repos (langchain, vercel/ai, prisma, zed, openai/codex, and others), sampled via gh api. Claude Code and Cursor load it in full on every session.
  • Without Memee, grown teams: 6k–15k. p95 of the sample hits 9,600. One published outlier reached 42,000.
  • With Memee: 500-token cap, measured average ~40 tokens per briefing (min 18, max 67 across 10 task queries on a 500-pattern corpus).
  • So the saving, computed conservatively against the full 500-token cap rather than the measured ~40-token average: ≈77 % at the median, 95 % at a 10k-grown team, and just under 99 % at the 42k outlier (against the measured average, all three exceed 98 %). And unlike CLAUDE.md, it's bounded. Your library grows. Per-turn context doesn't.
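The percentages above are plain arithmetic and easy to sanity-check, both against the conservative 500-token cap and against the measured ~40-token average:

```python
# Sanity-check of the per-turn savings quoted above.
# Plain arithmetic, not memee code.
def saving(baseline: int, per_turn: int) -> float:
    """Percentage of baseline context load saved per turn."""
    return round(100 * (baseline - per_turn) / baseline, 1)

for baseline in (2_160, 10_000, 42_000):
    capped = saving(baseline, 500)   # worst case: full budget consumed
    measured = saving(baseline, 40)  # measured average briefing size
    print(baseline, capped, measured)
```

The gap between the two columns is why the cap-based figure is the honest floor: the router rarely spends its whole budget.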

Reproduce locally:

memee benchmark          # OrgMemEval v1.0
pytest tests/ -v         # full suite

Full methodology + per-repo file sizes: docs/benchmarks.md.


Benchmarks

  • OrgMemEval v1.0: 92.4 / 100 across propagation, avoidance, maturity, onboarding, recovery, calibration, synthesis, research. Competitors on the same scenarios: MemPalace 0.9, Letta 1.3, Zep 2.3, Mem0 3.5 (the closest).
  • 7-task A/B (with / without Memee): time −71 %, iterations −65 %, quality 56 % → 93 %, ROI ≈ 10.7× at the $49 / month Team tier.
  • GigaCorp simulation, 100 projects, 100 agents, 18 months: incidents 12/mo → 3/mo, annual ROI ≈ 3× at the same flat Team tier.
  • Retrieval: hit@1 = 100 % on a 12-memory routing benchmark after the recent ranking fix.

Using it with Claude, GPT, Gemini

An MCP server with 24 tools ships with the install. Drop this into ~/.claude/settings.json — or the Cursor / Continue / any MCP-capable client equivalent:

{
  "mcpServers": {
    "memee": { "command": "memee", "args": ["serve"] }
  }
}

Memee auto-detects the caller's model family from MEMEE_MODEL, ANTHROPIC_MODEL, or OPENAI_MODEL and tags every write with source_model. That's how confidence scoring knows when Claude and Gemini agree — and when they don't.
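The detection order stated above (MEMEE_MODEL first, then ANTHROPIC_MODEL, then OPENAI_MODEL) can be sketched as follows. The prefix-to-family mapping here is an assumption for illustration; memee's real detection logic is internal.

```python
import os

# Illustrative sketch of model-family detection from environment variables.
# Precedence follows the documented order: MEMEE_MODEL wins.
# The FAMILIES mapping is a hypothetical example, not memee's actual table.
ENV_VARS = ("MEMEE_MODEL", "ANTHROPIC_MODEL", "OPENAI_MODEL")
FAMILIES = {
    "claude": "anthropic",
    "gpt": "openai",
    "gemini": "google",
    "llama": "meta",
}

def detect_model_family(environ=os.environ) -> str:
    for var in ENV_VARS:
        model = environ.get(var, "").lower()
        for prefix, family in FAMILIES.items():
            if model.startswith(prefix):
                return family
    return "unknown"
```

Tagging each write with the detected family is what lets the scorer treat Claude-plus-Gemini agreement as cross-model confirmation rather than a repeat vote.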

Quick CLI tour:

memee brief --task "write unit tests"   # PUSH: routed briefing
memee check "about to add eval() here"  # PULL: anti-pattern check
memee propagate                         # cross-project diffusion
memee dream                             # nightly: connect, contradict, promote
memee cmam sync                         # push canon to /mnt/memory/ for Claude

Pricing

Flat per team. Same engine in every tier.

Free: $0 forever · MIT
  For solo developers. Self-hosted; full engine, local scope.
  Includes: router, quality gate, dream mode, CMAM sync, all 4 model families.

Team: $49 / month flat, up to 15 seats, annual
  For teams that want shared memory, SSO, and an audit trail.
  Includes: everything in Free, plus team/org scope with promotion workflows, SSO (SAML / OIDC), RBAC, audit log export, Postgres / Turso backend, multi-agent dashboard, 24h SLA.

Enterprise: from $12k / year, unlimited seats
  For regulated industries: air-gap, SOC 2.
  Includes: everything in Team, plus SOC 2 Type II, DPA, SCIM, on-prem license key, dedicated CSM, 4h SLA, custom MCP integrations.

Between fifteen and a hundred seats, and no SOC 2 needed? Email info@memee.eu for a custom Growth plan.

Memee is memory, not model. Value scales sublinearly with headcount — one canon serves the whole team — so pricing is flat, not per-seat.


Contributing

PRs welcome. Before opening a large one, a short issue describing the direction saves everyone a round-trip.

pip install -e ".[dev]"
pytest tests/ -v

Style: type hints, English docstrings, 100-char lines, ruff clean. New engines live in src/memee/. Every new behaviour wants a test in tests/.


License

Memee core is MIT. The optional memee-team package is proprietary, distributed under a separate commercial EULA. See memee.eu for the terms.


Built by people who stopped teaching the same lesson to every new agent.



Download files

Download the file for your platform.

Source Distribution

memee-1.1.0.tar.gz (303.8 kB)

Uploaded Source

Built Distribution


memee-1.1.0-py3-none-any.whl (151.8 kB)

Uploaded Python 3

File details

Details for the file memee-1.1.0.tar.gz.

File metadata

  • Download URL: memee-1.1.0.tar.gz
  • Upload date:
  • Size: 303.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for memee-1.1.0.tar.gz
Algorithm Hash digest
SHA256 15ab610f0907654c89795463a6af98834ff63f466d3fcf51b8115995c9a3d613
MD5 77f4eb459ffeebbf96368fee4d5fe0c2
BLAKE2b-256 40bee356ec48129633fa6bb74606b15779e0c143a218158b3b88513539d7bb7e


File details

Details for the file memee-1.1.0-py3-none-any.whl.

File metadata

  • Download URL: memee-1.1.0-py3-none-any.whl
  • Upload date:
  • Size: 151.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for memee-1.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 1550581b9b92f8b8aeac8f0417454e967a35f8f25795eb5017d611d7d09bfa27
MD5 78946b27517641deb15af3085f6241e3
BLAKE2b-256 d1d5480967c65b93bb677cc20d5e3fed25a77387381fbfc0e5b41fc61ed40f24

