
Remembra - AI Memory Layer

Persistent memory for AI applications. Self-host in 5 minutes.

PyPI License: MIT Python 3.11+

What Is This?

Remembra is a universal memory layer for LLMs. It solves a fundamental problem: LLMs forget everything between sessions.

from remembra import Memory

memory = Memory(user_id="user_123")

# Store memories
memory.store("User prefers dark mode and works at Acme Corp")

# Recall with context
result = memory.recall("What are the user's preferences?")
print(result.context)
# → "User prefers dark mode. Works at Acme Corp."

MCP Server (Claude Code / Cursor)

Remembra ships with a built-in Model Context Protocol server. Any MCP-compatible AI assistant (Claude Code, Claude Desktop, Cursor, etc.) can use it as persistent memory.

Setup

pip install remembra[mcp]

Add to your Claude Code config:

claude mcp add remembra \
  -e REMEMBRA_URL=http://localhost:8787 \
  -e REMEMBRA_API_KEY=your_key \
  -- remembra-mcp

Or add manually to .mcp.json in your project:

{
  "mcpServers": {
    "remembra": {
      "command": "remembra-mcp",
      "env": {
        "REMEMBRA_URL": "http://localhost:8787",
        "REMEMBRA_API_KEY": "your_key"
      }
    }
  }
}

MCP Tools

| Tool | Description |
| --- | --- |
| `store_memory` | Save facts, decisions, and context to persistent memory |
| `recall_memories` | Hybrid search (semantic + keyword) across all memories |
| `forget_memories` | GDPR-compliant deletion by ID, entity, or all |
| `health_check` | Verify server connection and health |

MCP Resources

| Resource | Description |
| --- | --- |
| `memory://recent` | Last 10 stored memories |
| `memory://status` | Server status and config |

Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| `REMEMBRA_URL` | `http://localhost:8787` | Remembra server URL |
| `REMEMBRA_API_KEY` | (no default) | API key for authentication |
| `REMEMBRA_USER_ID` | `default` | User ID for memory isolation |
| `REMEMBRA_PROJECT` | `default` | Project namespace |
| `REMEMBRA_MCP_TRANSPORT` | `stdio` | Transport: `stdio`, `sse`, or `streamable-http` |
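
For example, to run the MCP server against a local instance over SSE instead of the default stdio transport (values taken from the defaults documented above):

```
export REMEMBRA_URL=http://localhost:8787
export REMEMBRA_API_KEY=your_key
export REMEMBRA_MCP_TRANSPORT=sse
remembra-mcp
```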

Why We're Building This

The Problem

Every AI app needs memory. Developers hack together solutions using vector databases, embeddings, and custom retrieval logic. It's complex, fragmented, and everyone rebuilds the same thing.

Current Solutions Fall Short

  • Mem0: Pricing jumps from $19 to $249, and self-hosting is complex
  • Zep: Academic, complex to deploy
  • Letta: Not production-ready
  • LangChain Memory: Too basic, no persistence

Our Approach

  • Self-host in 5 minutes: One Docker command, everything bundled
  • MCP-native: Works with Claude Code and Cursor out of the box
  • Open source core: MIT license, own your data
  • Built for production: Entity resolution, temporal decay, hybrid search

Core Features

Hybrid Search

Combines vector (semantic) and BM25 (keyword) search, so memories are found even when the query doesn't use the exact words.
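
The README doesn't specify how Remembra fuses the two rankings; reciprocal rank fusion (RRF) is one common way to merge a semantic and a keyword ranking, shown here as an illustrative sketch only:

```python
# Sketch of hybrid retrieval via reciprocal rank fusion (RRF).
# Each list ranks memory IDs from one retriever; RRF rewards items
# that rank highly in either list.

def rrf_merge(semantic_ranked, keyword_ranked, k=60):
    """Merge two ranked lists of memory IDs into one hybrid ranking."""
    scores = {}
    for ranked in (semantic_ranked, keyword_ranked):
        for rank, memory_id in enumerate(ranked):
            scores[memory_id] = scores.get(memory_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# "m2" ranks near the top of both lists, so it wins overall.
semantic = ["m1", "m2", "m3"]
keyword = ["m2", "m4", "m1"]
print(rrf_merge(semantic, keyword))
# → ['m2', 'm1', 'm4', 'm3']
```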

Entity Resolution

Knows that "Adam", "Adam Smith", "Mr. Smith", and "my husband" are the same person. Automatically extracts and links people, organizations, locations, and concepts.
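
At its simplest, resolution can be pictured as a lookup from normalized mentions to one canonical entity. Remembra's actual extractor also uses stored context (that's how "my husband" resolves), so this toy sketch only illustrates the idea:

```python
# Toy alias table: normalized surface mentions → canonical entity.
# A real resolver builds this dynamically from extracted entities
# and conversational context.

CANONICAL = {
    "adam": "Adam Smith",
    "adam smith": "Adam Smith",
    "mr. smith": "Adam Smith",
    "my husband": "Adam Smith",  # resolved from prior user context
}

def resolve(mention: str) -> str:
    """Return the canonical entity, or the mention itself if unknown."""
    return CANONICAL.get(mention.strip().lower(), mention)

print(resolve("Mr. Smith"))
# → Adam Smith
```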

Temporal Awareness

Memories carry time context: TTL support, Ebbinghaus-inspired decay curves, and historical ("as of") queries.
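
An Ebbinghaus-style forgetting curve models retention as exponential decay over time. The exact curve and parameters Remembra uses aren't documented here; a generic sketch:

```python
import math

# Ebbinghaus-style retention: R = exp(-t / S), where t is the age of
# the memory and S is a stability constant. Larger S means slower
# forgetting. The stability value below is illustrative only.

def retention(age_days: float, stability_days: float = 30.0) -> float:
    return math.exp(-age_days / stability_days)

print(round(retention(0), 3))    # fresh memory
# → 1.0
print(round(retention(30), 3))   # one stability period later
# → 0.368
```

A ranker can multiply relevance scores by this retention factor so that stale memories fade unless they are reinforced.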

Hybrid Storage

Vector (Qdrant) + Graph (relationships) + Relational (SQLite metadata) in one system.

Observability Dashboard

See what's stored, debug retrievals, visualize entity graphs.

Quick Start

1. Start the Server

docker run -d -p 8787:8787 remembra/remembra

2. Install the SDK

Python:

pip install remembra

from remembra import Memory

memory = Memory(
    base_url="http://localhost:8787",
    user_id="user_123",
    project="my_app"
)

# Store
result = memory.store("User's name is John. He's a software engineer at Google.")
print(result.extracted_facts)
# → ["John is a software engineer at Google."]

# Recall
result = memory.recall("Who is the user?")
print(result.context)
# → "John is a software engineer at Google."

# Forget
memory.forget(memory_id=result.memories[0].id)

JavaScript/TypeScript:

npm install @remembra/client

import { Remembra } from '@remembra/client';

const memory = new Remembra({
  url: 'http://localhost:8787',
  apiKey: 'rem_xxx',
});

// Store
const stored = await memory.store('Alice is the CTO of Acme Corp');
console.log(stored.extracted_facts);

// Recall
const result = await memory.recall('Who leads Acme?');
console.log(result.context);
// → "Alice is the CTO of Acme Corp."

// Entities
const entities = await memory.listEntities({ type: 'person' });

API Reference

REST Endpoints

| Method | Path | Description |
| --- | --- | --- |
| POST | `/api/v1/memories` | Store a memory |
| POST | `/api/v1/memories/recall` | Search memories |
| GET | `/api/v1/memories/{id}` | Get a specific memory |
| DELETE | `/api/v1/memories` | Delete memories |
| GET | `/api/v1/entities` | List entities |
| GET | `/api/v1/entities/{id}/relationships` | Get entity relationships |
| GET | `/api/v1/temporal/decay/report` | Memory decay report |
| POST | `/api/v1/ingest/changelog` | Ingest a changelog |
| GET | `/health` | Server health check |

Full API docs available at http://localhost:8787/docs (Swagger UI).
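
Clients in other languages can call these endpoints directly over HTTP. A minimal Python-stdlib sketch that only builds the requests (the JSON field names and the Bearer auth scheme are assumptions; the Swagger UI is authoritative):

```python
import json
import urllib.request

BASE = "http://localhost:8787"
API_KEY = "rem_xxx"  # placeholder key

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Construct an authenticated JSON POST request (not yet sent)."""
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Field names ("content", "query") mirror the SDK examples above.
store_req = build_request("/api/v1/memories", {"content": "Alice is the CTO of Acme Corp"})
recall_req = build_request("/api/v1/memories/recall", {"query": "Who leads Acme?"})

# To actually send against a running server:
# urllib.request.urlopen(store_req)
print(store_req.full_url, store_req.get_method())
```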


License

MIT License - Use it however you want.


Built by DolphyTech | remembra.dev

