
MrMemory — Agent Memory Relay


Persistent long-term memory for AI agents. One line to install, three lines to integrate.

Docs · API Reference · Website

Install

pip install mrmemory

# With LangChain/LangGraph support:
pip install mrmemory[langchain]

Quickstart

from amr import AMR

amr = AMR("amr_sk_...")  # or set AMR_API_KEY env var

# Store a memory
amr.remember("User prefers dark mode and vim keybindings")

# Semantic recall
memories = amr.recall("What are the user's preferences?")
for m in memories:
    print(m.content, m.score)

# Forget a memory
amr.forget(memories[0].id)

LLM Auto-Remember

Extract memories from conversations automatically using GPT-4o-mini:

# Extract and store memories from a conversation
result = amr.auto_remember([
    {"role": "user", "content": "I love hiking and my favorite language is Rust"},
    {"role": "assistant", "content": "Great choices!"},
], sync=True)

print(result)  # {"extracted": 2, "created": 2, "duplicates_skipped": 0, ...}

Supports async mode (fire-and-forget), deduplication, and BYOK (bring your own OpenAI key).
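The deduplication step can be pictured as a content-hash check. This is only an illustrative sketch (the service's actual matching may be semantic rather than hash-based), but it shows where the `created` and `duplicates_skipped` counters in the response come from:

```python
import hashlib

def dedup_candidates(candidates, seen_hashes):
    """Illustrative content-hash dedup; mirrors the created /
    duplicates_skipped counters in the auto_remember response.
    The real service may match more loosely."""
    created, skipped = [], 0
    for text in candidates:
        digest = hashlib.sha256(text.strip().lower().encode()).hexdigest()
        if digest in seen_hashes:
            skipped += 1
        else:
            seen_hashes.add(digest)
            created.append(text)
    return created, skipped

created, skipped = dedup_candidates(
    ["User loves hiking", "user loves hiking ", "Favorite language is Rust"],
    set(),
)
# created keeps the two distinct memories; skipped == 1
```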

Memory Compression

Compress related memories into denser representations:

# Compress memories in a namespace (dry run first)
result = amr.compress(namespace="default", sync=True, dry_run=True)
print(f"Would compress {result['groups_compressed']} groups")

# Actually compress
result = amr.compress(namespace="default", sync=True)
print(f"Reduced {result['before_count']} → {result['after_count']} memories")

Self-Edit Tools

Update, bulk-delete, and merge memories — giving agents full control over their own memory:

# Update an existing memory (re-embeds automatically)
amr.update("mem_abc123", content="Updated: user now prefers light mode", tags=["preference"])

# Bulk delete old memories (dry run first)
result = amr.delete_outdated(older_than_seconds=86400 * 30, dry_run=True)
print(f"Would delete {result['deleted']} memories")

# Actually delete
amr.delete_outdated(older_than_seconds=86400 * 30, tags=["ephemeral"])

# Merge related memories into one (LLM summarization)
merged = amr.merge(["mem_abc123", "mem_def456", "mem_ghi789"])
print(merged.content)  # Summarized by GPT-4o-mini

# Or provide your own merged content
merged = amr.merge(["mem_abc123", "mem_def456"], content="User prefers dark mode and vim keybindings")

Merged memories get `is_compressed: true` and `merged_from` tracking in the response.
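The bookkeeping described above can be sketched locally. The field names `is_compressed` and `merged_from` come from the response description; the function itself is only a hypothetical illustration of that shape, not the SDK:

```python
def illustrate_merge(memories, merged_content):
    """Hypothetical sketch of the merge response shape: the merged
    record carries is_compressed and the source IDs in merged_from."""
    return {
        "content": merged_content,
        "is_compressed": True,
        "merged_from": [m["id"] for m in memories],
    }

merged = illustrate_merge(
    [{"id": "mem_abc123"}, {"id": "mem_def456"}],
    "User prefers dark mode and vim keybindings",
)
# merged["merged_from"] == ["mem_abc123", "mem_def456"]
```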

LangChain / LangGraph Integration

Drop-in checkpointer and store for LangGraph:

from mrmemory.langchain import MrMemoryCheckpointer, MrMemoryStore
from langgraph.graph import StateGraph

checkpointer = MrMemoryCheckpointer(api_key="amr_sk_...")
store = MrMemoryStore(api_key="amr_sk_...")

graph = StateGraph(...).compile(checkpointer=checkpointer, store=store)
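As a mental model (not the MrMemory implementation), a LangGraph-style checkpointer saves per-thread state snapshots that the graph restores on the next invocation. A toy in-memory version might look like:

```python
class ToyCheckpointer:
    """Minimal illustration of the checkpointer contract: save and
    restore graph state per conversation thread. Purely a sketch,
    not the MrMemory implementation."""

    def __init__(self):
        self._snapshots = {}

    def put(self, thread_id, state):
        # Append a snapshot so per-thread history is preserved.
        self._snapshots.setdefault(thread_id, []).append(dict(state))

    def get_latest(self, thread_id):
        snaps = self._snapshots.get(thread_id)
        return snaps[-1] if snaps else None

cp = ToyCheckpointer()
cp.put("user-42", {"messages": ["hi"]})
cp.put("user-42", {"messages": ["hi", "hello!"]})
# cp.get_latest("user-42") returns the most recent snapshot
```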

Async Support

from amr import AsyncAMR

async with AsyncAMR("amr_sk_...") as amr:
    await amr.remember("User prefers dark mode")
    memories = await amr.recall("What does the user prefer?")

Configuration

amr = AMR(
    api_key="amr_sk_...",       # or set AMR_API_KEY env var
    agent_id="my-assistant",    # default agent ID
    namespace="default",        # default namespace
    timeout=10.0,               # seconds
    max_retries=3,              # retry on transient failures
)
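The `max_retries` setting implies retry-on-transient-failure behavior. A typical client-side pattern is exponential backoff; this sketch is an assumption about the mechanism, not the SDK's actual code:

```python
import time

def call_with_retries(call, max_retries=3, base_delay=0.5):
    """Hypothetical sketch of what a max_retries=3 client typically
    does: retry transient failures with exponential backoff, then
    re-raise the last error."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...

# Example: a call that succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = call_with_retries(flaky, base_delay=0.0)
# result == "ok" after two retried failures
```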

API Endpoints

All requests go to https://amr-memory-api.fly.dev.

| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | /v1/remember | Store a memory |
| POST | /v1/recall | Semantic search |
| DELETE | /v1/forget/:id | Delete a memory |
| GET | /v1/memories | List all memories |
| POST | /v1/memories/auto | LLM auto-remember from conversations |
| POST | /v1/memories/compress | Compress related memories |
| PATCH | /v1/memories/:id | Update a memory |
| DELETE | /v1/memories/outdated | Bulk delete by age/tags |
| POST | /v1/memories/merge | Merge memories into one |
| GET | /v1/ws | WebSocket real-time events |

Auth: Authorization: Bearer amr_sk_...
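The endpoints can also be called directly over HTTP. The sketch below only assembles the request pieces; the JSON body fields are an assumption inferred from the SDK's parameters, so check the API reference for the exact schema:

```python
import json

API_BASE = "https://amr-memory-api.fly.dev"

def build_remember_request(api_key, content, namespace="default"):
    """Assemble the pieces of POST /v1/remember. The body fields are an
    assumed schema based on the SDK's parameters, not a documented
    contract; only the URL and Bearer auth header come from the docs."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/v1/remember",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"content": content, "namespace": namespace}),
    }

req = build_remember_request("amr_sk_example", "User prefers dark mode")
# req["url"] == "https://amr-memory-api.fly.dev/v1/remember"
```

Any HTTP client (`requests`, `httpx`, `curl`) can then send the assembled request.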

Pricing

Starts at $5/mo — 10K memories, 50K API calls. Sign up →

License

MIT
