
Human-like memory for AI agents — auto-save, auto-recall, cognitive profile. Claude Code hooks, MCP server (29 tools), OpenClaw plugin, semantic/episodic/procedural memory. Free open-source Mem0 alternative.

Reason this release was yanked:

Missing mcp dep, upgrade to 2.25.3

Project description

Mengram

Give your AI agents memory that actually learns


Website · Get API Key · Docs · Console · Examples

pip install mengram-ai   # or: npm install mengram-ai

from mengram import Mengram
m = Mengram(api_key="om-...")           # Free key → mengram.io

m.add([{"role": "user", "content": "I use Python and deploy to Railway"}])
m.search("tech stack")                  # → facts
m.ask("what's my tech stack?")          # → synthesized answer + citations
m.episodes(query="deployment")          # → events
m.procedures(query="deploy")            # → workflows that evolve from failures

Native multilingual: ask in Russian, Chinese, Spanish, Japanese — Mengram retrieves and answers across 23 languages (Cohere multilingual embeddings + rerank).


Claude Code — Zero-Config Memory

Two commands. Claude Code remembers everything across sessions automatically.

pip install mengram-ai
mengram setup              # Sign up + install hooks (interactive)

Or manually:

export MENGRAM_API_KEY=om-...
mengram hook install

What happens:

Session Start  →  Loads your cognitive profile (who you are, preferences, tech stack)
Every Prompt   →  Searches past sessions for relevant context (auto-recall)
After Response →  Saves new knowledge in background (auto-save)

No manual saves. No tool calls. Claude just knows what you worked on yesterday.

mengram hook status     # check what's installed
mengram hook uninstall  # remove all hooks

Why Mengram?

Every AI memory tool stores facts. Mengram stores 3 types of memory — and procedures evolve when they fail.

Feature | Mengram | claude-mem | Mem0 | Zep | Letta
Semantic memory (facts, preferences) | Yes | Yes | Yes | Yes | Yes
Episodic memory (events, decisions) | Yes | Partial | No | No | Partial
Procedural memory (workflows) | Yes | No | No | No | No
Procedures evolve from failures | Yes | No | No | No | No
Cognitive Profile | Yes | No | No | No | No
Native multilingual (23 languages) | Yes | No | No | No | No
Ask & Citations (synthesized answer) | Yes | No | No | No | No
Multi-user isolation | Yes | No | Yes | Yes | No
Knowledge graph | Yes | No | Yes | Yes | Yes
Claude Code hooks (auto-save/recall) | Yes | Yes | No | No | No
LangChain + CrewAI + MCP | Yes | No | Partial | Partial | Partial
Import ChatGPT / Obsidian | Yes | No | No | No | No
Pricing | Free tier | Free / OSS | $19-249/mo | Enterprise | Self-host

Get Started in 30 Seconds

1. Install

pip install mengram-ai

2. Setup (creates account + installs Claude Code hooks)

mengram setup

Or get a key manually at mengram.io and export MENGRAM_API_KEY=om-...

3. Use

from mengram import Mengram

m = Mengram(api_key="om-...")

# Add a conversation — auto-extracts facts, events, and workflows
m.add([
    {"role": "user", "content": "Deployed to Railway today. Build passed but forgot migrations — DB crashed. Fixed by adding a pre-deploy check."},
])

# Search across all 3 memory types at once
results = m.search_all("deployment issues")
# → {semantic: [...], episodic: [...], procedural: [...]}

File Upload (PDF, DOCX, TXT, MD)

# Upload a PDF — auto-extracts memories using vision AI
result = m.add_file("meeting-notes.pdf")
# → {"status": "accepted", "job_id": "job-...", "page_count": 12}

# Poll for completion
m.job_status(result["job_id"])

// Node.js — pass a file path
await m.addFile('./report.pdf');

// Browser — pass a File object from <input type="file">
await m.addFile(fileInput.files[0]);

# REST API
curl -X POST https://mengram.io/v1/add_file \
  -H "Authorization: Bearer om-..." \
  -F "file=@meeting-notes.pdf" \
  -F "user_id=default"

JavaScript / TypeScript

npm install mengram-ai

const { MengramClient } = require('mengram-ai');
const m = new MengramClient('om-...');

await m.add([{ role: 'user', content: 'Fixed OOM by adding Redis cache layer' }]);
const results = await m.searchAll('database issues');
// → { semantic: [...], episodic: [...], procedural: [...] }

REST API (curl)

# Add memory
curl -X POST https://mengram.io/v1/add \
  -H "Authorization: Bearer om-..." \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "I prefer dark mode and vim keybindings"}]}'

# Search all 3 types
curl -X POST https://mengram.io/v1/search/all \
  -H "Authorization: Bearer om-..." \
  -d '{"query": "user preferences"}'

3 Memory Types

Semantic — facts, preferences, knowledge

m.search("tech stack")
# → ["Uses Python 3.12", "Deploys to Railway", "PostgreSQL with pgvector"]

Episodic — events, decisions, outcomes

m.episodes(query="deployment")
# → [{summary: "DB crashed due to missing migrations", outcome: "resolved", date: "2025-05-12"}]

Procedural — workflows that evolve

Week 1:  "Deploy" → build → push → deploy
                                         ↓ FAILURE: forgot migrations
Week 2:  "Deploy" v2 → build → run migrations → push → deploy
                                                          ↓ FAILURE: OOM
Week 3:  "Deploy" v3 → build → run migrations → check memory → push → deploy ✅

This happens automatically when you report failures:

m.procedure_feedback(proc_id, success=False,
                     context="OOM error on step 3", failed_at_step=3)
# → Procedure evolves to v3 with new step added

Or fully automatic — just add conversations and Mengram detects failures and evolves procedures:

m.add([{"role": "user", "content": "Deploy failed again — OOM on the build step"}])
# → Episode created → linked to "Deploy" procedure → failure detected → v3 created
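
To see how a procedure changed across versions, the history endpoint from the API reference below can be queried directly. A minimal sketch using requests (the response shape is an assumption, not documented here):

import requests

# proc_id is the procedure id used with procedure_feedback above
history = requests.get(
    f"https://mengram.io/v1/procedures/{proc_id}/history",
    headers={"Authorization": "Bearer om-..."},
)
print(history.json())   # version history + evolution log (assumed shape)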

Ask Your Memory (RAG built-in)

m.ask() returns a synthesized answer with citations — not a raw fact list. Mengram embeds your query, retrieves the top relevant facts, and uses Cohere Chat to write a grounded answer with native source attribution.

result = m.ask("what programming languages do I use?")

print(result["answer"])
# 'You use Python and Rust. Python is your daily language [1] and
#  Rust is your favorite [2]. You also know Java for enterprise
#  systems [3].'

for cit in result["citations"]:
    print(f'  "{cit["text"]}" → {cit["sources"][0]["fact"]}')
# "Python and Rust" → uses Python daily for backend development
# "favorite [2]"   → Rust is favorite language
# "Java"           → specializes in Java/Spring Boot

Multilingual: ask in any of 23 languages and get an answer in the same language, with citations linking back to facts in the language they were originally stored in. Premium feature (Pro / Growth / Business).
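
For illustration, a small sketch of a cross-language query (assumes the facts above were stored in English):

result = m.ask("какой у меня технологический стек?")   # "what's my tech stack?" in Russian
print(result["answer"])       # answer comes back in Russian
print(result["citations"])    # citations still point to the facts in their original stored language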

Cognitive Profile

One API call generates a system prompt from all memories:

profile = m.get_profile()
# → "You are talking to Ali, a developer in Almaty. Uses Python, PostgreSQL,
#    and Railway. Recently debugged pgvector deployment. Prefers direct
#    communication and practical next steps."

Insert into any LLM's system prompt for instant personalization.
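
For example, a minimal sketch of injecting the profile into a chat call (the OpenAI client and model name are illustrative, not part of Mengram):

from openai import OpenAI

client = OpenAI()   # any LLM client works; this one is just an example
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": profile},   # cognitive profile as the system prompt
        {"role": "user", "content": "Help me plan today's deploy"},
    ],
)
print(response.choices[0].message.content)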

Import Existing Data

Kill the cold-start problem:

mengram import chatgpt ~/Downloads/chatgpt-export.zip --cloud   # ChatGPT history
mengram import obsidian ~/Documents/MyVault --cloud              # Obsidian vault
mengram import files notes/*.md --cloud                          # Any text/markdown

Integrations

Claude Code — Auto-memory hooks

mengram hook install

3 hooks: profile on start, recall on every prompt, save after responses. Zero manual effort.

Docs

MCP Server — Claude Desktop, Cursor, Windsurf

{
  "mcpServers": {
    "mengram": {
      "command": "mengram",
      "args": ["server", "--cloud"],
      "env": { "MENGRAM_API_KEY": "om-..." }
    }
  }
}

29 tools for memory management.

LangChain

pip install langchain-mengram

from langchain_mengram import (
    MengramRetriever,
    MengramChatMessageHistory,
)

retriever = MengramRetriever(api_key="om-...")
docs = retriever.invoke("deployment issues")

CrewAI

from crewai import Agent
from integrations.crewai import create_mengram_tools

tools = create_mengram_tools(api_key="om-...")
# → 5 tools: search, remember, profile,
#   save_workflow, workflow_feedback

# goal/backstory are required by CrewAI; the values here are placeholders
agent = Agent(role="Support", goal="Resolve customer issues",
              backstory="Support agent with persistent memory", tools=tools)

OpenClaw

openclaw plugins install openclaw-mengram

Auto-recall before every turn, auto-capture after. 12 tools, slash commands, Graph RAG.

GitHub · npm

CLI — Full command-line interface

mengram search "deployment" --cloud
mengram profile --cloud
mengram import chatgpt export.zip --cloud
mengram hook install

Docs

Claude Managed Agents — MCP memory for hosted agents

{
  "mcp_servers": [{
    "type": "url",
    "name": "mengram",
    "url": "https://mengram.io/mcp/sse"
  }]
}

29 memory tools via MCP. Docs

n8n — HTTP nodes for any workflow

POST https://mengram.io/v1/add
POST https://mengram.io/v1/search

No code needed — drag and drop memory into any n8n workflow.

Docs

Multi-User Isolation

One API key, many users — each sees only their own data:

m.add([...], user_id="alice")
m.add([...], user_id="bob")

m.search_all("preferences", user_id="alice")  # Only Alice's memories
m.get_profile(user_id="alice")                 # Alice's cognitive profile

Async Client

Non-blocking Python client built on httpx:

from mengram import AsyncMengram

async with AsyncMengram() as m:
    await m.add([{"role": "user", "content": "I use async/await"}])
    results = await m.search("async")
    profile = await m.get_profile()

Install with pip install mengram-ai[async].

Metadata Filters

Filter search results by metadata:

results = m.search("config", filters={"agent_id": "support-bot", "app_id": "prod"})

Webhooks

Get notified when memories change:

m.create_webhook(
    url="https://your-app.com/hook",
    event_types=["memory_add", "memory_update"],
)
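
On the receiving side, any HTTPS endpoint that accepts a JSON POST works. A minimal Flask sketch (the event payload shape isn't documented here, so the handler just logs the raw body):

from flask import Flask, request

app = Flask(__name__)

@app.route("/hook", methods=["POST"])
def mengram_hook():
    event = request.get_json(force=True)   # memory_add / memory_update payload (shape assumed)
    print("Mengram event:", event)
    return "", 204

if __name__ == "__main__":
    app.run(port=8000)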

Agent Templates

Clone, set API key, run in 5 minutes:

Template | Stack | What it shows
DevOps Agent | Python SDK | Procedures that evolve from deployment failures
Customer Support | CrewAI | Agent with 5 memory tools, remembers returning customers
Personal Assistant | LangChain | Cognitive profile + auto-saving chat history

cd examples/devops-agent && pip install -r requirements.txt
export MENGRAM_API_KEY=om-...
python main.py

Use with AI Agents

Mengram works as a persistent memory backend for autonomous agents. Your agent stores what it learns, and recalls it on the next run — getting smarter over time.

from mengram import Mengram

m = Mengram(api_key="om-...")

# Agent completes a task → store what happened
m.add([
    {"role": "user", "content": "Apply to Acme Corp on Greenhouse"},
    {"role": "assistant", "content": "Applied successfully. Had to use React Select workaround for dropdowns."},
])
# → Extracts: fact ("applied to Acme Corp"), episode ("Greenhouse application"),
#   procedure ("React Select dropdown workaround")

# Next run → agent recalls what worked before
context = m.search_all("Greenhouse application tips")
# → Returns past procedures, failures, and successful strategies

# Report outcome → procedures evolve
m.procedure_feedback(proc_id, success=False,
                     context="Dropdown fix stopped working")
# → Procedure auto-evolves to a new version

Works with any agent framework — CrewAI, LangChain, AutoGPT, custom loops. The agent just calls add() after actions and search() before decisions.
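
A minimal custom-loop sketch of that pattern (run_llm is a placeholder for whatever model or framework drives the agent; the Mengram calls are just add() and search_all() from above):

def run_llm(task: str, context: dict) -> str:
    # Placeholder: call your model or framework of choice here
    return f"Completed: {task} ({len(context.get('procedural', []))} known procedures consulted)"

def agent_step(m, task: str) -> str:
    context = m.search_all(task)          # recall before deciding
    result = run_llm(task, context)       # act
    m.add([                               # remember what happened
        {"role": "user", "content": task},
        {"role": "assistant", "content": result},
    ])
    return result

agent_step(m, "Apply to Acme Corp on Greenhouse")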

Self-Hosted (Ollama)

When running locally with Ollama, use models with 8B+ parameters and an 8K+ context window. The extraction prompt is ~4,000 tokens; smaller models will hallucinate or mix the prompt's examples with real data.

Model | Parameters | Works?
llama3.1:8b | 8B | Yes
mistral:7b | 7B | Yes
gemma2:9b | 9B | Yes
llama3.1:70b | 70B | Best
phi4-mini:3.8b | 3.8B | No — context too small

API Reference

Endpoint | Description
POST /v1/add | Add memories (auto-extracts all 3 types)
POST /v1/add_text | Add memories from plain text
POST /v1/add_file | Upload file (PDF, DOCX, TXT, MD) — vision AI extraction
POST /v1/search | Semantic search
POST /v1/search/all | Unified search (semantic + episodic + procedural)
GET /v1/episodes/search | Search events and decisions
GET /v1/procedures/search | Search workflows
PATCH /v1/procedures/{id}/feedback | Report outcome — triggers evolution
GET /v1/procedures/{id}/history | Version history + evolution log
GET /v1/profile | Cognitive Profile
GET /v1/triggers | Smart Triggers (reminders, contradictions, patterns)
POST /v1/agents/run | Memory agents (Curator, Connector, Digest)
GET /v1/me | Account info

Full interactive docs: mengram.io/docs

Quota Headers

Every authenticated response includes usage headers:

Header | Description
X-Quota-Add-Used | Add calls used this month
X-Quota-Add-Limit | Add calls allowed this month
X-Quota-Search-Used | Search calls used this month
X-Quota-Search-Limit | Search calls allowed this month

SDKs expose this via .quota:

m.search("test")
print(m.quota)  # {"add": {"used": 5, "limit": 30}, "search": {"used": 12, "limit": 100}}
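
If you call the REST API directly rather than an SDK, the same headers are on every response. A sketch using requests against the documented /v1/search endpoint:

import requests

r = requests.post(
    "https://mengram.io/v1/search",
    headers={"Authorization": "Bearer om-...", "Content-Type": "application/json"},
    json={"query": "test"},
)
print(r.headers.get("X-Quota-Search-Used"), "/", r.headers.get("X-Quota-Search-Limit"))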


License

Apache 2.0 — free for commercial use.



Download files

Source Distribution

mengram_ai-2.25.1.tar.gz (318.1 kB)

Built Distribution

mengram_ai-2.25.1-py3-none-any.whl (322.9 kB)

File details

Details for the file mengram_ai-2.25.1.tar.gz.

File metadata

  • Download URL: mengram_ai-2.25.1.tar.gz
  • Upload date:
  • Size: 318.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.6

File hashes

Hashes for mengram_ai-2.25.1.tar.gz
Algorithm Hash digest
SHA256 edbbf42dba51b7344cb85682df46c7667349bfb04a1ad6b47dc3182051764210
MD5 70ba8da35ef064daf1fffd3a1812f057
BLAKE2b-256 39e4ae4e36c1fa2fe41f4065a56fa5ec433f2c55bc5b40a4daa6bcc7db150a50


File details

Details for the file mengram_ai-2.25.1-py3-none-any.whl.

File metadata

  • Download URL: mengram_ai-2.25.1-py3-none-any.whl
  • Upload date:
  • Size: 322.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.6

File hashes

Hashes for mengram_ai-2.25.1-py3-none-any.whl
Algorithm Hash digest
SHA256 6ed343c174585a83cc8e1b201998465df0bcd64cf4465123eea54f36f7cf3f18
MD5 27e83aa537ff69518f70aad6d0c29182
BLAKE2b-256 e50699a86a6b5272999768b5879344d79c13a52597b7cb965ff5b5c7a4242c38

