Mengram
The memory layer for AI agents that learns from experience
Your agents remember facts, events, and workflows — and procedures improve automatically when they fail.
Website · Get API Key · API Docs · Examples
Why Mengram?
Every AI memory tool stores facts. Mengram stores three types of memory (facts, events, and workflows), and its procedures evolve from failures.

| Feature | Mengram | Mem0 | Letta | Zep |
|---|---|---|---|---|
| Semantic Memory (facts) | ✅ | ✅ | ✅ | ✅ |
| Episodic Memory (events) | ✅ | ❌ | Partial | ❌ |
| Procedural Memory (workflows) | ✅ | ❌ | ❌ | ❌ |
| Experience-Driven Evolution | ✅ | ❌ | ❌ | ❌ |
| Cognitive Profile | ✅ | ❌ | ❌ | ❌ |
| Knowledge Graph | ✅ | ✅ | ✅ | ✅ |
| LangChain / CrewAI | ✅ | Partial | ❌ | ✅ |
| MCP Server | ✅ | ✅ | ✅ | ❌ |
| Price | Free | $19–249/mo | Free (self-host) | Enterprise |
Quick Start
```bash
pip install mengram-ai
```

```python
from cloud.client import CloudMemory

m = CloudMemory(api_key="om-...")  # Free key → mengram.io/dashboard

# Add a conversation — Mengram auto-extracts facts, events, and workflows
m.add([
    {"role": "user", "content": "Deployed to Railway today. Build passed but forgot migrations — DB crashed. Fixed by adding a pre-deploy check."},
])

# Search facts
m.search("deployment setup")

# Search events — what happened?
m.episodes(query="deployment")
# → [{summary: "Deployed to Railway, DB crashed due to missing migrations", outcome: "resolved", ...}]

# Search workflows — how to do it?
m.procedures(query="deploy")
# → [{name: "Deploy to Railway", steps: ["build", "run migrations", "push", "verify"], ...}]

# Unified search — all 3 types at once
m.search_all("deployment issues")
# → {semantic: [...], episodic: [...], procedural: [...]}
```
JavaScript / TypeScript:
```bash
npm install mengram-ai
```

```javascript
const { MengramClient } = require('mengram-ai');

const m = new MengramClient('om-...');
await m.add([{ role: 'user', content: 'Fixed OOM with Redis cache' }]);

const all = await m.searchAll('database issues');
// → { semantic: [...], episodic: [...], procedural: [...] }
```
Experience-Driven Procedures
The feature no one else has. Procedures learn from real outcomes — not static runbooks.
```
Week 1: "Deploy"    → build → push → deploy
        ↓ FAILURE: forgot migrations, DB crashed
Week 2: "Deploy" v2 → build → run migrations → push → deploy
        ↓ FAILURE: OOM on Railway
Week 3: "Deploy" v3 → build → run migrations → check memory → push → deploy ✅
```
This happens automatically when you report failures:
```python
# Report failure with context → procedure evolves to a new version
m.procedure_feedback(proc_id, success=False,
                     context="OOM error on step 3", failed_at_step=3)

# View version history
history = m.procedure_history(proc_id)
# → {versions: [v1, v2, v3], evolution_log: [{change: "step_added", reason: "prevent OOM"}]}
```
Or fully automatic — add conversations and Mengram detects failures, links them to procedures, and evolves:
```python
m.add([{"role": "user", "content": "Deploy to Railway failed again — OOM on the build step"}])
# → Episode auto-linked to "Deploy" procedure → failure detected → v3 created
```
Cognitive Profile
One API call generates a system prompt from all your memories:
```python
profile = m.get_profile()
# → "You are talking to Ali, a developer in Almaty building Mengram.
#    He uses Python, PostgreSQL, and Railway. Recently debugged pgvector deployment.
#    Workflows: deploys via build→twine→npm→git. Communicate directly, focus on practical next steps."
```
Insert into any LLM's system prompt for instant personalization.
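A minimal sketch of that injection, assuming a chat-style messages API; the hard-coded `profile` string here is a stand-in for the value returned by `m.get_profile()`:

```python
# Stand-in for m.get_profile() — in real use this comes from the API.
profile = "You are talking to Ali, a developer in Almaty building Mengram."

def build_messages(profile: str, user_input: str) -> list[dict]:
    """Prepend the Cognitive Profile as the system prompt for any chat LLM."""
    return [
        {"role": "system", "content": profile},
        {"role": "user", "content": user_input},
    ]

messages = build_messages(profile, "What should I work on next?")
# messages[0] carries the personalization; pass `messages` to any chat API.
```

The same pattern works with any provider that accepts a system message; only the transport call changes.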
Integrations
MCP Server (Claude Desktop, Cursor, Windsurf)
```json
{
  "mcpServers": {
    "mengram": {
      "command": "mengram",
      "args": ["server", "--cloud"],
      "env": { "MENGRAM_API_KEY": "om-..." }
    }
  }
}
```
LangChain
```python
from integrations.langchain import MengramChatMessageHistory, MengramRetriever

# Drop-in message history — auto-saves to Mengram
history = MengramChatMessageHistory(api_key="om-...", session_id="session-1")

# RAG retriever — searches all 3 memory types
retriever = MengramRetriever(api_key="om-...")
```
CrewAI
```python
from crewai import Agent
from integrations.crewai import create_mengram_tools

tools = create_mengram_tools(api_key="om-...")
# → 5 tools: search, remember, profile, save_workflow, workflow_feedback

agent = Agent(role="Support Engineer", tools=tools)
```
Agent Templates
Ready-to-run examples: clone, set your API key, and run in 5 minutes.
| Template | Stack | What it shows |
|---|---|---|
| DevOps Agent | Python SDK | Procedures that evolve from deployment failures |
| Customer Support | CrewAI | Agent with 5 memory tools, remembers returning customers |
| Personal Assistant | LangChain | Cognitive profile + auto-saving chat history |
```bash
cd examples/devops-agent && pip install -r requirements.txt
export MENGRAM_API_KEY=om-...
python main.py
```
API Reference
All endpoints require `Authorization: Bearer om-...` — your key identifies you; no `user_id` needed.

| Endpoint | Description |
|---|---|
| `POST /v1/add` | Add memories (auto-extracts all 3 types) |
| `POST /v1/search` | Semantic search |
| `POST /v1/search/all` | Unified search (all 3 types) |
| `GET /v1/episodes/search` | Search episodic memories |
| `GET /v1/procedures/search` | Search procedural memories |
| `PATCH /v1/procedures/{id}/feedback` | Report success/failure → triggers evolution |
| `GET /v1/procedures/{id}/history` | Version history + evolution log |
| `GET /v1/profile` | Cognitive Profile |
| `GET /v1/triggers` | Smart Triggers (reminders, contradictions, patterns) |
| `POST /v1/agents/run` | Run memory agents (Curator, Connector, Digest) |
Full interactive docs: mengram.io/docs
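The endpoints above can also be called over plain HTTP from any language. A hedged sketch using only the Python standard library; the base URL `https://api.mengram.io` is an assumption here, so check the interactive docs for the real host:

```python
import json
import urllib.request

BASE_URL = "https://api.mengram.io"  # assumed base URL — verify against the docs

def add_memories(api_key: str, messages: list[dict]) -> urllib.request.Request:
    """Build an authenticated POST /v1/add request (constructed but not sent)."""
    return urllib.request.Request(
        f"{BASE_URL}/v1/add",
        data=json.dumps({"messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = add_memories("om-demo", [{"role": "user", "content": "Fixed OOM with Redis cache"}])
# urllib.request.urlopen(req) would send it; left unsent to keep the sketch offline.
```

The same `Authorization: Bearer` header applies to every endpoint in the table.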
License
Apache 2.0 — free for commercial use.
Built by Ali Baizhanov · mengram.io