# Mengram — Human-Like Memory for AI

The only AI memory API with 3 memory types: semantic, episodic, and procedural. Your AI remembers facts, events, and learned workflows — just like a human brain.

Website · Dashboard · API Docs · PyPI · npm
## Why Mengram?

| Feature | Mengram | Mem0 | Supermemory |
|---|---|---|---|
| Semantic Memory (facts) | ✅ | ✅ | ✅ |
| Episodic Memory (events) | ✅ | ❌ | ❌ |
| Procedural Memory (workflows) | ✅ | ❌ | ❌ |
| Cognitive Profile | ✅ | ❌ | ❌ |
| Unified Search (all 3 types) | ✅ | ❌ | ❌ |
| Knowledge Graph | ✅ | ✅ | ❌ |
| Autonomous Agents | ✅ Curator, Connector, Digest | ❌ | ❌ |
| Team Shared Memory | ✅ | ❌ | ✅ |
| AI Reflections | ✅ | ❌ | ❌ |
| Webhooks | ✅ | ✅ | ✅ |
| MCP Server | ✅ Claude Desktop, Cursor, Windsurf | ✅ | ❌ |
| LangChain Integration | ✅ | ❌ | ❌ |
| Python & JS SDK | ✅ | ✅ | ✅ |
| Self-hostable | ✅ | ✅ | ✅ |
| Price | Free | $19-249/mo | Enterprise |
## 3 Memory Types

Mengram automatically extracts all 3 types from a single `add()` call:

- 🧠 **Semantic** — facts, preferences, skills: "uses Python", "prefers dark mode"
- 📝 **Episodic** — events, decisions, experiences: "Debugged Railway deployment for 3 hours, fixed pgvector issue"
- ⚙️ **Procedural** — learned workflows, processes: "Deploy: build → twine upload → npm publish → git push"
```python
# One call extracts all 3 types automatically
m.add([
    {"role": "user", "content": "Fixed the auth bug today. Problem was API key cache TTL. My debug process: check Railway logs, reproduce locally, fix and deploy."},
])
# → Semantic: "API key caching caused auth bug"
# → Episodic: "Debugged auth bug, fixed cache TTL"
# → Procedural: "Debug process: logs → reproduce → fix → deploy"
```
## Quick Start (60 seconds)

**1. Get an API key.** Sign up at mengram.io — free, no credit card.

**2. Install.**

```bash
pip install mengram-ai   # Python
npm install mengram-ai   # JavaScript / TypeScript
```

**3. Connect to Claude Desktop.** Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "mengram": {
      "command": "mengram",
      "args": ["server", "--cloud"],
      "env": {
        "MENGRAM_API_KEY": "your-key-here"
      }
    }
  }
}
```
Done. Claude now has persistent memory with all 3 types.
## Python SDK

```python
from mengram.cloud.client import CloudMemory

m = CloudMemory(api_key="om-...")

# Add memories — auto-extracts facts, events, workflows
m.add([
    {"role": "user", "content": "I deployed Mengram on Railway with PostgreSQL 15"},
    {"role": "assistant", "content": "Great, noted the deployment setup."},
], user_id="ali")

# Semantic search (classic)
results = m.search("deployment setup", user_id="ali")

# Episodic search — what happened?
events = m.episodes(query="deployment", user_id="ali")
# → [{summary: "Deployed on Railway", outcome: "Success", participants: [...]}]

# Procedural search — how to do it?
procs = m.procedures(query="deploy", user_id="ali")
# → [{name: "Deploy Mengram", steps: [...], success_count: 5}]

# Unified search — all 3 types at once
all_results = m.search_all("deployment issues", user_id="ali")
# → {semantic: [...], episodic: [...], procedural: [...]}

# Procedure feedback — AI learns what works
# (proc_id comes from a previous procedures() result)
m.procedure_feedback(proc_id, success=True)

# Cognitive Profile — instant personalization
profile = m.get_profile("ali")
# → {system_prompt: "You are talking to Ali, a developer in Almaty..."}

# Use the profile with any LLM
import openai

response = openai.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": profile["system_prompt"]},
        {"role": "user", "content": "What should I work on next?"},
    ],
)

# Memory agents
m.run_agents(agent="all", auto_fix=True)

# Team memory
team = m.create_team("Backend Team")
m.share_memory("Redis", team_id=team["id"])
```
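When you want finer control than `get_profile` gives you, the unified `search_all` result can be folded into a single context block before prompting an LLM. A minimal sketch, assuming the `{semantic, episodic, procedural}` shape shown in the comments above; the per-item field names used here (`memory`, `summary`, `name`) are illustrative, not a documented schema:

```python
def build_context(results: dict, limit: int = 3) -> str:
    """Flatten a unified search result into a prompt-ready context block.

    Assumes the {semantic, episodic, procedural} shape; the per-item
    fields (memory/summary/name) are illustrative assumptions.
    """
    sections = []
    for label, key, field in [
        ("Facts", "semantic", "memory"),
        ("Events", "episodic", "summary"),
        ("Workflows", "procedural", "name"),
    ]:
        items = results.get(key, [])[:limit]
        if items:
            lines = "\n".join(f"- {item.get(field, '')}" for item in items)
            sections.append(f"{label}:\n{lines}")
    return "\n\n".join(sections)

# Example with a hand-built result dict:
context = build_context({
    "semantic": [{"memory": "Uses Railway with PostgreSQL 15"}],
    "episodic": [{"summary": "Deployed on Railway"}],
    "procedural": [{"name": "Deploy Mengram"}],
})
```

The resulting string can then be prepended to your system prompt alongside (or instead of) the Cognitive Profile.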
## JavaScript / TypeScript SDK

```javascript
const { MengramClient } = require('mengram-ai');

const m = new MengramClient('om-...');

// Add memories — extracts all 3 types
await m.add([
  { role: 'user', content: 'Fixed OOM with Redis cache' },
], { userId: 'ali' });

// Episodic — what happened?
const events = await m.episodes({ query: 'OOM fix' });

// Procedural — how to do it?
const procs = await m.procedures({ query: 'cache setup' });

// Unified search — all 3 types
const all = await m.searchAll('database issues');
// → { semantic: [...], episodic: [...], procedural: [...] }

// Procedure feedback — AI learns
// (procId comes from a previous procedures() result)
await m.procedureFeedback(procId, { success: true });

// Cognitive Profile
const profile = await m.getProfile('ali');
// → { system_prompt: "You are talking to Ali..." }
```
Full TypeScript types are included, with `Episode`, `Procedure`, and `UnifiedSearchResult` interfaces.
## Cognitive Profile

One API call generates a ready-to-use system prompt from all 3 memory types:

```python
profile = m.get_profile("ali")
print(profile["system_prompt"])
```

Output:

```text
You are talking to Ali, a 22-year-old developer in Almaty building Mengram.
He uses Python, PostgreSQL, and Railway. Recently: debugged pgvector deployment,
researched competitors Mem0 and Supermemory, designed freemium pricing.
Workflows: deploys via build→twine→npm→git, prefers iterative shipping.
Communicate in Russian/English, direct style, focus on practical next steps.
```

Insert it into any LLM's system prompt for instant personalization; no separate RAG pipeline is needed for user context.
## LangChain Integration

Drop-in replacement for LangChain's memory. Instead of returning raw message history, Mengram returns relevant knowledge from all 3 memory types.

```bash
pip install "mengram-ai[langchain]"
```

LCEL (recommended):

```python
from mengram.integrations.langchain import MengramChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

chain_with_memory = RunnableWithMessageHistory(
    chain,
    lambda session_id: MengramChatMessageHistory(
        api_key="om-...", session_id=session_id, user_id="ali"
    ),
    input_messages_key="input",
    history_messages_key="history",
)
```

ConversationChain (legacy):

```python
from langchain.chains import ConversationChain
from mengram.integrations.langchain import MengramMemory

# Basic — search-based context
memory = MengramMemory(api_key="om-...", user_id="ali")

# With Cognitive Profile — full user personalization
memory = MengramMemory(api_key="om-...", user_id="ali", use_profile=True)

chain = ConversationChain(llm=llm, memory=memory)
chain.predict(input="I deployed my app on Railway")

# Next call — Mengram searches all 3 memory types for relevant context
chain.predict(input="How did my last deployment go?")
# → Memory provides: facts about Railway, the deployment event, deploy workflow
```
vs `ConversationBufferMemory`:

| | ConversationBufferMemory | MengramMemory |
|---|---|---|
| Storage | RAM (lost on restart) | Persistent (PostgreSQL) |
| Context | Last N messages (raw) | Relevant knowledge (semantic search) |
| Memory types | 1 (messages) | 3 (semantic + episodic + procedural) |
| Cross-session | ❌ | ✅ |
| Personalization | ❌ | ✅ Cognitive Profile |
## Memory Categories

Separate memory by user, agent, session, and application:

```python
m.add(messages, user_id="ali")                           # User's memory
m.add(messages, user_id="ali", agent_id="support-bot")   # Agent's memory
m.add(messages, user_id="ali", run_id="session-123")     # Session-scoped
m.add(messages, user_id="ali", app_id="helpdesk")        # App-scoped
```
## Memory Agents

Three autonomous agents that analyze your memory:

- 🧹 **Curator** — finds contradictions, stale facts, and duplicates; auto-cleans with `auto_fix=True`.
- 🔗 **Connector** — discovers hidden connections, behavioral patterns, and skill clusters.
- 📰 **Digest** — weekly summary with headlines, trends, and recommendations.
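The agents can be triggered on your own schedule via the documented `run_agents` call. A minimal sketch of one possible cadence; the per-agent names (`curator`, `connector`, `digest`) and the daily/weekly split are assumptions drawn from the list above, not a documented schedule:

```python
from datetime import date

def agents_due(today: date) -> list:
    """Pick which memory agents to run today.

    Assumed cadence: Curator and Connector daily, Digest only on
    Mondays (it produces a weekly summary). Agent name strings are
    assumptions; the SDK example in this README only shows agent="all".
    """
    due = ["curator", "connector"]
    if today.weekday() == 0:  # Monday
        due.append("digest")
    return due

# With a CloudMemory client `m` from the Python SDK section:
# for name in agents_due(date.today()):
#     m.run_agents(agent=name, auto_fix=(name == "curator"))
```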
## API Endpoints

| Endpoint | Description |
|---|---|
| `POST /v1/add` | Add memories (auto-extracts all 3 types) |
| `POST /v1/search` | Semantic search |
| `POST /v1/search/all` | Unified search (semantic + episodic + procedural) |
| `GET /v1/episodes` | List episodic memories |
| `GET /v1/episodes/search` | Search episodes by meaning |
| `GET /v1/procedures` | List procedural memories |
| `GET /v1/procedures/search` | Search procedures by trigger |
| `PATCH /v1/procedures/{id}/feedback` | Record success/failure |
| `GET /v1/profile` | Cognitive Profile (system prompt) |
| `GET /v1/profile/{user_id}` | Profile for a specific user |
| `POST /v1/agents/run` | Run memory agents |
| `GET /v1/insights` | AI-generated insights |
| `GET /v1/graph` | Knowledge graph |
| `GET /v1/timeline` | Temporal search |
| `POST /v1/teams` | Create team |
| `POST /v1/webhooks` | Create webhook |
| `GET /v1/keys` | List API keys |
| `GET /v1/stats` | Usage statistics |

Full docs: https://mengram.io/docs
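If you are not using an SDK, the endpoints can be called directly over REST. A minimal sketch of `POST /v1/add`; the base URL, Bearer-token auth scheme, and JSON field names are assumptions mirroring the SDK calls above, so check the linked docs for the actual values:

```python
import json

API_KEY = "om-..."            # your Mengram API key
BASE_URL = "https://mengram.io"  # placeholder; see the API docs for the real base URL

# Request body mirrors the SDK's add() call (field names are assumptions)
payload = {
    "messages": [
        {"role": "user", "content": "I deployed Mengram on Railway with PostgreSQL 15"},
    ],
    "user_id": "ali",
}
headers = {
    "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
    "Content-Type": "application/json",
}
body = json.dumps(payload)

# With the `requests` library installed, the call would be:
# requests.post(f"{BASE_URL}/v1/add", data=body, headers=headers)
```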
## Architecture

```text
┌──────────────────────────────────────┐
│          Your AI Clients             │
│  Claude Desktop · Cursor · Windsurf  │
└──────────────┬───────────────────────┘
               │ MCP / REST API
┌──────────────▼───────────────────────┐
│         Mengram Cloud API            │
│   Extraction · Re-ranking · Search   │
├──────────────────────────────────────┤
│          3 Memory Types              │
│  🧠 Semantic · 📝 Episodic · ⚙️ Proc │
├──────────────────────────────────────┤
│        Memory Agents Layer           │
│ 🧹 Curator · 🔗 Connector · 📰 Digest│
├──────────────────────────────────────┤
│          Storage Layer               │
│   PostgreSQL · pgvector · Teams      │
│   Webhooks · Reflections · Graph     │
└──────────────────────────────────────┘
```
## License

MIT

Built by Ali Baizhanov