🧠 Mengram
AI memory layer for apps. Open-source Mem0 alternative.
Build persistent memory for AI agents and apps. Knowledge graph, semantic search, Cloud API & MCP server. Use locally with Obsidian vault or via cloud.
Like Mem0, but you own your data, and it actually saves your solutions with code, not just "user uses PostgreSQL".
Why Mengram?
| | Mem0 | Basic Memory | Mengram |
|---|---|---|---|
| Storage | Cloud vectors | Flat markdown | Typed knowledge graph in .md |
| Entity types | ❌ Flat facts | ❌ One note per chat | ✅ Person, Project, Technology, Company |
| Relations | ❌ | ❌ | ✅ works_at, uses, depends_on |
| Rich knowledge | ❌ | ❌ | ✅ Solutions, configs, formulas with code |
| Proactive context | ❌ | ❌ | ✅ Auto-injected, no manual recall |
| Obsidian graph | ❌ | Partial | ✅ Full [[wikilinks]] + graph view |
| Semantic search | ✅ Cloud | ❌ | ✅ Local embeddings (384D) |
| Own your data | ❌ Cloud lock-in | ✅ | ✅ Plain .md files |
| LLM agnostic | ❌ | Partial | ✅ Claude / GPT / Ollama |
| Pricing | $24/mo+ | $14/mo | Free & open source |
What it actually does
You chat with Claude (or any LLM). Mengram automatically:
- Extracts entities, facts, relationships, and rich knowledge (solutions, commands, configs with code)
- Creates typed `.md` files in your Obsidian vault
- Links everything with `[[wikilinks]]` and YAML frontmatter
- Indexes with local vector embeddings for semantic search
- Proactively injects relevant context into every conversation, no manual recall needed
You: "We fixed the OOM with Redis cache. Config: hikari.pool-size=20"

```
┌─────────────────────────────────────┐
│ vault/PostgreSQL.md                 │
│ type: technology                    │
│                                     │
│ ## Facts                            │
│ - Main database, version 15         │
│                                     │
│ ## Knowledge                        │
│ **[solution] Connection pool fix**  │
│ OOM at 200+ WebSocket → Redis cache │
│ ```yaml                             │
│ spring.datasource.hikari.           │
│   maximum-pool-size: 20             │
│ ```                                 │
└─────────────────────────────────────┘
```
Next time you ask "How did we fix the OOM?" → Claude already knows, with the config.
Quick Start
1. Install
```bash
pip install mengram-ai[all]
```
2. Setup (one command)
```bash
mengram init
```
This will:
- Ask for your LLM provider and API key
- Create `~/.mengram/config.yaml` and vault
- Auto-configure Claude Desktop MCP integration
- Tell you to restart Claude Desktop

That's it. Talk to Claude; it remembers automatically and always has context.
Non-interactive:
```bash
mengram init --provider anthropic --api-key sk-ant-...
```
Other commands:
```bash
mengram status   # Check setup
mengram stats    # Vault statistics
mengram server   # Start MCP server manually
```
Proactive Context (v0.5.0)
The killer feature. Claude Desktop gets your knowledge profile automatically: no manual attach, no "recall", no "remember what I told you".
How it works:

```
Claude Desktop starts
  ↓ MCP server reads vault
  ↓ Generates compact knowledge index (scales to 1000+ notes)
  ↓ Injects into Claude's instructions
  ↓ Warms up semantic search model
```

You open any chat → Claude already knows:
- Your tech stack, projects, team
- Past solutions with code/configs
- Entity relationships

You ask a question → Claude auto-calls `recall()`
→ gets full details + code artifacts
→ answers with context
Rich Knowledge (v0.5.0)
Not just "user uses PostgreSQL", but solutions with code, commands, formulas, configs.
The LLM automatically chooses the knowledge type based on context:
| Domain | Knowledge types | Example |
|---|---|---|
| Developer | solution, command, config, debug | HikariCP pool config with YAML |
| Doctor | treatment, lab_result, diagnosis | Metformin 500mg dosage |
| Scientist | experiment, formula, hypothesis | Protein denaturation at 60°C |
| Student | formula, example, insight | Bayes theorem with example |
| Chef | recipe, tip, substitution | Sourdough hydration ratio |
No configuration needed. The system adapts to any domain.
Example of an extracted knowledge section:

## Knowledge
**[solution] Connection pool exhaustion fix** (2024-02-10)
OOM at 200+ WebSocket connections → Redis cache for UserService
```yaml
spring.datasource.hikari.maximum-pool-size: 20
spring.datasource.hikari.idle-timeout: 30000
```

**[command] Debug database connections** (2024-02-10)
Monitor active PostgreSQL connections
```sql
SELECT count(*), state FROM pg_stat_activity GROUP BY state;
```
Python SDK (Mem0-compatible API)
```python
from mengram import Memory

m = Memory(
    vault_path="./my-brain",
    llm_provider="anthropic",
    api_key="sk-ant-...",
)

# Remember → extracts entities, facts, relations, AND knowledge
m.add("I work at Uzum Bank, backend on Spring Boot and PostgreSQL", user_id="ali")

# Semantic search (finds by MEANING, not just keywords)
results = m.search("database issues", user_id="ali")
for r in results:
    print(f"{r.memory.name} (score={r.score:.2f})")
    print(r.memory.facts)

# Get everything
all_memories = m.get_all(user_id="ali")

# Stats
print(m.stats(user_id="ali"))
```
Auto-Memory Middleware
Drop-in wrapper that automatically remembers and recalls:
```python
from mengram import Memory
from mengram_middleware import AutoMemory

m = Memory(vault_path="./vault", llm_provider="anthropic", api_key="sk-ant-...")
auto = AutoMemory(memory=m, user_id="ali")

# Automatically: recall context → inject → LLM response → remember new knowledge
response = auto.chat("Help me fix the PostgreSQL connection pool issue")
```
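The loop the middleware runs is recall → inject → respond → remember. Here is a self-contained sketch of that loop; a keyword matcher and an injected `llm` callable stand in for the real semantic search and LLM, and the class and method bodies are illustrative, not the `mengram_middleware` internals:

```python
from typing import Callable, List

class AutoMemorySketch:
    """Minimal auto-memory loop: recall -> inject -> respond -> remember."""

    def __init__(self, llm: Callable[[str], str]):
        self.llm = llm
        self.store: List[str] = []  # stands in for the vault

    def recall(self, query: str) -> List[str]:
        # naive keyword recall; the real system uses semantic search
        words = query.lower().split()
        return [m for m in self.store if any(w in m.lower() for w in words)]

    def remember(self, message: str) -> None:
        # the real system extracts entities/facts; here we store verbatim
        self.store.append(message)

    def chat(self, message: str) -> str:
        context = self.recall(message)              # recall
        prompt = f"Context: {context}\nUser: {message}"  # inject
        response = self.llm(prompt)                 # respond
        self.remember(message)                      # remember
        return response
```

Usage: after `auto.chat("We use PostgreSQL 15")`, a later `auto.chat("Which PostgreSQL version?")` gets the first message injected as context.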
How It Works
```
Conversation → Extractor (LLM) → Entities + Facts + Relations + Knowledge
      ↓
Vault Manager → .md files with [[wikilinks]]
      ↓
Vector Index → local embeddings (SQLite)
      ↓
MCP Server → instructions (compact index)
           → tools (recall, remember)
      ↓
Claude Desktop → auto-context every chat
```
Semantic Search (Hybrid)
3-level search strategy:
- Vector Search – `all-MiniLM-L6-v2` (80 MB, runs locally). Finds "database" when you search "PostgreSQL": by meaning, not keywords.
- Graph Expansion – follows `[[wikilinks]]` from top results. Found PostgreSQL? Also returns linked Project Alpha.
- Text Fallback – substring match for edge cases.
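The three levels can be shown end to end in a toy sketch. Hand-rolled cosine similarity and a wikilink adjacency dict stand in for the embedding model and the vault graph; the function names and the threshold value are assumptions, not Mengram's API:

```python
import math
from typing import Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query_vec: List[float], query_text: str,
                  embeddings: Dict[str, List[float]],
                  links: Dict[str, List[str]],
                  top_k: int = 2, threshold: float = 0.3) -> List[str]:
    # 1. Vector search: rank notes by similarity to the query embedding
    ranked = sorted(embeddings, key=lambda n: cosine(query_vec, embeddings[n]),
                    reverse=True)
    hits = [n for n in ranked[:top_k]
            if cosine(query_vec, embeddings[n]) >= threshold]
    # 2. Graph expansion: follow [[wikilinks]] out of the top hits
    for hit in list(hits):
        for linked in links.get(hit, []):
            if linked not in hits:
                hits.append(linked)
    # 3. Text fallback: substring match when vectors find nothing
    if not hits:
        hits = [n for n in embeddings if query_text.lower() in n.lower()]
    return hits
```

With `links = {"PostgreSQL": ["Project Alpha"]}`, a query that matches PostgreSQL also pulls in Project Alpha via the graph step, mirroring the behavior described above.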
Entity Types
| Type | Examples |
|---|---|
| person | Team members, contacts |
| project | Services, repos, products |
| technology | PostgreSQL, Spring Boot, Kafka |
| company | Employers, clients, partners |
| concept | Patterns, strategies, ideas |
File Format
---
type: technology
created: 2024-02-10 15:30
updated: 2024-02-11 09:15
tags: [technology]
---
# PostgreSQL

## Facts
- Main database, version 15
- Connection pool issue in [[Project Alpha]]

## Relations
- ← uses [[Project Alpha]]: Main DB

## Knowledge
**[solution] Connection pool exhaustion fix** (2024-02-10)
OOM at 200+ WebSocket → Redis cache for UserService
```yaml
spring.datasource.hikari.maximum-pool-size: 20
```
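Because notes are plain markdown with simple frontmatter, they are easy to parse with the standard library alone. A simplified sketch that handles flat `key: value` frontmatter and `## Section` bullet lists (not Mengram's actual parser; nested YAML is out of scope):

```python
import re
from typing import Dict, List, Tuple

def parse_note(text: str) -> Tuple[Dict[str, str], Dict[str, List[str]]]:
    """Split a note into frontmatter key/values and per-section bullet lists."""
    frontmatter: Dict[str, str] = {}
    body = text
    m = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if m:
        for line in m.group(1).splitlines():
            if ":" in line:
                key, value = line.split(":", 1)
                frontmatter[key.strip()] = value.strip()
        body = text[m.end():]
    sections: Dict[str, List[str]] = {}
    current = None
    for line in body.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif current and line.startswith("- "):
            sections[current].append(line[2:])
    return frontmatter, sections
```

The same plain-text property is what keeps the vault readable by Obsidian, `grep`, and any other tool.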
Configuration
```yaml
# config.yaml
vault_path: "./vault"

llm:
  provider: "anthropic"   # anthropic | openai | ollama | mock
  anthropic:
    api_key: "sk-ant-..."
    model: "claude-sonnet-4-20250514"

semantic_search:
  enabled: true
```
| Provider | Install | Cost |
|---|---|---|
| Anthropic (Claude) | `pip install mengram-ai[anthropic]` | API pricing |
| OpenAI (GPT) | `pip install mengram-ai[openai]` | API pricing |
| Ollama (local) | Install ollama | Free |
Roadmap
- Typed entity extraction (person, project, technology, company)
- Obsidian vault with `[[wikilinks]]` + YAML frontmatter
- MCP Server for Claude Desktop
- Semantic search with local embeddings
- Hybrid retrieval (vector + graph)
- Mem0-compatible Python SDK
- Auto-memory middleware
- Rich knowledge extraction (solutions, configs, formulas with code)
- Proactive context (auto-injected via MCP instructions)
- Entity deduplication
- Obsidian plugin (TypeScript)
- Web dashboard
- REST API
Contributing
```bash
git clone https://github.com/alibaizhanov/mengram
cd mengram
pip install -e ".[all,dev]"
pytest
```
License
MIT