
🧠 Mengram

AI memory layer for apps. Open-source Mem0 alternative.

Build persistent memory for AI agents and apps. Knowledge graph, semantic search, Cloud API & MCP server. Use it locally with an Obsidian vault or via the cloud.

Like Mem0, but you own your data, and it actually saves your solutions with code, not just "user uses PostgreSQL".


Why Mengram?

| | Mem0 | Basic Memory | Mengram |
|---|---|---|---|
| Storage | Cloud vectors | Flat markdown | Typed knowledge graph in .md |
| Entity types | ❌ Flat facts | ❌ One note per chat | ✅ Person, Project, Technology, Company |
| Relations | ❌ | ❌ | ✅ works_at, uses, depends_on |
| Rich knowledge | ❌ | ❌ | ✅ Solutions, configs, formulas with code |
| Proactive context | ❌ | ❌ | ✅ Auto-injected, no manual recall |
| Obsidian graph | ❌ | Partial | ✅ Full [[wikilinks]] + graph view |
| Semantic search | ✅ Cloud | ❌ | ✅ Local embeddings (384D) |
| Own your data | ❌ Cloud lock-in | ✅ | ✅ Plain .md files |
| LLM agnostic | ❌ | Partial | ✅ Claude / GPT / Ollama |
| Pricing | $24/mo+ | $14/mo | Free & open source |

What it actually does

You chat with Claude (or any LLM). Mengram automatically:

  1. Extracts entities, facts, relationships, and rich knowledge (solutions, commands, configs with code)
  2. Creates typed .md files in your Obsidian vault
  3. Links everything with [[wikilinks]] and YAML frontmatter
  4. Indexes with local vector embeddings for semantic search
  5. Proactively injects relevant context into every conversation, no manual recall needed
You: "We fixed the OOM with Redis cache. Config: hikari.pool-size=20"

         ┌──────────────────────────────────────┐
         │  vault/PostgreSQL.md                 │
         │  type: technology                    │
         │                                      │
         │  ## Facts                            │
         │  - Main database, version 15         │
         │                                      │
         │  ## Knowledge                        │
         │  **[solution] Connection pool fix**  │
         │  OOM at 200+ WebSocket → Redis cache │
         │  ```yaml                             │
         │  spring.datasource.hikari.           │
         │    maximum-pool-size: 20             │
         │  ```                                 │
         └──────────────────────────────────────┘

Next time you ask "How did we fix the OOM?" → Claude already knows, with the config.
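The note in the diagram above could be produced by a small renderer. A minimal sketch, assuming hypothetical field names (`render_note` is illustrative, not Mengram's API):

```python
def render_note(name, entity_type, facts, knowledge):
    """Render an extracted entity as a typed Obsidian-style note."""
    lines = [
        "---",
        f"type: {entity_type}",
        f"tags: [{entity_type}]",
        "---",
        "",
        f"# {name}",
        "",
        "## Facts",
    ]
    lines += [f"- {fact}" for fact in facts]
    lines += ["", "## Knowledge"]
    for item in knowledge:
        lines.append(f"**[{item['kind']}] {item['title']}**")
        lines.append(item["summary"])
    return "\n".join(lines)

note = render_note(
    "PostgreSQL", "technology",
    facts=["Main database, version 15"],
    knowledge=[{"kind": "solution", "title": "Connection pool fix",
                "summary": "OOM at 200+ WebSocket -> Redis cache"}],
)
```

Because the output is plain markdown, any editor (or Obsidian's graph view) can read it without Mengram installed.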


Quick Start

1. Install

```shell
pip install "mengram-ai[all]"
```

2. Setup (one command)

```shell
mengram init
```

This will:

  • Ask for your LLM provider and API key
  • Create ~/.mengram/config.yaml and vault
  • Auto-configure Claude Desktop MCP integration
  • Tell you to restart Claude Desktop

That's it. Talk to Claude: it remembers automatically and always has context.

Non-interactive:

```shell
mengram init --provider anthropic --api-key sk-ant-...
```

Other commands:

```shell
mengram status    # Check setup
mengram stats     # Vault statistics
mengram server    # Start MCP server manually
```

Proactive Context (v0.5.0)

The killer feature. Claude Desktop gets your knowledge profile automatically: no manual attach, no "recall", no "remember what I told you".

How it works:

```
Claude Desktop starts
  → MCP server reads vault
  → Generates compact knowledge index (scales to 1000+ notes)
  → Injects into Claude's instructions
  → Warms up semantic search model

You open any chat → Claude already knows:
  - Your tech stack, projects, team
  - Past solutions with code/configs
  - Entity relationships

You ask a question → Claude auto-calls recall()
  → Gets full details + code artifacts
  → Answers with context
```
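The index-generation step could look roughly like this. The real index format is internal to Mengram, so the vault layout and one-line-per-note format here are assumptions:

```python
def build_index(vault):
    """Build a compact one-line-per-note index for injection into instructions.

    vault: {note_name: {"type": ..., "facts": [...]}}
    """
    lines = []
    for name, note in sorted(vault.items()):
        # keep only the first fact so the index stays small at 1000+ notes
        summary = note["facts"][0] if note["facts"] else ""
        lines.append(f"[{note['type']}] {name}: {summary}")
    return "\n".join(lines)

vault = {
    "PostgreSQL": {"type": "technology", "facts": ["Main database, version 15"]},
    "Project Alpha": {"type": "project", "facts": ["Backend service"]},
}
index = build_index(vault)
```

A one-line-per-note index trades detail for token budget; the full details are fetched later via `recall()` only when needed.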

Rich Knowledge (v0.5.0)

Not just "user uses PostgreSQL", but solutions with code, commands, formulas, and configs.

The LLM automatically chooses the knowledge type based on context:

| Domain | Knowledge types | Example |
|---|---|---|
| Developer | solution, command, config, debug | HikariCP pool config with YAML |
| Doctor | treatment, lab_result, diagnosis | Metformin 500mg dosage |
| Scientist | experiment, formula, hypothesis | Protein denaturation at 60°C |
| Student | formula, example, insight | Bayes' theorem with example |
| Chef | recipe, tip, substitution | Sourdough hydration ratio |

No configuration needed. The system adapts to any domain.

## Knowledge

**[solution] Connection pool exhaustion fix** (2024-02-10)
OOM at 200+ WebSocket connections → Redis cache for UserService
```yaml
spring.datasource.hikari.maximum-pool-size: 20
spring.datasource.hikari.idle-timeout: 30000
```

**[command] Debug database connections** (2024-02-10)
Monitor active PostgreSQL connections
```sql
SELECT count(*), state FROM pg_stat_activity GROUP BY state;
```
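Items like these could be modeled as a simple typed record. An illustrative sketch, not Mengram's internal schema:

```python
from dataclasses import dataclass


@dataclass
class KnowledgeItem:
    kind: str       # e.g. "solution", "command", "config", "formula"
    title: str
    summary: str
    code: str = ""  # optional code artifact
    lang: str = ""  # language tag for the code fence

    def to_markdown(self) -> str:
        """Render the item in the markdown layout shown above."""
        md = f"**[{self.kind}] {self.title}**\n{self.summary}"
        if self.code:
            fence = "`" * 3  # built dynamically to avoid a literal fence here
            md += f"\n{fence}{self.lang}\n{self.code}\n{fence}"
        return md


item = KnowledgeItem(
    kind="command",
    title="Debug database connections",
    summary="Monitor active PostgreSQL connections",
    code="SELECT count(*), state FROM pg_stat_activity GROUP BY state;",
    lang="sql",
)
rendered = item.to_markdown()
```

Keeping the code artifact as a separate field lets the store render it as a fenced block and return it verbatim on recall.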

Python SDK (Mem0-compatible API)

```python
from mengram import Memory

m = Memory(
    vault_path="./my-brain",
    llm_provider="anthropic",
    api_key="sk-ant-..."
)

# Remember: extracts entities, facts, relations, AND knowledge
m.add("I work at Uzum Bank, backend on Spring Boot and PostgreSQL", user_id="ali")

# Semantic search (finds by MEANING, not just keywords)
results = m.search("database issues", user_id="ali")
for r in results:
    print(f"{r.memory.name} (score={r.score:.2f})")
    print(r.memory.facts)

# Get everything
all_memories = m.get_all(user_id="ali")

# Stats
print(m.stats(user_id="ali"))
```

Auto-Memory Middleware

Drop-in wrapper that automatically remembers and recalls:

```python
from mengram import Memory
from mengram_middleware import AutoMemory

m = Memory(vault_path="./vault", llm_provider="anthropic", api_key="sk-ant-...")
auto = AutoMemory(memory=m, user_id="ali")

# Automatically: recall context → inject → LLM response → remember new knowledge
response = auto.chat("Help me fix the PostgreSQL connection pool issue")
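Under the hood, an auto-memory wrapper runs a recall → inject → respond → remember loop. A minimal sketch with stand-in memory and LLM objects (`AutoMemorySketch` and `FakeMemory` are hypothetical, not the real `AutoMemory` internals):

```python
class AutoMemorySketch:
    def __init__(self, memory, llm):
        self.memory = memory  # object with search(query) and add(text)
        self.llm = llm        # callable: prompt -> response text

    def chat(self, message: str) -> str:
        context = self.memory.search(message)               # 1. recall
        prompt = f"Context:\n{context}\n\nUser: {message}"  # 2. inject
        response = self.llm(prompt)                         # 3. respond
        self.memory.add(f"{message}\n{response}")           # 4. remember
        return response


class FakeMemory:
    def __init__(self):
        self.items = ["PostgreSQL: pool-size fix -> 20"]

    def search(self, query):
        return "\n".join(self.items)

    def add(self, text):
        self.items.append(text)


auto = AutoMemorySketch(FakeMemory(), llm=lambda p: "Set maximum-pool-size to 20.")
answer = auto.chat("How did we fix the pool issue?")
```

Because steps 1 and 4 wrap the LLM call, the application code only ever sees `chat()`.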

How It Works

```
Conversation → Extractor (LLM) → Entities + Facts + Relations + Knowledge
                                          ↓
                                   Vault Manager → .md files with [[wikilinks]]
                                          ↓
                                   Vector Index → local embeddings (SQLite)
                                          ↓
                                   MCP Server → instructions (compact index)
                                              → tools (recall, remember)
                                          ↓
                                   Claude Desktop → auto-context every chat
```

Semantic Search (Hybrid)

3-level search strategy:

  1. Vector Search: all-MiniLM-L6-v2 (80 MB, runs locally). Finds "database" when you search "PostgreSQL", by meaning, not keywords.
  2. Graph Expansion: follows [[wikilinks]] from top results. Found PostgreSQL? Also returns the linked Project Alpha.
  3. Text Fallback: substring match for edge cases.
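The three levels can be sketched with toy bag-of-words vectors standing in for the real embeddings; the scoring details are illustrative, not Mengram's implementation:

```python
import math


def embed(text):
    """Toy bag-of-words stand-in for a real embedding model."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec


def cosine(a, b):
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def hybrid_search(query, notes, links, top_k=2):
    # 1. vector search over note bodies
    qv = embed(query)
    scores = {name: cosine(qv, embed(text)) for name, text in notes.items()}
    hits = [n for n, s in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k] if s > 0]
    # 2. graph expansion: follow wikilinks out of the top hits
    expanded = [t for h in hits for t in links.get(h, []) if t not in hits]
    # 3. text fallback: substring match when vectors find nothing
    if not hits:
        hits = [n for n, text in notes.items() if query.lower() in text.lower()]
    return hits + expanded


notes = {
    "PostgreSQL": "main database version 15 connection pool",
    "Project Alpha": "backend service",
    "Kafka": "event streaming",
}
links = {"PostgreSQL": ["Project Alpha"]}
results = hybrid_search("database connection issues", notes, links)
```

Graph expansion is what surfaces Project Alpha here even though its note never mentions databases; only the link carries that association.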

Entity Types

| Type | Examples |
|---|---|
| person | Team members, contacts |
| project | Services, repos, products |
| technology | PostgreSQL, Spring Boot, Kafka |
| company | Employers, clients, partners |
| concept | Patterns, strategies, ideas |

File Format

---
type: technology
created: 2024-02-10 15:30
updated: 2024-02-11 09:15
tags: [technology]
---

# PostgreSQL

## Facts

- Main database, version 15
- Connection pool issue in [[Project Alpha]]

## Relations

- ← uses [[Project Alpha]]: Main DB

## Knowledge

**[solution] Connection pool exhaustion fix** (2024-02-10)
OOM at 200+ WebSocket → Redis cache for UserService
```yaml
spring.datasource.hikari.maximum-pool-size: 20
```
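Reading such a note back into structured data takes only a small parser. This sketch assumes the simple frontmatter layout shown above rather than full YAML:

```python
import re


def parse_note(text):
    """Parse frontmatter, ## Facts bullets, and [[wikilinks]] from a note."""
    meta, facts = {}, []
    wikilinks = re.findall(r"\[\[([^\]]+)\]\]", text)
    body = text
    if text.startswith("---"):
        header, _, body = text[3:].partition("---")
        for line in header.strip().splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    in_facts = False
    for line in body.splitlines():
        if line.startswith("## "):
            in_facts = line.strip() == "## Facts"
        elif in_facts and line.startswith("- "):
            facts.append(line[2:])
    return {"meta": meta, "facts": facts, "links": wikilinks}


note = """---
type: technology
tags: [technology]
---

# PostgreSQL

## Facts

- Main database, version 15
- Connection pool issue in [[Project Alpha]]

## Relations

- uses [[Project Alpha]]: Main DB
"""
parsed = parse_note(note)
```

Because the format is plain markdown, the same file round-trips through Obsidian, git, and any other tool without loss.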

Configuration

```yaml
# config.yaml
vault_path: "./vault"

llm:
  provider: "anthropic"  # anthropic | openai | ollama | mock
  anthropic:
    api_key: "sk-ant-..."
    model: "claude-sonnet-4-20250514"

semantic_search:
  enabled: true
```

| Provider | Install | Cost |
|---|---|---|
| Anthropic (Claude) | `pip install "mengram-ai[anthropic]"` | API pricing |
| OpenAI (GPT) | `pip install "mengram-ai[openai]"` | API pricing |
| Ollama (local) | Install ollama | Free |

Roadmap

  • Typed entity extraction (person, project, technology, company)
  • Obsidian vault with [[wikilinks]] + YAML frontmatter
  • MCP Server for Claude Desktop
  • Semantic search with local embeddings
  • Hybrid retrieval (vector + graph)
  • Mem0-compatible Python SDK
  • Auto-memory middleware
  • Rich knowledge extraction (solutions, configs, formulas with code)
  • Proactive context (auto-injected via MCP instructions)
  • Entity deduplication
  • Obsidian plugin (TypeScript)
  • Web dashboard
  • REST API

Contributing

```shell
git clone https://github.com/alibaizhanov/mengram
cd mengram
pip install -e ".[all,dev]"
pytest
```

License

MIT
