Memory that helps AI agents learn from their mistakes. Store beliefs, search with semantic similarity, track decision traces.
Not a vector store with a wrapper. A three-layer cognitive engine that distills raw events into episodes, crystallizes them into beliefs, and tracks how those beliefs evolve — with background consolidation that improves memory while your agents sleep.
Table of Contents
- Installation
- Quick Start (30 seconds)
- Authentication
- Core Methods
- Decisions & Reasoning Traces
- Entities & Knowledge Graph
- Cognitive Sidecar (Always-On Memory)
- Memory Intelligence
- Error Handling
- Async Client
- MCP Server (Claude / Cursor)
- CLI
- Configuration
- How It Works
- Integrations
Installation
pip install memgraph-sdk
With optional extras:
pip install "memgraph-sdk[async]" # Async client (httpx)
pip install "memgraph-sdk[mcp]" # MCP server for Claude/Cursor
pip install "memgraph-sdk[all]" # Everything
Upgrading: Use pip install --upgrade memgraph-sdk (not --force-reinstall, which can cause dependency conflicts with other packages like CrewAI).
Works with: CrewAI, LangChain, OpenAI SDK, LlamaIndex — tested in shared environments.
Quick Start (30 seconds)
Step 1: Get your API key — sign up at memgraph.ai, or via CLI:
# Cloud (recommended)
pip install memgraph-sdk
memgraph setup --key mg_your_api_key
# Self-hosted
docker compose up -d # Start PostgreSQL + Memgraph
memgraph setup --key mg_your_key # Point to your server
Your API key starts with mg_. Find it in Settings > API Keys after signing up.
Step 2: Use it:
from memgraph_sdk import MemgraphClient
mg = MemgraphClient(api_key="mg_your_api_key")
# Store a memory (immediately searchable)
mg.remember("Customer prefers dark mode and uses PyTorch", user_id="alice")
# Search memories (returns scored results)
result = mg.search("What does Alice prefer?", user_id="alice")
print(result["results"][0]["content"])
# → "Customer prefers dark mode and uses PyTorch" (score: 0.78)
# Get all beliefs for a user
beliefs = mg.get_beliefs(user_id="alice")
Three lines to set up. The tenant_id is resolved automatically from your API key.
Authentication
export MEMGRAPH_API_KEY=mg_your_api_key
import os
from memgraph_sdk import MemgraphClient
mg = MemgraphClient(api_key=os.environ["MEMGRAPH_API_KEY"])
Get your API key at memgraph.ai — sign up and it's on the Settings > API Keys page.
Core Methods
remember() — Immediate storage
Creates a belief directly with a vector embedding. Immediately searchable.
mg.remember(
"User prefers dark mode",
user_id="alice",
category="preference", # "general", "decision", "architecture", "bug_fix", "preference"
domain="general", # optional domain tag
confidence=0.90, # 0.0 - 1.0 (default: 0.90)
)
add() — Async extraction pipeline
Sends text through the background extraction pipeline (entity extraction, belief crystallization, episode grouping). May take 5-10 seconds before results are searchable.
mg.add("Full conversation text here", user_id="alice")
Use remember() when you need immediate searchability. Use add() when you want the full extraction pipeline (entities, episodes, beliefs from raw text).
search() — Semantic memory retrieval
Returns scored results with semantic similarity, recency, confidence, frequency, and keyword signals.
result = mg.search("UI preferences", user_id="alice")
# Returns:
# {
# "results": [
# {"content": "User prefers dark mode", "score": 0.76, "metadata": {"key": "...", "domain": "general"}},
# ],
# "total": 1
# }
Optional parameters: agent_id (scope to a specific agent), limit (default 10).
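The exact weighting of these signals is internal to the service, but the idea can be pictured as a weighted blend. An illustrative sketch (the weights below are made-up assumptions, not the service's actual values):

```python
def blend_score(similarity, recency, confidence, frequency, keyword,
                weights=(0.5, 0.15, 0.15, 0.1, 0.1)):
    """Combine retrieval signals (each in [0, 1]) into a single score.

    The weights here are illustrative only.
    """
    signals = (similarity, recency, confidence, frequency, keyword)
    return sum(w * s for w, s in zip(weights, signals))

# A highly similar, recent, high-confidence memory ranks near the top
score = blend_score(similarity=0.9, recency=0.8, confidence=0.9,
                    frequency=0.5, keyword=0.7)
```

Because the weights sum to 1, the blended score stays in the same 0-1 range as the individual signals.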
get_beliefs() — List all beliefs
beliefs = mg.get_beliefs(user_id="alice", limit=50)
forget() / forget_all()
mg.forget(belief_id="uuid-of-the-belief") # Delete one belief by ID
mg.forget_all(user_id="alice") # Delete all beliefs for a user
mg.forget_all(user_id="alice", domain="work") # Delete beliefs in a domain
belief_history() / belief_timeline()
# How a specific belief changed over time
history = mg.belief_history(user_id="alice", key="preference_dark_mode_abc123")
# Timeline of all belief changes
timeline = mg.belief_timeline(user_id="alice", domain="work")
Context manager
with MemgraphClient(api_key="mg_your_key") as mg:
mg.remember("User likes Python", user_id="alice")
# Session closed automatically
Decisions & Reasoning Traces
Record, inspect, and debug AI agent decisions. This is Memgraph's unique feature — no other memory system tracks why your agent made a decision.
Record a decision
decision = mg.record_decision(
goal="Choose database for analytics service",
reasoning_steps=[
{"step": 1, "description": "Evaluated PostgreSQL vs MongoDB vs ClickHouse"},
{"step": 2, "description": "Ran cost analysis — $50/mo vs $200/mo vs $150/mo"},
{"step": 3, "description": "Checked team expertise — strong PostgreSQL skills"},
],
tools_used=[
{"tool_name": "benchmark_runner", "tool_input": "pg vs mongo", "tool_output": "pg wins"},
{"tool_name": "cost_calculator", "tool_input": "3 options", "tool_output": "$50/mo"},
],
beliefs_used=["PostgreSQL is our production DB", "Team has PostgreSQL expertise"],
confidence=0.92,
outcome="SUCCESS", # SUCCESS, FAILURE, PARTIAL, UNKNOWN, REVERTED
outcome_assessment="PostgreSQL selected, 3x faster than MongoDB for our workload",
agent_id="my-agent",
user_id="alice",
)
print(decision["id"]) # UUID of the decision
Field reference for reasoning_steps:
| Field | Type | Required | Description |
|---|---|---|---|
| step | int | yes | Step number |
| description | str | yes | What was done |
| tool | str | no | Tool used in this step |
| input | any | no | Input to the tool |
| output | any | no | Output from the tool |
| confidence | float | no | Step-level confidence |
Field reference for tools_used:
| Field | Type | Required | Description |
|---|---|---|---|
| tool_name | str | yes | Name of the tool |
| tool_input | any | no | What was passed to the tool |
| tool_output | any | no | What the tool returned |
Inspect & explain decisions
# Get decision by ID
d = mg.get_decision(decision["id"])
# Get full explanation (reasoning + beliefs + context snapshot)
explanation = mg.explain_decision(decision["id"])
# List all decisions (with optional filters)
all_decisions = mg.list_decisions(agent_id="my-agent", outcome="FAILURE", limit=20)
Entities & Knowledge Graph
Build a knowledge graph of people, organizations, products, and concepts.
# Create entities
person = mg.create_entity(
name="John Smith",
entity_type="person",
properties={"role": "tech lead", "preference": "TypeScript"},
)
org = mg.create_entity(
name="Acme Corp",
entity_type="organization",
properties={"industry": "technology"},
)
# Create a relationship
mg.create_relationship(
source_entity_id=person["id"],
target_entity_id=org["id"],
relation_type="works_at",
confidence=0.95,
valid_from="2025-01-01", # optional temporal bounds
)
# Search entities
results = mg.search_entities("tech lead")
# Traverse the graph
graph = mg.traverse_graph(entity_ids=[person["id"]], max_depth=2)
# List & manage
entities = mg.list_entities()
relationships = mg.list_relationships(entity_id=person["id"])
mg.delete_entity(person["id"])
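traverse_graph runs server-side; conceptually it is a bounded breadth-first walk over relationships. A local sketch of the same idea over a plain adjacency list (illustrative, not the SDK's implementation):

```python
from collections import deque

def traverse(adjacency, start_ids, max_depth=2):
    """Breadth-first traversal up to max_depth hops from start_ids.

    adjacency maps entity id -> list of neighbouring entity ids.
    Returns the set of reachable entity ids (including the starts).
    """
    seen = set(start_ids)
    frontier = deque((eid, 0) for eid in start_ids)
    while frontier:
        eid, depth = frontier.popleft()
        if depth == max_depth:
            continue  # budget exhausted along this path
        for neighbour in adjacency.get(eid, []):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return seen

graph = {"john": ["acme"], "acme": ["anna"], "anna": ["zoe"]}
reachable = traverse(graph, ["john"], max_depth=2)
# → {"john", "acme", "anna"}  (zoe is 3 hops away, beyond the depth budget)
```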
Cognitive Sidecar (Always-On Memory)
Drop-in middleware that auto-recalls before every LLM call and auto-learns after.
# Pre-flight: recall relevant memories before sending to LLM
context = mg.sidecar_pre_flight(
message="What database should I use?",
user_id="alice",
token_budget=4000, # max tokens for injected context
)
# → Returns memory context to inject as system message
# Post-flight: extract learnable signals from the conversation
mg.sidecar_post_flight(
messages=[
{"role": "user", "content": "What database should I use?"},
{"role": "assistant", "content": "PostgreSQL with pgvector."},
],
user_id="alice",
)
# → Learning happens in background, returns immediately
# Process: combined pre-flight + post-flight in one call (recommended)
result = mg.sidecar_process(
messages=[
{"role": "user", "content": "What database should I use?"},
{"role": "assistant", "content": "PostgreSQL with pgvector."},
],
user_id="alice",
)
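The token_budget caps how much recalled context is injected. The same idea can be sketched client-side with a rough 4-characters-per-token heuristic (an approximation for illustration, not the service's tokenizer):

```python
def fit_to_budget(memories, token_budget, chars_per_token=4):
    """Keep the highest-scoring memories until the token budget is spent."""
    selected, used = [], 0
    for memory in sorted(memories, key=lambda m: m["score"], reverse=True):
        cost = len(memory["content"]) // chars_per_token + 1  # rough estimate
        if used + cost > token_budget:
            break
        selected.append(memory)
        used += cost
    return selected

context = fit_to_budget(
    [{"content": "User prefers dark mode", "score": 0.9},
     {"content": "User uses PyTorch", "score": 0.7}],
    token_budget=8,
)
# Only the top-scoring memory fits an 8-token budget here
```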
Memory Intelligence
# Health check
mg.ping()
# Memory health stats (belief count, episode count, etc.)
mg.health()
# MCIS — Memgraph Cognitive Integrity Score (0-100)
score = mg.mcis()
# → {"mcis": 79.3, "grade": "B", "sub_scores": {"accuracy": 100, ...}}
# MCIS history over time
mg.mcis_history()
# Contradiction detection
mg.contradictions()
# Evaluate retrieval quality for a query
mg.evaluate("What is our database?", user_id="alice")
# Run a benchmark scenario
mg.benchmark("contradiction_detection")
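The MCIS formula itself is internal to the service. As a purely hypothetical sketch of how a 0-100 composite with a letter grade could be derived from sub-scores (the weights and grade cutoffs below are assumptions):

```python
def grade(sub_scores, weights=None):
    """Weighted average of 0-100 sub-scores mapped to a letter grade.

    Weights and cutoffs are illustrative, not the service's formula.
    """
    weights = weights or {k: 1.0 for k in sub_scores}
    total = sum(weights.values())
    score = sum(sub_scores[k] * weights[k] for k in sub_scores) / total
    letter = "A" if score >= 90 else "B" if score >= 75 else "C" if score >= 60 else "D"
    return round(score, 1), letter

result = grade({"accuracy": 100, "consistency": 70, "coverage": 68})
# → (79.3, "B")
```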
Error Handling
from memgraph_sdk.exceptions import (
MemgraphAuthError, # 401/403 — bad API key
MemgraphConnectionError, # Network error / timeout
MemgraphRateLimitError, # 429 — e.retry_after has wait time
MemgraphValidationError, # 422 — bad request parameters
MemgraphAPIError, # 5xx — server error (auto-retried)
)
try:
result = mg.search("query", user_id="alice")
except MemgraphRateLimitError as e:
print(f"Rate limited. Retry in {e.retry_after}s")
except MemgraphAuthError:
print("Check your MEMGRAPH_API_KEY")
The SDK automatically retries transient errors (500, 502, 503, 504) with exponential backoff.
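The retry is built into the SDK, but if you want the same behaviour around your own calls, a generic exponential-backoff helper looks like this (a sketch; delays and retry counts are illustrative):

```python
import time

def with_backoff(fn, retries=3, base_delay=0.5, retryable=(ConnectionError,)):
    """Call fn, retrying retryable errors with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except retryable:
            if attempt == retries:
                raise  # out of retries, surface the error
            time.sleep(base_delay * (2 ** attempt))

# Demo with a stub that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = with_backoff(flaky, base_delay=0.01)
# → "ok" after two retried failures
```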
Async Client
from memgraph_sdk import AsyncMemgraphClient
async with AsyncMemgraphClient(api_key="mg_your_api_key") as mg:
await mg.remember("User prefers dark mode", user_id="alice")
result = await mg.search("preferences", user_id="alice")
Requires: pip install "memgraph-sdk[async]"
MCP Server (Claude / Cursor)
Give your AI IDE persistent memory with one command:
memgraph setup --key mg_your_api_key
Auto-detects Cursor, Claude Desktop, VS Code. Or configure manually:
{
"mcpServers": {
"memgraph": {
"command": "python3",
"args": ["-m", "memgraph_sdk.mcp"],
"env": { "MEMGRAPH_API_KEY": "mg_your_api_key" }
}
}
}
CLI
memgraph setup --key mg_your_api_key # Set up MCP for your IDE
memgraph remember "We chose PostgreSQL" # Store a memory
memgraph recall "database choice" # Search memories
memgraph status # Check connection
Configuration
Cloud (default)
mg = MemgraphClient(api_key="mg_your_key")
# Connects to https://api.memgraph.ai/v1
Self-hosted
mg = MemgraphClient(
api_key="mg_your_key",
base_url="http://your-server:8001/v1",
)
Environment variables
export MEMGRAPH_API_KEY=mg_your_key
export MEMGRAPH_API_URL=http://your-server:8001/v1 # optional
URL resolution priority:
1. base_url parameter (highest)
2. MEMGRAPH_API_URL environment variable
3. https://api.memgraph.ai/v1 (default)
Using a .env file
Create a .env file (add to .gitignore!):
# .env
MEMGRAPH_API_KEY=mg_your_key
MEMGRAPH_API_URL=https://api.memgraph.ai/v1 # or your self-hosted URL
Load it in your app:
from dotenv import load_dotenv
load_dotenv()
mg = MemgraphClient(api_key=os.environ["MEMGRAPH_API_KEY"])
Rate limits
| Tier | Requests/min | Beliefs | Entities |
|---|---|---|---|
| Free | 120 | 1,000 | 100 |
| Pro | 600 | 50,000 | 5,000 |
| Enterprise | Unlimited | Unlimited | Unlimited |
The SDK auto-retries on 429 with exponential backoff. Catch MemgraphRateLimitError for custom handling.
Input validation
The SDK validates inputs before sending requests:
- API key must start with mg_ — raises MemgraphValidationError if not
- user_id must be a non-empty string — raises MemgraphValidationError if empty
- ping() validates both connectivity and API key authenticity
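The same checks are easy to replicate in your own code before constructing the client. A sketch (MemgraphValidationError is stubbed here so the example is self-contained):

```python
class MemgraphValidationError(ValueError):
    """Stub standing in for memgraph_sdk.exceptions.MemgraphValidationError."""

def validate_inputs(api_key, user_id):
    """Mirror the SDK's client-side input checks."""
    if not api_key.startswith("mg_"):
        raise MemgraphValidationError("API key must start with 'mg_'")
    if not isinstance(user_id, str) or not user_id:
        raise MemgraphValidationError("user_id must be a non-empty string")

validate_inputs("mg_abc123", "alice")  # passes silently
```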
How It Works
Raw Input → Events → Episodes → Beliefs → Decisions
│ │ │ │
(short-term) (grouped) (long-term) (traced)
│
Cognitive Dreaming
(consolidation while idle)
- Events — Raw, immutable records with vector embeddings
- Episodes — Auto-grouped sequences with LLM summaries
- Beliefs — Extracted facts, preferences, decisions with confidence scores and types (fact / belief / tenet)
- Decisions — Full reasoning traces: goal → steps → tools → beliefs → outcome
- Cognitive Dreaming — Background worker that consolidates, deduplicates, and resolves contradictions
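Consolidation can be pictured as merging beliefs that share a key, keeping the most confident one, and flagging keys whose values disagree. An illustrative sketch (not the actual background worker):

```python
def consolidate(beliefs):
    """Deduplicate beliefs by key and surface contradictions.

    Keeps the highest-confidence belief per key; a key whose merged
    candidates carry different content is reported as a contradiction.
    """
    merged, contradictions = {}, []
    for belief in beliefs:
        key = belief["key"]
        current = merged.get(key)
        if current is None:
            merged[key] = belief
            continue
        if current["content"] != belief["content"]:
            contradictions.append(key)
        if belief["confidence"] > current["confidence"]:
            merged[key] = belief
    return list(merged.values()), contradictions

beliefs, conflicts = consolidate([
    {"key": "db", "content": "We use PostgreSQL", "confidence": 0.9},
    {"key": "db", "content": "We use MongoDB", "confidence": 0.4},
])
# → one surviving belief (PostgreSQL), with "db" flagged as a contradiction
```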
Integrations
Works with any AI framework:
| Framework | Integration | Docs |
|---|---|---|
| OpenAI Agents SDK | MemgraphAgentHooks, MemgraphRunHooks | Docs |
| LangChain / LangGraph | Memory + Retriever | Docs |
| CrewAI | Search + Remember tools | Docs |
| Claude Code (MCP) | memgraph setup | Docs |
| Cursor / VS Code | MCP auto-config | Docs |
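Whatever the framework, an integration ultimately boils down to a callable plus a schema. A framework-agnostic sketch of exposing memory search as a tool (the tool name, schema, and wiring here are illustrative, not one of the shipped integrations):

```python
# Hypothetical tool definition in the JSON-schema style most
# function-calling frameworks accept
SEARCH_MEMORY_TOOL = {
    "name": "search_memory",
    "description": "Search the user's long-term memory for relevant beliefs.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "What to look for"},
            "user_id": {"type": "string", "description": "Whose memory to search"},
        },
        "required": ["query", "user_id"],
    },
}

def search_memory(query, user_id, client=None):
    """Tool body: delegate to a MemgraphClient when one is wired up."""
    if client is None:  # no client configured in this sketch
        return []
    return client.search(query, user_id=user_id)["results"]
```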
Contributing
Contributions welcome. See CONTRIBUTING.md.
Security
Report vulnerabilities to security@memgraph.ai. See SECURITY.md.
License
MIT — see LICENSE.