
Project description

Memgraph AI

Memory that helps AI agents learn from their mistakes


Website · Examples · Issues


Not a vector store with a wrapper. A three-layer cognitive engine that distills raw events into episodes, crystallizes them into beliefs, and tracks how those beliefs evolve — with background consolidation that improves memory while your agents sleep.


Installation

pip install memgraph-sdk

With optional extras:

pip install "memgraph-sdk[async]"   # Async client (httpx)
pip install "memgraph-sdk[mcp]"     # MCP server for Claude/Cursor
pip install "memgraph-sdk[all]"     # Everything

Give your agent a memory in 30 seconds

from memgraph_sdk import MemgraphClient

mg = MemgraphClient(api_key="mg_your_api_key")

# Store a memory (immediately searchable)
mg.remember("Customer prefers dark mode and uses PyTorch", user_id="alice")

# Search memories
result = mg.search("What does Alice prefer?", user_id="alice")
print(result["results"][0]["content"])
# → "Customer prefers dark mode and uses PyTorch" (score: 0.78)

# Get all beliefs for a user
beliefs = mg.get_beliefs(user_id="alice")

One line to connect, one to store, one to search. Memories are searchable immediately, and the tenant_id is resolved automatically from your API key.

Authentication

Set your API key as an environment variable so you never have to worry about committing credentials:

export MEMGRAPH_API_KEY=mg_your_api_key
import os
from memgraph_sdk import MemgraphClient

mg = MemgraphClient(api_key=os.environ["MEMGRAPH_API_KEY"])

Or pass it directly (not recommended for production):

mg = MemgraphClient(api_key="mg_your_api_key")

Get your API key at memgraph.ai.

Core Methods

# Store (immediately searchable with vector embedding)
mg.remember("User prefers dark mode", user_id="alice", category="preference")

# Store (async pipeline — extraction + consolidation)
mg.add("Full conversation text here", user_id="alice")

# Search (returns scored results with semantic similarity)
result = mg.search("UI preferences", user_id="alice")
# → {"results": [{"content": "...", "score": 0.76, "metadata": {...}}], "total": 1}

# Get all beliefs for a user
beliefs = mg.get_beliefs(user_id="alice", limit=50)

# Health check
status = mg.ping()

Context manager

with MemgraphClient(api_key="mg_your_key") as mg:
    mg.remember("User likes Python", user_id="alice")
    # Session is automatically closed when block exits

Error Handling

from memgraph_sdk import MemgraphClient
from memgraph_sdk.exceptions import (
    MemgraphAuthError,
    MemgraphConnectionError,
    MemgraphRateLimitError,
    MemgraphValidationError,
    MemgraphAPIError,
)

mg = MemgraphClient(api_key="mg_your_key")

try:
    result = mg.search("query", user_id="alice")
except MemgraphAuthError:
    # Invalid API key (401/403)
    print("Check your MEMGRAPH_API_KEY")
except MemgraphRateLimitError as e:
    # Too many requests (429) — retry after e.retry_after seconds
    print(f"Rate limited. Retry in {e.retry_after}s")
except MemgraphConnectionError:
    # Server unreachable or timeout
    print("Cannot reach Memgraph server")
except MemgraphValidationError as e:
    # Bad request (422) — check your parameters
    print(f"Validation error: {e}")
except MemgraphAPIError as e:
    # Server error (5xx) — transient, retried automatically
    print(f"Server error {e.status_code}: {e}")

The SDK automatically retries transient errors (500, 502, 503, 504) with exponential backoff. Auth and validation errors are raised immediately.
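Rate-limit errors (429) are not retried automatically, so a small wrapper around `retry_after` is a common pattern. A minimal sketch, using a local `RateLimited` stand-in for `MemgraphRateLimitError` (assumed here to carry a `retry_after` attribute, as shown above) so it runs without network access:

```python
import time

class RateLimited(Exception):
    """Stand-in for MemgraphRateLimitError (assumption: has retry_after)."""
    def __init__(self, retry_after=1.0):
        super().__init__("rate limited")
        self.retry_after = retry_after

def with_rate_limit_retry(fn, max_attempts=3):
    """Call fn(), sleeping retry_after seconds after each rate-limit error."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except RateLimited as e:
            if attempt == max_attempts:
                raise
            time.sleep(e.retry_after)

# Demo: a callable that is rate-limited once, then succeeds.
calls = {"n": 0}
def flaky_search():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RateLimited(retry_after=0.01)
    return {"results": [], "total": 0}

result = with_rate_limit_retry(flaky_search)
print(result)  # → {'results': [], 'total': 0}
```

In real code, `fn` would be a lambda wrapping `mg.search(...)` and the `except` clause would catch `MemgraphRateLimitError`.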

Async Client

from memgraph_sdk import AsyncMemgraphClient

async with AsyncMemgraphClient(api_key="mg_your_api_key") as mg:
    await mg.remember("User prefers dark mode", user_id="alice")
    result = await mg.search("preferences", user_id="alice")

Requires: pip install "memgraph-sdk[async]"
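The main payoff of the async client is fanning out several lookups concurrently. A sketch of the pattern with `asyncio.gather`, using a tiny stub in place of `AsyncMemgraphClient` (an illustrative assumption, so the example runs offline; the `search` signature mirrors the one documented above):

```python
import asyncio

class FakeAsyncClient:
    """Stand-in for AsyncMemgraphClient so the pattern runs without network."""
    async def search(self, query, user_id):
        await asyncio.sleep(0.01)  # simulate network latency
        return {"query": query, "total": 0}

async def main():
    mg = FakeAsyncClient()
    # Fire all three searches at once instead of awaiting each in turn.
    return await asyncio.gather(
        mg.search("UI preferences", user_id="alice"),
        mg.search("database choice", user_id="alice"),
        mg.search("framework choice", user_id="alice"),
    )

results = asyncio.run(main())
print([r["query"] for r in results])
```

With the real client, the stub disappears and the same `gather` call runs the three HTTP requests concurrently.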

MCP Server (Claude / Cursor)

Give your AI IDE persistent memory with one command:

memgraph setup --key mg_your_api_key

Auto-detects Cursor, Claude Desktop, VS Code. Or configure manually:

{
  "mcpServers": {
    "memgraph": {
      "command": "python3",
      "args": ["-m", "memgraph_sdk.mcp"],
      "env": { "MEMGRAPH_API_KEY": "mg_your_api_key" }
    }
  }
}

CLI

memgraph setup --key mg_your_api_key    # Set up MCP for your IDE
memgraph remember "We chose PostgreSQL"  # Store a memory
memgraph recall "database choice"        # Search memories
memgraph status                          # Check connection

Configuration

Cloud (default)

mg = MemgraphClient(api_key="mg_your_key")
# Connects to https://api.memgraph.ai/v1

Self-hosted

mg = MemgraphClient(
    api_key="mg_your_key",
    base_url="http://your-server:8001/v1",
)

Environment variables

export MEMGRAPH_API_KEY=mg_your_key
export MEMGRAPH_API_URL=http://your-server:8001/v1  # optional

URL resolution priority:

  1. base_url parameter (highest)
  2. MEMGRAPH_API_URL environment variable
  3. https://api.memgraph.ai/v1 (default)
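The precedence above boils down to a one-line resolver. This is an illustrative sketch of the documented behavior, not the SDK's internal code:

```python
import os

DEFAULT_API_URL = "https://api.memgraph.ai/v1"

def resolve_base_url(base_url=None):
    """Explicit parameter wins, then MEMGRAPH_API_URL, then the cloud default."""
    return base_url or os.environ.get("MEMGRAPH_API_URL") or DEFAULT_API_URL

os.environ.pop("MEMGRAPH_API_URL", None)
print(resolve_base_url())                            # → https://api.memgraph.ai/v1
os.environ["MEMGRAPH_API_URL"] = "http://your-server:8001/v1"
print(resolve_base_url())                            # → http://your-server:8001/v1
print(resolve_base_url("http://localhost:8001/v1"))  # → http://localhost:8001/v1
```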

How It Works

Raw Input → Events → Episodes → Beliefs
              │          │          │
          (short-term) (grouped)  (long-term)
                                     │
                              Cognitive Dreaming
                         (consolidation while idle)
  • Events — Raw, immutable records with vector embeddings
  • Episodes — Auto-grouped sequences with LLM summaries
  • Beliefs — Extracted facts, preferences, decisions with confidence scores and types (fact / belief / tenet)
  • Cognitive Dreaming — Background worker that consolidates, deduplicates, and resolves contradictions
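Since beliefs carry a type and a confidence score, a common client-side step is to bucket them before building a prompt. A small sketch, assuming each record from `get_beliefs` exposes `content`, `type`, and `confidence` fields (an assumption about the response schema, following the types listed above):

```python
from collections import defaultdict

def group_beliefs(beliefs, min_confidence=0.5):
    """Bucket belief records by type, dropping low-confidence ones."""
    grouped = defaultdict(list)
    for b in beliefs:
        if b.get("confidence", 0.0) >= min_confidence:
            grouped[b.get("type", "belief")].append(b["content"])
    return dict(grouped)

# Sample records shaped like the assumed schema.
sample = [
    {"content": "Prefers dark mode", "type": "fact", "confidence": 0.9},
    {"content": "Uses PyTorch", "type": "fact", "confidence": 0.8},
    {"content": "May switch to JAX", "type": "belief", "confidence": 0.3},
]
grouped = group_beliefs(sample)
print(grouped)  # → {'fact': ['Prefers dark mode', 'Uses PyTorch']}
```

In practice you would pass `mg.get_beliefs(user_id="alice")` output in place of `sample`.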

Integrations

Works with any AI framework. See examples/ for runnable code.

Framework  | What's included                    | Status
OpenAI     | Function calling agent with memory | ✅ Tested
LangChain  | Memory, Retriever, Toolkit         | ✅ Tested
CrewAI     | Search and Remember tools          | ✅ Tested
LlamaIndex | Retriever and ToolSpec             |

Examples

All examples tested against the production API (api.memgraph.ai):

Example        | Description
quick_start.py | Store, search, update — takes 2 minutes
agent.py       | Interactive chat agent with OpenAI + memory
sdk_demo.py    | Core SDK operations in 30 lines

Contributing

Contributions welcome. See CONTRIBUTING.md.

Security

Report vulnerabilities to security@memgraph.ai. See SECURITY.md.

License

MIT — see LICENSE.

Project details


Download files


Source Distribution

memgraph_sdk-0.7.1.tar.gz (31.1 kB)

Uploaded Source

Built Distribution


memgraph_sdk-0.7.1-py3-none-any.whl (38.1 kB)

Uploaded Python 3

File details

Details for the file memgraph_sdk-0.7.1.tar.gz.

File metadata

  • Download URL: memgraph_sdk-0.7.1.tar.gz
  • Size: 31.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for memgraph_sdk-0.7.1.tar.gz
Algorithm Hash digest
SHA256 90fa042b901aca684494e83999afdfe141cc14e6be220da234af96b6f90e416a
MD5 a8be2db750ffa35311427c6efc4c2341
BLAKE2b-256 ee173bbdf7e218a174815ed1971839536e0117294946e499b34a2579eb630c58


Provenance

The following attestation bundles were made for memgraph_sdk-0.7.1.tar.gz:

Publisher: publish.yml on shubhamdev0/memgraph-sdk

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file memgraph_sdk-0.7.1-py3-none-any.whl.

File metadata

  • Download URL: memgraph_sdk-0.7.1-py3-none-any.whl
  • Size: 38.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for memgraph_sdk-0.7.1-py3-none-any.whl
Algorithm Hash digest
SHA256 9a6fafd9a8e5a39bd39a2d3236d5a5c39fe06d5f03078c1158c68261be1ec42f
MD5 61520eb42bedf9e72e8ab939f6a643c4
BLAKE2b-256 69d23422ac0f17ab52501f001f3790668eb1584b0419d4061f881708bd633ef6


Provenance

The following attestation bundles were made for memgraph_sdk-0.7.1-py3-none-any.whl:

Publisher: publish.yml on shubhamdev0/memgraph-sdk

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
