
Quantum-optimized knowledge graph memory for AI agents. Relationship-aware subgraph selection via QAOA.


Quantum Memory Graph ⚛️🧠

Relationship-aware memory for AI agents. Knowledge graphs + quantum-optimized subgraph selection.

Every memory system treats memories as independent documents: search, rank, stuff into context. But memories aren't independent. They have relationships. "The team chose React" becomes 10x more useful paired with "because of ecosystem maturity" and "FastAPI handles the backend."

Quantum Memory Graph maps these relationships, then uses QAOA to find the optimal combination of memories: not just the most individually relevant ones, but the best-connected subgraph that gives your agent maximum context.

Benchmark: MemCombine

We created MemCombine to test what no existing benchmark measures โ€” memory combination quality.

Method           Coverage   Evidence Recall   F1      Perfect
Embedding Top-K  69.9%      65.6%             68.1%   1/5
Graph + QAOA     96.7%      91.0%             92.6%   4/5
Advantage        +26.8%     +25.4%            +24.5%

When the task is "find memories that work together," graph-aware quantum selection crushes pure similarity search.
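For reference, the Evidence Recall, F1, and Perfect columns follow standard set-overlap definitions over selected vs. gold evidence memories. Here is a hedged sketch of those formulas; the benchmark's exact definitions may differ, and Coverage is omitted because its definition is benchmark-specific:

```python
def combination_metrics(selected, gold):
    """Evidence recall, F1, and exact-match ("perfect") for one task.

    selected, gold: sets of memory indices (chosen vs. gold evidence).
    """
    hits = selected & gold
    recall = len(hits) / len(gold) if gold else 0.0
    precision = len(hits) / len(selected) if selected else 0.0
    f1 = 2 * precision * recall / (precision + recall) if hits else 0.0
    return {"evidence_recall": recall, "f1": f1, "perfect": selected == gold}

# Selected 4 memories, 3 of which are in the 4-memory gold evidence set:
print(combination_metrics({0, 1, 2, 4}, {0, 1, 2, 3}))
```

A "Perfect" run under these definitions means the selected set equals the gold evidence set exactly: all evidence, no noise.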

Install

pip install quantum-memory-graph

Quick Start

from quantum_memory_graph import store, recall

# Store memories - automatically builds knowledge graph
store("Project Alpha uses React frontend with TypeScript.")
store("Project Alpha backend is FastAPI with PostgreSQL.")
store("FastAPI connects to PostgreSQL via SQLAlchemy ORM.")
store("React components use Material UI for styling.")
store("Team had pizza for lunch. Pepperoni was great.")

# Recall - graph traversal + QAOA finds the optimal combination
result = recall("What is Project Alpha's full tech stack?", K=4)

for memory in result["memories"]:
    print(f"  {memory['text']}")
    print(f"    Connected to {len(memory['connections'])} other selected memories")

Output: Returns React, FastAPI, PostgreSQL, and SQLAlchemy memories โ€” connected, complete, no noise. The pizza memory is excluded because it has no graph connections to the tech stack cluster.

How It Works

Query: "What's the tech stack?"
        │
        ▼
┌──────────────────────┐
│  1. Graph Search     │  Embedding similarity + multi-hop traversal
│     Find neighbors   │  Discovers memories connected to relevant ones
└─────────┬────────────┘
          │ 14 candidates
          ▼
┌──────────────────────┐
│  2. Subgraph Data    │  Extract adjacency matrix + relevance scores
│     Build problem    │  Encode relationships as optimization weights
└─────────┬────────────┘
          │ NP-hard selection
          ▼
┌──────────────────────┐
│  3. QAOA Optimize    │  Quantum approximate optimization
│     Find best K      │  Maximizes: relevance + connectivity + coverage
└─────────┬────────────┘
          │ K memories
          ▼
┌──────────────────────┐
│  4. Return with      │  Each memory includes its connections
│     relationships    │  to other selected memories
└──────────────────────┘
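The four-stage flow above can be sketched classically, with brute-force enumeration standing in for the QAOA step. Everything in this snippet (memory texts, scores, edge weights, the score function) is made up for illustration and is not the package's API:

```python
import itertools

# Toy version of the pipeline: stage-1 output (relevance) and stage-2
# output (edges) are hard-coded; brute force replaces QAOA in stage 3.
memories = ["React frontend", "FastAPI backend", "PostgreSQL db", "pizza lunch"]
relevance = [0.9, 0.8, 0.7, 0.3]        # per-memory similarity to the query
edges = {(0, 1): 0.6, (1, 2): 0.8}      # adjacency weights between memories

def score(subset, alpha=0.5, beta=0.5):
    """Stage-3 objective: weighted relevance plus intra-subgraph connectivity."""
    rel = sum(relevance[i] for i in subset)
    conn = sum(w for (i, j), w in edges.items() if i in subset and j in subset)
    return alpha * rel + beta * conn

K = 3
best = max(itertools.combinations(range(len(memories)), K), key=score)
print([memories[i] for i in best])  # the connected tech-stack cluster wins
```

Note that the disconnected "pizza lunch" memory loses despite a nonzero relevance score, which is exactly the behavior described in the Quick Start output.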

Why Quantum?

Optimal subgraph selection is NP-hard: given N candidate memories, finding the K that jointly maximize relevance, connectivity, AND coverage takes exponential time to solve exactly with classical methods. QAOA produces approximate solutions in polynomial time that can beat greedy heuristics, which makes this combinatorial selection step a natural fit for quantum optimization.
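Concretely, this kind of selection objective can be written as a QUBO, the problem form QAOA samples low-energy bitstrings from. The encoding below is an assumed sketch, not the package's actual Hamiltonian: relevance and connectivity are rewarded, and a quadratic penalty enforces choosing exactly K memories. For tiny N the bitstrings can be enumerated exactly:

```python
from itertools import product

import numpy as np

# Assumed QUBO-style encoding of the memory-selection objective.
N, K = 4, 2
r = np.array([0.9, 0.8, 0.7, 0.1])   # relevance scores
A = np.zeros((N, N))                 # symmetric adjacency (edge weights)
A[0, 1] = A[1, 0] = 0.6
alpha, beta, lam = 1.0, 1.0, 2.0     # lam penalizes selections of size != K

def energy(bits):
    """Lower energy = better selection; QAOA would sample these bitstrings."""
    x = np.array(bits)
    value = alpha * (r @ x) + beta * (x @ A @ x) / 2
    return -value + lam * (x.sum() - K) ** 2

best = min(product([0, 1], repeat=N), key=energy)
print(best)  # (1, 1, 0, 0): the connected, high-relevance pair
```

In this toy landscape the connected pair beats any pair of higher-relevance but disconnected memories, which is the effect the QAOA layer is optimizing for.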

Architecture

Three Layers

  1. Knowledge Graph (graph.py) - Memories are nodes. Relationships are weighted edges based on:

    • Semantic similarity (embedding cosine distance)
    • Entity co-occurrence (shared people, projects, concepts)
    • Temporal proximity (memories close in time)
    • Source proximity (same conversation/document)
  2. Subgraph Optimizer (subgraph_optimizer.py) - QAOA circuit that maximizes:

    • α × relevance (individual memory scores)
    • β × connectivity (edge weights within selected subgraph)
    • γ × coverage (topic diversity across selection)
  3. Pipeline (pipeline.py) - Unified store() and recall() interface.
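As a rough illustration of how the four edge signals in the knowledge-graph layer could blend into a single weight. The mixing weights, entity saturation point, and time constant here are assumptions for the sketch, not the package's defaults:

```python
import math

def edge_weight(sim, shared_entities, dt_seconds, same_source,
                w=(0.5, 0.25, 0.15, 0.1), tau=3600.0):
    """Blend the four relationship signals into one weight in [0, 1]."""
    entity = min(shared_entities / 3.0, 1.0)   # saturates at 3 shared entities
    temporal = math.exp(-dt_seconds / tau)     # decays over roughly an hour
    source = 1.0 if same_source else 0.0
    return w[0] * sim + w[1] * entity + w[2] * temporal + w[3] * source

# Same conversation, 5 minutes apart, one shared entity, cosine sim 0.72:
print(round(edge_weight(0.72, 1, 300, True), 3))
```

A graph layer would then typically create an edge only when such a weight (or the raw similarity) clears a threshold, which is the knob exposed as similarity_threshold under Custom Graph below.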

Optional: MemPalace Integration

Use MemPalace (MIT, by @bensig) as the storage/retrieval backend for 96.6% base retrieval quality:

from quantum_memory_graph.mempalace_bridge import store_memory, recall_memories

# MemPalace stores verbatim → ChromaDB retrieves candidates → QAOA selects optimal subgraph
result = recall_memories("What happened in the meeting?", K=5, use_qaoa=True)

API Server

pip install quantum-memory-graph[api]
python -m quantum_memory_graph.api

Endpoints:

  • POST /store - Store a memory
  • POST /recall - Graph + QAOA recall
  • POST /store-batch - Batch store
  • GET /stats - Graph statistics
  • GET / - Health check

Advanced Usage

Custom Graph

from quantum_memory_graph import MemoryGraph, recall
from quantum_memory_graph.pipeline import set_graph

# Tune similarity threshold for edge creation
graph = MemoryGraph(similarity_threshold=0.25)
set_graph(graph)

# Store and recall as normal

Tune QAOA Parameters

result = recall(
    "query",
    K=5,
    alpha=0.4,          # Relevance weight
    beta_conn=0.35,     # Connectivity weight
    gamma_cov=0.25,     # Coverage/diversity weight
    hops=3,             # Graph traversal depth
    top_seeds=7,        # Initial seed nodes
    max_candidates=14,  # Max qubits for QAOA
)

Run MemCombine Benchmark

from benchmarks.memcombine import run_benchmark

def my_recall(memories, query, K):
    # Your recall implementation
    return selected_indices  # List[int]

results = run_benchmark(my_recall, K=5)
print(f"Coverage: {results['avg_coverage']*100:.1f}%")
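For comparison, here is the shape of a minimal classical baseline you could plug into run_benchmark. It scores by naive word overlap instead of real embeddings so the sketch stays dependency-free; an actual Embedding Top-K baseline would score with sentence-transformers:

```python
def topk_recall(memories, query, K):
    """Rank memories by word overlap with the query; return top-K indices."""
    q = set(query.lower().split())
    scored = [(len(q & set(m.lower().split())), i) for i, m in enumerate(memories)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:K]]

mems = ["Alpha uses React", "Backend is FastAPI", "Pizza for lunch"]
print(topk_recall(mems, "What backend does Alpha use", 2))
```

run_benchmark would call a function with this signature on each task's memory list; swapping in real embedding scores recovers the Embedding Top-K row of the table above.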

IBM Quantum Hardware

For production workloads, run QAOA on real quantum hardware:

pip install quantum-memory-graph[ibm]
export IBM_QUANTUM_TOKEN=your_token

Validated on ibm_fez and ibm_kingston backends.

Requirements

  • Python ≥ 3.9
  • sentence-transformers
  • networkx
  • qiskit + qiskit-aer
  • numpy

License

MIT License. Copyright 2026 Coinkong (Chef's Attraction)

Built with MemPalace by @bensig (MIT License). See THIRD-PARTY-LICENSES.
