Personal memory system using Qdrant + Kuzu + Google Gemini

MEMG - Memory Management System

A lightweight, local-first memory system for developers and applications. Built with Qdrant vector database and Kuzu graph database for efficient memory storage and retrieval.

Project Status

Current Phase: Foundation complete (planning and architecture)
Next Phase: Phase 1 - Minimum Viable Memory (implementation)

  • Enterprise system archived
  • Architecture documented
  • Smart prompts implemented for memory extraction
  • Core database interfaces implemented (Qdrant + Kuzu)
  • Environment configuration centralized
  • Memory processing pipeline in development

What This Will Be

A personal memory system that:

  • Stores coding knowledge without cloud dependencies
  • Connects related memories automatically
  • Provides fast, relevant search with local embeddings
  • Integrates with development workflow through APIs and CLI
  • Respects privacy with local-first architecture

Architecture Overview

Target Stack (Local-First)

  • Storage: SQLite → SQLite+vectors → SQLite+vectors+Kuzu
  • Embeddings: FastEmbed (384-dim, 200MB footprint)
  • API: FastAPI + FastMCP servers
  • Deployment: Single Docker container
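As a rough illustration of the embeddings layer, the sketch below uses FastEmbed's TextEmbedding class with a 384-dimensional model; the model name is an assumption, not necessarily what the project pins:

# Minimal FastEmbed sketch; assumes the fastembed package is installed.
# The model name is illustrative, not the project's pinned choice.
from fastembed import TextEmbedding

# Downloads a small ONNX model on first use, then runs fully locally.
model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")

texts = ["Use Kuzu for graph relationships between memories."]
vectors = list(model.embed(texts))  # embed() yields numpy arrays

print(len(vectors[0]))  # 384 dimensions for this model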

Current Assets

  • Smart Prompts: Context-aware memory extraction (in src/memory_system/prompts/)
  • Architecture: Complete technical specifications
  • Data Organization: Structured folders for memories and conversations

Development Roadmap

  • Phase 1 (2-3 weeks): Basic local memory with SQLite + text search
  • Phase 2 (2-3 weeks): Add semantic search with FastEmbed
  • Phase 3 (2-3 weeks): Graph relationships with Kuzu
  • Phase 4 (1-2 weeks): Developer integration and polish

See DEVELOPMENT_ROADMAP.md for detailed implementation plan.
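Phase 1 amounts to plain-text search over SQLite. A minimal sketch, assuming your Python's bundled SQLite ships with the FTS5 extension (most builds do) and using an illustrative schema rather than the project's actual one:

# Phase 1 sketch: plain-text memory search with SQLite FTS5.
import sqlite3

conn = sqlite3.connect("memories.db")
conn.execute("CREATE VIRTUAL TABLE IF NOT EXISTS memories USING fts5(text)")
conn.execute(
    "INSERT INTO memories(text) VALUES (?)",
    ("Qdrant runs in embedded local mode when given a filesystem path.",),
)
conn.commit()

# Ranked full-text search; bm25() is built into FTS5.
rows = conn.execute(
    "SELECT text FROM memories WHERE memories MATCH ? ORDER BY bm25(memories)",
    ("qdrant",),
).fetchall()
print(rows)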

Use Cases

  1. Smart Development Database ⭐⭐⭐⭐⭐ - Perfect fit
  2. AI Coder Documentation ⭐⭐⭐⭐ - Leverages technical prompts
  3. Personal Memory System ⭐⭐⭐ - Original vision
  4. Note Taking ⭐⭐ - Underutilizes architecture
  5. Todo Lists ⭐ - Wrong tool for the job

Current Files

├── PERSONAL_MEMORY_SYSTEM.md # Vision and architecture
├── TECHNICAL_SPEC.md # Detailed technical specs
├── CURRENT_STATUS.md # Implementation assessment
├── DEVELOPMENT_ROADMAP.md # Phase-by-phase plan
├── src/memory_system/prompts/ # Smart memory extraction prompts
├── legacy_memory_enterprise_system.zip # Archived working system
└── [data folders] # Ready for implementation

Getting Started

Quick Start

Clone and install, then start with the provided script:

# Clone and setup
git clone https://github.com/genovo-ai/memg.git
cd memg
python3 -m venv venv
source venv/bin/activate
pip install -e ".[dev]"

# Start the system (recommended)
./start_memory_server.sh

Manual startup:

export KUZU_DB_PATH="$HOME/.local/share/memory_system/kuzu/memory_db"
export QDRANT_PATH="$HOME/.local/share/memory_system/qdrant"
export GOOGLE_API_KEY="your-api-key" # Optional
python -m src.memory_system.mcp_server

Verify system:

curl http://localhost:8787/

Configuration (Alternative)

Create a .env file with the configuration (GOOGLE_API_KEY remains optional):

GOOGLE_API_KEY=your_api_key_here
KUZU_DB_PATH=$HOME/.local/share/memory_system/kuzu/memory_db
QDRANT_PATH=$HOME/.local/share/memory_system/qdrant
MEMORY_SYSTEM_MCP_PORT=8787
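A minimal sketch of reading that file at startup, assuming python-dotenv (the project may load configuration differently); note that the $HOME references in the values need explicit expansion:

# Sketch: load .env configuration; assumes the python-dotenv package.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

# os.path.expandvars resolves the $HOME references used above.
kuzu_path = os.path.expandvars(os.getenv("KUZU_DB_PATH", ""))
qdrant_path = os.path.expandvars(os.getenv("QDRANT_PATH", ""))
mcp_port = int(os.getenv("MEMORY_SYSTEM_MCP_PORT", "8787"))
google_key = os.getenv("GOOGLE_API_KEY")  # optional

print(kuzu_path, qdrant_path, mcp_port)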

Interface Status

  • QdrantInterface: Cloud-ready with .env configuration
  • KuzuInterface: Single database path from .env
  • All tests passing: Core database operations verified
  • No hardcoded values: Configuration centralized
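The wrapper classes themselves aren't documented here, but beneath them both stores run embedded. A hedged sketch using qdrant-client's local mode and Kuzu directly; the collection name, table schema, and paths are assumptions:

# Sketch of the two embedded stores beneath QdrantInterface / KuzuInterface.
# Collection name, table schema, and paths are illustrative only.
import os

import kuzu
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

qdrant = QdrantClient(path=os.path.expandvars("$HOME/.local/share/memory_system/qdrant"))
if not qdrant.collection_exists("memories"):  # requires a recent qdrant-client
    qdrant.create_collection(
        collection_name="memories",
        vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    )

db = kuzu.Database(os.path.expandvars("$HOME/.local/share/memory_system/kuzu/memory_db"))
conn = kuzu.Connection(db)
conn.execute("CREATE NODE TABLE IF NOT EXISTS Memory(id STRING, text STRING, PRIMARY KEY(id))")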

Next Steps:

  1. Add embeddings service (Google AI or FastEmbed)
  2. Implement memory processing pipeline
  3. Build FastAPI endpoints
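For step 3, an endpoint could be as small as the sketch below; the route, request model, and search_memories() helper are hypothetical placeholders, not the project's actual API:

# Hypothetical FastAPI sketch; the route name, request model, and
# search_memories() helper are placeholders, not the real memg API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="memg")

class SearchRequest(BaseModel):
    query: str
    limit: int = 5

def search_memories(query: str, limit: int) -> list[dict]:
    # Placeholder: embed the query, search Qdrant, expand via Kuzu.
    return []

@app.post("/search")
def search(req: SearchRequest) -> list[dict]:
    return search_memories(req.query, req.limit)

Served with uvicorn (e.g. uvicorn app:app --port 8787), a POST to /search would return the ranked hits.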

Why This Approach

The archived enterprise system (Memory + Neo4j + cloud services) works but is overkill for personal use:

  • 4GB+ RAM requirements
  • Cloud dependencies and costs
  • Complex deployment
  • Poor response quality (JSON dumps)

This rebuild prioritizes:

  • Local-first: Your data stays yours
  • Lightweight: <500MB footprint
  • Fast: Sub-second responses
  • Clean: Useful answers, not JSON vomit
  • Simple: Single container deployment

From planning to daily-use tool in 8-10 weeks.

