Persistent Memory Management for Large Language Models

MemoryLLM

The Persistent Memory Problem for Large Language Models

PyPI version | Python 3.8+ | License: MIT

THIS PACKAGE IS A PLACEHOLDER FOR A WORK IN PROGRESS. DO NOT RELY ON IT FOR NOW.

Overview

MemoryLLM is a Python library designed to solve one of the most significant limitations of Large Language Models: the lack of persistent memory across conversations and over time. While LLMs excel at understanding and generating text within a single conversation, they typically lose all context once the session ends, forcing users to start from scratch each time.

The Problem

Large Language Models face several memory-related challenges:

  • Session Isolation: Each new conversation starts with zero context
  • Context Window Limitations: Long conversations hit token limits, losing early context
  • No Learning Persistence: Insights and preferences from previous interactions are lost
  • Inefficient Repetition: Users must re-explain context, preferences, and background information
  • Lack of Continuity: No ability to build upon previous conversations or maintain ongoing projects

The Solution

MemoryLLM provides a comprehensive memory layer for LLM applications, enabling:

🧠 Persistent Context Storage

  • Store and retrieve conversation history across sessions
  • Maintain user preferences, insights, and learned patterns
  • Preserve project context and ongoing work

🔍 Intelligent Memory Retrieval

  • Semantic search through historical conversations
  • Context-aware memory selection based on current topics
  • Automatic relevance scoring and filtering
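
The retrieval flow above can be illustrated with plain cosine similarity over embedding vectors. This is a minimal standalone sketch, not MemoryLLM's actual API; the function names, the `min_score` threshold, and the toy hand-made embeddings are all hypothetical:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query_vec, memories, max_results=5, min_score=0.3):
    # Score every stored memory against the query embedding,
    # drop anything below the relevance threshold, return top matches.
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in memories]
    scored = [(s, t) for s, t in scored if s >= min_score]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:max_results]

# Toy memories with hand-made 3-d "embeddings" for illustration only.
memories = [
    ("auth uses JWT tokens", [0.9, 0.1, 0.0]),
    ("frontend uses React", [0.0, 0.2, 0.9]),
]
results = retrieve([1.0, 0.0, 0.0], memories)
```

In a real system the embeddings would come from a sentence-embedding model and the search would run inside a vector database rather than a Python loop, but the score-filter-rank shape stays the same.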

🔗 Seamless Integration

  • Framework-agnostic design works with any LLM provider
  • Simple API that integrates with existing applications
  • Minimal code changes required for existing projects

📊 Memory Management

  • Configurable memory retention policies
  • Automatic memory compression and summarization
  • Privacy controls and data lifecycle management
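
A retention policy like the one described can be sketched as a filter that keeps a memory when it is either recent enough or important enough. Everything here (the `Memory` dataclass, field names, thresholds) is an illustrative assumption, not the package's real data model:

```python
import time
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    created_at: float          # seconds since the epoch
    importance: float = 0.5    # 0.0 (disposable) .. 1.0 (always keep)

def apply_retention(memories, max_age_seconds, keep_above=0.8, now=None):
    # Keep a memory if it is important enough OR recent enough;
    # everything else is eligible for deletion or summarization.
    now = time.time() if now is None else now
    return [
        m for m in memories
        if m.importance >= keep_above or (now - m.created_at) <= max_age_seconds
    ]

stale = Memory("old small talk", created_at=0.0, importance=0.1)
pinned = Memory("key architecture decision", created_at=0.0, importance=0.9)
fresh = Memory("today's context", created_at=1500.0, importance=0.1)
kept = apply_retention([stale, pinned, fresh], max_age_seconds=1000, now=2000.0)
```

Memories that fail both tests need not be deleted outright; compressing them into a summary memory, as the bullet above suggests, preserves the gist at a fraction of the storage cost.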

Key Features

  • Multi-Modal Memory: Store text, code, documents, and structured data
  • Vector-Based Search: Semantic similarity search for contextual retrieval
  • Memory Hierarchies: Organize memories by importance, recency, and relevance
  • Privacy-First: Local storage options with encryption support
  • Scalable Architecture: From simple file storage to enterprise databases
  • Memory Analytics: Insights into memory usage and effectiveness

Quick Start

from memoryllm import MemoryManager, ConversationMemory

# Initialize memory manager
memory = MemoryManager(storage_path="./memories")

# Store conversation context
memory.store_conversation(
    conversation_id="project_alpha",
    messages=[...],
    metadata={"project": "alpha", "user": "developer"}
)

# Retrieve relevant context for new conversation
relevant_context = memory.retrieve_context(
    query="How should I implement the authentication system?",
    conversation_id="project_alpha",
    max_results=5
)

# Continue conversation with persistent memory
llm_response = your_llm.chat(
    messages=relevant_context + new_messages
)

Use Cases

🤖 AI Assistants

  • Maintain user preferences and communication styles
  • Remember ongoing projects and their status
  • Build upon previous problem-solving sessions

💻 Code Development

  • Preserve codebase context and architectural decisions
  • Remember debugging sessions and solutions
  • Maintain coding standards and patterns

📚 Knowledge Management

  • Store and retrieve research findings
  • Build cumulative understanding of complex topics
  • Connect related concepts across conversations

🎯 Personalized Applications

  • Learn user behavior and preferences
  • Adapt responses based on historical interactions
  • Provide consistent experience across sessions

Architecture

MemoryLLM is built with modularity and flexibility in mind:

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Application   │    │   MemoryLLM     │    │    Storage      │
│                 │◄──►│                 │◄──►│                 │
│  Your LLM App   │    │ Memory Manager  │    │ Vector DB/Files │
└─────────────────┘    └─────────────────┘    └─────────────────┘
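
The middle box of the diagram, a manager that mediates between the application and a pluggable backend, can be sketched in a few lines. The class and method names below mirror the Quick Start but are an illustrative guess at the design, not the published implementation:

```python
class InMemoryStore:
    # Simplest possible storage backend: a dict keyed by conversation id.
    def __init__(self):
        self._data = {}

    def save(self, key, messages):
        self._data.setdefault(key, []).extend(messages)

    def load(self, key):
        return self._data.get(key, [])

class MemoryManager:
    # Facade between the application and any backend exposing save/load.
    def __init__(self, store):
        self.store = store

    def store_conversation(self, conversation_id, messages):
        self.store.save(conversation_id, messages)

    def retrieve_context(self, conversation_id):
        return self.store.load(conversation_id)

manager = MemoryManager(InMemoryStore())
manager.store_conversation("project_alpha", [{"role": "user", "content": "hi"}])
context = manager.retrieve_context("project_alpha")
```

Because the manager only depends on a `save`/`load` duck type, swapping the dict for SQLite or a vector database changes the backend class, not the application code, which is the point of the three-box layering.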

Storage Backends

  • Local Files: Simple JSON/pickle storage for development
  • SQLite: Structured storage with SQL queries
  • Vector Databases: Chroma, Pinecone, Weaviate support
  • Cloud Storage: S3, GCS, Azure Blob integration

Memory Types

  • Episodic Memory: Specific conversation episodes
  • Semantic Memory: Extracted knowledge and concepts
  • Procedural Memory: Learned processes and workflows
  • Meta Memory: Memory about memory usage patterns
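
One natural way to model this taxonomy is an enum tag on each stored record. The names below follow the list above but are an illustrative sketch, not the package's actual types:

```python
from dataclasses import dataclass
from enum import Enum

class MemoryType(Enum):
    EPISODIC = "episodic"      # specific conversation episodes
    SEMANTIC = "semantic"      # extracted knowledge and concepts
    PROCEDURAL = "procedural"  # learned processes and workflows
    META = "meta"              # memory about memory usage patterns

@dataclass
class MemoryRecord:
    kind: MemoryType
    content: str

    def is_distilled(self):
        # Semantic memories hold extracted facts rather than raw transcripts.
        return self.kind is MemoryType.SEMANTIC

record = MemoryRecord(MemoryType.SEMANTIC, "The user prefers Python type hints.")
```

Tagging records this way lets retention and retrieval treat each tier differently, e.g. compressing episodic transcripts aggressively while keeping semantic facts indefinitely.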

License

This project is licensed under the MIT License - see the LICENSE file for details.

Author

Laurent-Philippe Albou
June 5th, 2025


MemoryLLM: Because every conversation should build upon the last one.

Download files

Download the file for your platform.

Source Distribution

memoryllm-0.1.0.tar.gz (5.8 kB)

Built Distribution

memoryllm-0.1.0-py3-none-any.whl (5.2 kB)

File details

Details for the file memoryllm-0.1.0.tar.gz.

File metadata

  • Download URL: memoryllm-0.1.0.tar.gz
  • Upload date:
  • Size: 5.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes

Hashes for memoryllm-0.1.0.tar.gz:

  • SHA256: 99f8c6f6b79fec71c811f728a4717e78df214449e5258909d38f3e7866cb6901
  • MD5: be732979bf8963ad01bbcf3e1853ea5a
  • BLAKE2b-256: c01c1c325a5ad20a686bf87b8f5f8c5300cbc6cb8c2e9ee654746929387702a9

File details

Details for the file memoryllm-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: memoryllm-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 5.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes

Hashes for memoryllm-0.1.0-py3-none-any.whl:

  • SHA256: cade9807518a81ea24826f39661bd696c1c30bc41ac891836cb9a880c26a02e1
  • MD5: d371a3a9de848048a94579a0a6ca5525
  • BLAKE2b-256: cc34364591262e3018e47d6c97a61cbfbcc74f88bcd32e204b44ae7c44c78e8e
