Persistent Memory Management for Large Language Models

MemoryLLM

The Persistent Memory Problem for Large Language Models

Python 3.8+ | License: MIT

THIS PACKAGE IS A PLACEHOLDER FOR A WORK IN PROGRESS. DO NOT PAY TOO MUCH ATTENTION TO IT FOR NOW.

Overview

MemoryLLM is a Python library designed to solve one of the most significant limitations of Large Language Models: the lack of persistent memory across conversations and over time. While LLMs excel at understanding and generating text within a single conversation, they typically lose all context once the session ends, forcing users to start from scratch each time.

The Problem

Large Language Models face several memory-related challenges:

  • Session Isolation: Each new conversation starts with zero context
  • Context Window Limitations: Long conversations hit token limits, losing early context
  • No Learning Persistence: Insights and preferences from previous interactions are lost
  • Inefficient Repetition: Users must re-explain context, preferences, and background information
  • Lack of Continuity: No ability to build upon previous conversations or maintain ongoing projects

The Solution

MemoryLLM provides a comprehensive memory layer for LLM applications, enabling:

🧠 Persistent Context Storage

  • Store and retrieve conversation history across sessions
  • Maintain user preferences, insights, and learned patterns
  • Preserve project context and ongoing work

🔍 Intelligent Memory Retrieval

  • Semantic search through historical conversations
  • Context-aware memory selection based on current topics
  • Automatic relevance scoring and filtering
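The retrieval flow described above — embed the query, score each stored memory, filter by relevance, return the top matches — can be sketched in plain Python. This is an illustrative sketch, not the package's actual API: the `cosine` and `retrieve` helpers, the toy 3-dimensional "embeddings", and the `min_score` threshold are all hypothetical stand-ins for a real embedding model and vector index.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, memories, min_score=0.5, max_results=3):
    # Score every stored memory against the query, drop anything
    # below the relevance threshold, and return best matches first.
    scored = [(cosine(query_vec, vec), text) for text, vec in memories]
    relevant = [(s, t) for s, t in scored if s >= min_score]
    relevant.sort(reverse=True)
    return [t for _, t in relevant[:max_results]]

# Toy example: tiny hand-written vectors stand in for real embeddings.
memories = [
    ("user prefers Python", [0.9, 0.1, 0.0]),
    ("project uses JWT auth", [0.1, 0.9, 0.2]),
    ("meeting notes from May", [0.0, 0.2, 0.9]),
]
print(retrieve([0.2, 0.95, 0.1], memories))  # the auth memory ranks first
```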

🔗 Seamless Integration

  • Framework-agnostic design works with any LLM provider
  • Simple API that integrates with existing applications
  • Minimal code changes required for existing projects

📊 Memory Management

  • Configurable memory retention policies
  • Automatic memory compression and summarization
  • Privacy controls and data lifecycle management
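A retention policy like the one listed above might combine an age cutoff with a size cap. The sketch below is a hypothetical illustration of that idea, not MemoryLLM's implementation; the `apply_retention` function and the record shape (`text`, `created_at`) are assumptions for the example.

```python
import time

def apply_retention(memories, max_age_days=30, max_items=100, now=None):
    # Drop memories older than the cutoff, then keep only the
    # `max_items` most recent of what remains.
    now = now if now is not None else time.time()
    cutoff = now - max_age_days * 86400
    fresh = [m for m in memories if m["created_at"] >= cutoff]
    fresh.sort(key=lambda m: m["created_at"], reverse=True)
    return fresh[:max_items]

# Fixed "now" so the example is deterministic.
now = 1_000_000_000
memories = [
    {"text": "old note", "created_at": now - 40 * 86400},
    {"text": "recent note", "created_at": now - 2 * 86400},
]
kept = apply_retention(memories, max_age_days=30, now=now)
```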

Key Features

  • Multi-Modal Memory: Store text, code, documents, and structured data
  • Vector-Based Search: Semantic similarity search for contextual retrieval
  • Memory Hierarchies: Organize memories by importance, recency, and relevance
  • Privacy-First: Local storage options with encryption support
  • Scalable Architecture: From simple file storage to enterprise databases
  • Memory Analytics: Insights into memory usage and effectiveness
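The "Memory Hierarchies" feature — ranking by importance, recency, and relevance — suggests a combined score. Here is one plausible way to weight the three signals, using exponential recency decay; the `memory_score` function, its weights, and the half-life value are illustrative assumptions, not the library's actual scoring rule.

```python
import math

def memory_score(importance, age_days, relevance,
                 half_life_days=7.0, weights=(0.3, 0.3, 0.4)):
    # Recency decays exponentially: a memory loses half its recency
    # value every `half_life_days`. The final score is a weighted sum.
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    w_imp, w_rec, w_rel = weights
    return w_imp * importance + w_rec * recency + w_rel * relevance

# A fresh, highly relevant memory outranks an old, important one.
fresh = memory_score(importance=0.5, age_days=1, relevance=0.9)
stale = memory_score(importance=0.9, age_days=60, relevance=0.2)
```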

Quick Start

from memoryllm import MemoryManager, ConversationMemory

# Initialize memory manager
memory = MemoryManager(storage_path="./memories")

# Store conversation context
memory.store_conversation(
    conversation_id="project_alpha",
    messages=[...],
    metadata={"project": "alpha", "user": "developer"}
)

# Retrieve relevant context for new conversation
relevant_context = memory.retrieve_context(
    query="How should I implement the authentication system?",
    conversation_id="project_alpha",
    max_results=5
)

# Continue conversation with persistent memory
llm_response = your_llm.chat(
    messages=relevant_context + new_messages
)

Use Cases

🤖 AI Assistants

  • Maintain user preferences and communication styles
  • Remember ongoing projects and their status
  • Build upon previous problem-solving sessions

💻 Code Development

  • Preserve codebase context and architectural decisions
  • Remember debugging sessions and solutions
  • Maintain coding standards and patterns

📚 Knowledge Management

  • Store and retrieve research findings
  • Build cumulative understanding of complex topics
  • Connect related concepts across conversations

🎯 Personalized Applications

  • Learn user behavior and preferences
  • Adapt responses based on historical interactions
  • Provide consistent experience across sessions

Architecture

MemoryLLM is built with modularity and flexibility in mind:

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Application   │    │    MemoryLLM    │    │     Storage     │
│                 │◄──►│                 │◄──►│                 │
│  Your LLM App   │    │ Memory Manager  │    │ Vector DB/Files │
└─────────────────┘    └─────────────────┘    └─────────────────┘

Storage Backends

  • Local Files: Simple JSON/pickle storage for development
  • SQLite: Structured storage with SQL queries
  • Vector Databases: Chroma, Pinecone, Weaviate support
  • Cloud Storage: S3, GCS, Azure Blob integration
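The simplest backend in the list, local file storage, can be sketched as a small save/load class. This is a minimal illustration of the idea, assuming a hypothetical `JSONFileBackend` shape; it is not the package's actual storage interface.

```python
import json
import os
import tempfile

class JSONFileBackend:
    # Minimal local-file backend: all records in one JSON document.
    def __init__(self, path):
        self.path = path

    def save(self, records):
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(records, f)

    def load(self):
        # An empty store is represented by a missing file.
        if not os.path.exists(self.path):
            return []
        with open(self.path, encoding="utf-8") as f:
            return json.load(f)

# Round-trip a couple of memory records through the backend.
path = os.path.join(tempfile.mkdtemp(), "memories.json")
backend = JSONFileBackend(path)
backend.save([{"id": 1, "text": "hello"}, {"id": 2, "text": "world"}])
records = backend.load()
```

A SQLite or vector-database backend would expose the same save/load surface, which is what lets applications swap storage without touching the memory-manager layer.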

Memory Types

  • Episodic Memory: Specific conversation episodes
  • Semantic Memory: Extracted knowledge and concepts
  • Procedural Memory: Learned processes and workflows
  • Meta Memory: Memory about memory usage patterns
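The four memory types above map naturally onto a tagged record. The enum and dataclass below are a hypothetical sketch of how records might be labeled and filtered by type, not the library's real data model.

```python
from dataclasses import dataclass
from enum import Enum

class MemoryType(Enum):
    EPISODIC = "episodic"      # specific conversation episodes
    SEMANTIC = "semantic"      # extracted knowledge and concepts
    PROCEDURAL = "procedural"  # learned processes and workflows
    META = "meta"              # memory about memory usage patterns

@dataclass
class MemoryRecord:
    text: str
    kind: MemoryType

store = [
    MemoryRecord("user asked about JWT on Tuesday", MemoryType.EPISODIC),
    MemoryRecord("JWT tokens expire and must be refreshed", MemoryType.SEMANTIC),
]
# Filtering by type: pull only the extracted knowledge.
semantic = [m.text for m in store if m.kind is MemoryType.SEMANTIC]
```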

License

This project is licensed under the MIT License - see the LICENSE file for details.

Author

Laurent-Philippe Albou
June 5th, 2025


MemoryLLM: Because every conversation should build upon the last one.

Download files

Download the file for your platform.

Source Distribution

memoryllm-0.1.1.tar.gz (5.8 kB)


Built Distribution


memoryllm-0.1.1-py3-none-any.whl (5.2 kB)


File details

Details for the file memoryllm-0.1.1.tar.gz.

File metadata

  • Download URL: memoryllm-0.1.1.tar.gz
  • Upload date:
  • Size: 5.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.7

File hashes

Hashes for memoryllm-0.1.1.tar.gz:

  • SHA256: 36b37bc1604e07cb267a709023895f6708f23a937292bd0e8daf42880758d8bd
  • MD5: 82796bf35411c5af7a989eb9d50cf8db
  • BLAKE2b-256: 626f3cdf1535a7faea3cb542f637f3488162f59e951cc69c68318163c55315c9


File details

Details for the file memoryllm-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: memoryllm-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 5.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.7

File hashes

Hashes for memoryllm-0.1.1-py3-none-any.whl:

  • SHA256: 324d359a6610669c30e5ce35b27faa336d9d455759db264b952cd4dce53ffbd2
  • MD5: 564f9fe88445d185958865174a63305e
  • BLAKE2b-256: 6ce0519ed443e2bb5c98f4cdaaa4a6eed862d1cf9d883eced12f213a3058cd6e

