
MemCore

Hierarchical Memory System for AI Agents


Architecture | Installation | Quick Start | Core Workflow | API Reference


A Python implementation of a three-tier memory architecture inspired by MemGPT, designed to give AI agents human-like memory capabilities.

Architecture

MemCore implements a three-tier hierarchical memory architecture, mirroring human cognitive memory systems:

MemCore Architecture
│
├── Core Memory (Working Memory)
│   ├── Limited capacity (~10 blocks, ~2000 tokens)
│   ├── Always available in context
│   ├── Stores essential user info, preferences
│   └── Priority-based eviction
│
├── Recall Memory (Episodic Memory)
│   ├── Conversation history
│   ├── Chronologically ordered messages
│   ├── Searchable by content
│   └── Export to OpenAI format
│
└── Archival Memory (Semantic Memory)
    ├── Unlimited long-term storage
    ├── Explicit search required
    ├── Metadata support
    └── Pluggable storage backends

Memory Types Comparison

Feature        Core Memory        Recall Memory      Archival Memory
Capacity       Limited            Moderate           Unlimited
Access         Always visible     Search/Recent      Explicit search
Content        Essential context  Conversations      Long-term facts
Human analogy  Working memory     Episodic memory    Semantic memory
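Core Memory's priority-based eviction can be illustrated with a minimal standalone sketch. The policy shown here (when capacity is exceeded, drop the lowest-priority block, oldest first on ties) is an assumption for illustration, not necessarily MemCore's exact behavior, and `TinyCoreMemory` is a hypothetical class, not part of the library:

```python
from dataclasses import dataclass, field
from itertools import count

_seq = count()  # monotonically increasing insertion order

@dataclass(order=True)
class Block:
    # Comparison uses (priority, order); label/content are excluded.
    priority: int
    order: int = field(default_factory=lambda: next(_seq))
    label: str = field(compare=False, default="")
    content: str = field(compare=False, default="")

class TinyCoreMemory:
    """Toy working memory: keeps at most max_blocks, evicting lowest priority first."""
    def __init__(self, max_blocks=10):
        self.max_blocks = max_blocks
        self.blocks = []

    def add(self, label, content, priority=1):
        self.blocks.append(Block(priority=priority, label=label, content=content))
        if len(self.blocks) > self.max_blocks:
            # Evict the lowest-priority block (oldest wins the tie-break).
            self.blocks.remove(min(self.blocks))

core = TinyCoreMemory(max_blocks=2)
core.add("user", "Name: Alice", priority=2)
core.add("mood", "cheerful", priority=1)
core.add("project", "Python CLI", priority=2)  # over capacity -> "mood" is evicted
print([b.label for b in core.blocks])           # ['user', 'project']
```

The point of the sketch is only that higher-priority blocks survive pressure on the limited capacity; the real implementation may also weigh recency or token counts.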

Core Workflow

┌─────────────────────────────────────────────────┐
│             Memory Management Flow              │
├─────────────────────────────────────────────────┤
│                                                 │
│  User Input ──▶ MemoryManager                   │
│                      │                          │
│         ┌────────────┼────────────┐             │
│         ▼            ▼            ▼             │
│   ┌──────────┐ ┌──────────┐ ┌──────────┐        │
│   │   Core   │ │  Recall  │ │ Archival │        │
│   │  Memory  │ │  Memory  │ │  Memory  │        │
│   └──────────┘ └──────────┘ └──────────┘        │
│         │            │            │             │
│         └────────────┴────────────┘             │
│                      │                          │
│                      ▼                          │
│         ┌──────────────────────────┐            │
│         │  get_context_for_prompt  │            │
│         │   (Build LLM Context)    │            │
│         └──────────────────────────┘            │
│                      │                          │
│                      ▼                          │
│                 LLM Response                    │
│                                                 │
└─────────────────────────────────────────────────┘
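The flow above funnels into get_context_for_prompt, which concatenates core memory with the most recent recall messages. A minimal standalone sketch of that assembly step, assuming a simple string layout (MemCore's actual prompt format may differ); `build_context` is a hypothetical helper, not the library API:

```python
def build_context(core_blocks, recall_messages, include_recall=5):
    """Combine always-visible core memory with recent recall messages.
    Archival memory is deliberately absent: it only enters the prompt
    via an explicit search, never automatically."""
    lines = ["=== Core Memory ==="]
    lines += [f"[{label}] {content}" for label, content in core_blocks]
    lines.append("=== Recent Conversation ===")
    lines += [f"{role}: {text}" for role, text in recall_messages[-include_recall:]]
    return "\n".join(lines)

context = build_context(
    core_blocks=[("user", "Name: Alice, Developer")],
    recall_messages=[
        ("user", "Help me write a Python function"),
        ("assistant", "Sure! What should the function do?"),
    ],
)
print(context)
```

Keeping core memory unconditionally in the prompt while capping recall at the last few messages is what keeps the context window bounded regardless of conversation length.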

Memory Flow Example

# 1. Initialize memory
manager = MemoryManager()

# 2. Store essential info in Core Memory
manager.add_core_memory("user", "Name: Alice, Developer", priority=2)
manager.add_core_memory("preferences", "Prefers concise responses", priority=1)

# 3. Track conversation in Recall Memory
manager.user_message("Help me write a Python function")
manager.assistant_message("Sure! What should the function do?")

# 4. Archive important facts for later
manager.archive("Alice is working on a Python project", {"topic": "project"})

# 5. Build context for LLM
context = manager.get_context_for_prompt(include_recall=5)
# This combines:
# - Core memory (always included)
# - Recent 5 messages from recall
# - Does NOT include archival (requires explicit search)

Installation

pip install memcore

For vector search support:

pip install memcore[vector]

Quick Start

from memcore import MemoryManager, MessageType

# Initialize memory manager
manager = MemoryManager()

# Add to core memory (always available)
manager.add_core_memory("user_profile", "Name: Alice, Role: Developer")

# Add messages to recall memory
manager.user_message("What's the weather today?")
manager.assistant_message("I don't have access to real-time weather data.")

# Archive important information
manager.archive("User prefers concise responses", {"source": "preference"})

# Get context for LLM prompt
context = manager.get_context_for_prompt()
print(context)

Core Memory Operations

from memcore import CoreMemory

core = CoreMemory(max_blocks=10)

# Add blocks
block_id = core.add("preferences", "Dark mode enabled", priority=1)

# Find by label
block = core.find_by_label("preferences")

# Update content
core.update(block_id, "Dark mode and large fonts enabled")

# Format for prompt
print(core.to_prompt())

Recall Memory Operations

from memcore import RecallMemory, MessageType

recall = RecallMemory()

# Add messages
recall.add(MessageType.USER, "Hello!")
recall.add(MessageType.ASSISTANT, "Hi there! How can I help?")

# Get recent messages
recent = recall.get_recent(5)

# Search messages
results = recall.search("hello")

# Export to OpenAI format
messages = recall.to_openai_messages()
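to_openai_messages returns the history in the OpenAI chat-completions shape. As a standalone sketch of that conversion, assuming recall entries reduce to (role, content) pairs (the real internal representation may differ):

```python
def to_openai_messages(entries):
    """Map (role, content) pairs to the OpenAI chat format, in
    chronological order: [{"role": ..., "content": ...}, ...]."""
    return [{"role": role, "content": content} for role, content in entries]

msgs = to_openai_messages([
    ("user", "Hello!"),
    ("assistant", "Hi there! How can I help?"),
])
# msgs[0] == {"role": "user", "content": "Hello!"}
```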

Archival Memory Operations

from memcore import ArchivalMemory

archive = ArchivalMemory()

# Store information
entry_id = archive.add("Important fact about user preferences")

# Search (simple text matching by default)
results = archive.search("preferences")

# Count entries
print(f"Archived items: {archive.count()}")

Persistence

# Save to file
manager.save("memory_state.json")

# Load from file
manager = MemoryManager.load("memory_state.json")
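Because the state file is JSON, a round trip can be sketched with the standard library alone. The per-tier schema below is an assumption for illustration; MemCore's actual on-disk layout is not documented in this README:

```python
import json
import os
import tempfile

# Hypothetical on-disk shape: one key per memory tier.
state = {
    "core": [{"label": "user", "content": "Name: Alice", "priority": 2}],
    "recall": [{"role": "user", "content": "Hello!"}],
    "archival": [{"content": "Alice is working on a Python project"}],
}

path = os.path.join(tempfile.gettempdir(), "memory_state.json")
with open(path, "w") as f:
    json.dump(state, f)

with open(path) as f:
    restored = json.load(f)

assert restored == state  # lossless round trip
```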

CLI Usage

# Initialize
memcore init

# Core memory
memcore core add --label "user" --content "Alice"
memcore core list

# Recall memory
memcore recall add --role user --content "Hello"
memcore recall list
memcore recall search --query "hello"

# Archival memory
memcore archive add --content "Important fact"
memcore archive search --query "fact"

# Status
memcore status

API Reference

MemoryManager

Method                                       Description
add_core_memory(label, content, priority)    Add a block to core memory
user_message(content)                        Record a user message in recall memory
assistant_message(content)                   Record an assistant message in recall memory
system_message(content)                      Record a system message in recall memory
archive(content, metadata)                   Store an entry in archival memory
get_context_for_prompt(include_recall)       Build the context string for an LLM prompt
save(filepath)                               Save memory state to a file
load(filepath)                               Load memory state from a file

Custom Storage Backends

from memcore.storage import MemoryStorage
from memcore.core.types import MemoryEntry

class MyCustomStorage(MemoryStorage):
    def save(self, entry: MemoryEntry) -> str:
        # Implement custom save logic
        pass

    def load(self, entry_id: str):
        # Implement custom load logic
        pass

    # ... implement other methods
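A complete, if simplistic, backend can be built on the save/load pair from the skeleton above. The dict-based design here is only a sketch (shown standalone, without the MemoryStorage base class, whose full interface is not listed in this README):

```python
import uuid

class DictStorage:
    """Toy backend: keeps entries in a dict keyed by generated IDs.
    Mirrors the save/load pair from the MemoryStorage skeleton."""

    def __init__(self):
        self._entries = {}

    def save(self, entry) -> str:
        # Generate a unique ID and index the entry under it.
        entry_id = str(uuid.uuid4())
        self._entries[entry_id] = entry
        return entry_id

    def load(self, entry_id):
        # None for unknown IDs, like a cache miss.
        return self._entries.get(entry_id)

storage = DictStorage()
eid = storage.save({"content": "Important fact"})
assert storage.load(eid) == {"content": "Important fact"}
assert storage.load("missing") is None
```

The same shape extends naturally to a real backend: replace the dict with a database table or vector index and keep the ID-in, entry-out contract.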

Academic Reference

This framework implements a memory architecture inspired by MemGPT:

MemGPT: Towards LLMs as Operating Systems

Charles Packer, Vivian Fang, Shishir G. Patil, Kevin Lin, Sarah Wooders, Joseph E. Gonzalez

arXiv 2023

Paper: https://arxiv.org/abs/2310.08560

@article{packer2023memgpt,
  title={MemGPT: Towards LLMs as Operating Systems},
  author={Packer, Charles and Fang, Vivian and Patil, Shishir G and Lin, Kevin and Wooders, Sarah and Gonzalez, Joseph E},
  journal={arXiv preprint arXiv:2310.08560},
  year={2023}
}

License

This project is licensed under the MIT License.

MIT License

Copyright (c) 2024 AI Agent Research Team

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Made with ❤️ by AI Agent Research Team
