# AbstractMemory

**Intelligent memory system for LLM agents with a two-tier architecture.**

AbstractMemory provides efficient, purpose-built memory for different types of LLM agents, from simple task-specific tools to sophisticated autonomous agents with persistent, grounded memory.
## 🎯 Project Goals

AbstractMemory is part of the AbstractLLM ecosystem refactoring, designed to power both simple and complex AI agents:

- **Simple agents** (ReAct, task tools) get lightweight, efficient memory
- **Autonomous agents** get sophisticated temporal memory with user tracking
- **No over-engineering**: memory complexity matches agent purpose
๐๏ธ Architecture Overview
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ AbstractLLM Ecosystem โ
โโโโโโโโโโโโโโโโโโโฌโโโโโโโโโโโโโโโโโโฌโโโโโโโโโโโโโโโโโโโโโโโโโโค
โ AbstractCore โ AbstractMemory โ AbstractAgent โ
โ โ โ โ
โ โข LLM Providers โ โข Simple Memory โ โข ReAct Agents โ
โ โข Sessions โ โข Complex Memoryโ โข Autonomous Agents โ
โ โข Tools โ โข Temporal KG โ โข Multi-user Agents โ
โโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโ
## 🧠 Two-Tier Memory Strategy

### Tier 1: Simple Memory (Task Agents)

Perfect for focused, single-purpose agents:

```python
from abstractmemory import create_memory

# ReAct agent memory
scratchpad = create_memory("scratchpad", max_entries=50)
scratchpad.add_thought("User wants to learn Python")
scratchpad.add_action("search", {"query": "Python tutorials"})
scratchpad.add_observation("Found great tutorials")

# Simple chatbot memory
buffer = create_memory("buffer", max_messages=100)
buffer.add_message("user", "Hello!")
buffer.add_message("assistant", "Hi there!")
```
### Tier 2: Complex Memory (Autonomous Agents)

For sophisticated agents with persistence and learning:

```python
# Autonomous agent with full memory capabilities
memory = create_memory("grounded", working_capacity=10, enable_kg=True)

# Multi-user context
memory.set_current_user("alice", relationship="owner")
memory.add_interaction("I love Python", "Python is excellent!")
memory.learn_about_user("Python developer")

# Get personalized context
context = memory.get_full_context("programming", user_id="alice")
```
## 🔧 Quick Start

### Installation

```bash
pip install abstractmemory

# For real LLM integration tests
pip install abstractmemory[llm]

# For LanceDB storage (optional)
pip install lancedb
```
### Basic Usage

```python
from abstractmemory import create_memory

# 1. Choose memory type based on agent purpose
memory = create_memory("scratchpad")  # Simple task agent
memory = create_memory("buffer")      # Simple chatbot
memory = create_memory("grounded")    # Autonomous agent

# 2. Use memory in your agent
if agent_type == "react":
    memory.add_thought("Planning the solution...")
    memory.add_action("execute", {"command": "analyze"})
    memory.add_observation("Analysis complete")
elif agent_type == "autonomous":
    memory.set_current_user("user123")
    memory.add_interaction(user_input, agent_response)
    context = memory.get_full_context(query)
```
## 🗄️ Persistent Storage Options

AbstractMemory supports sophisticated storage for observable, searchable AI memory.

### Observable Markdown Storage

Perfect for development, debugging, and transparency:

```python
# Human-readable, version-controllable AI memory
memory = create_memory(
    "grounded",
    storage_backend="markdown",
    storage_path="./memory"
)

# Generates an organized structure:
# memory/
# ├── verbatim/alice/2025/09/24/10-30-45_python_int_abc123.md
# ├── experiential/2025/09/24/10-31-02_learning_note_def456.md
# ├── links/2025/09/24/int_abc123_to_note_def456.json
# └── index.json
```
### Powerful Vector Search

High-performance search with AbstractCore embeddings:

```python
from abstractllm import create_llm

# Create a provider with embedding support
provider = create_llm("openai", embedding_model="text-embedding-3-small")

# Vector search storage
memory = create_memory(
    "grounded",
    storage_backend="lancedb",
    storage_uri="./memory.db",
    embedding_provider=provider
)

# Semantic search across stored interactions
results = memory.search_stored_interactions("machine learning concepts")
```
### Dual Storage: Best of Both Worlds

Complete observability with powerful search:

```python
# Dual storage: markdown (observable) + LanceDB (searchable)
memory = create_memory(
    "grounded",
    storage_backend="dual",
    storage_path="./memory",
    storage_uri="./memory.db",
    embedding_provider=provider
)

# Every interaction is stored in both formats:
# - Markdown files for complete transparency
# - Vector database for semantic search
```
## 📚 Documentation

- **Architecture Guide**: complete system design
- **Memory Types**: detailed component guide
- **Storage Systems**: persistent storage with dual backends
- **Usage Patterns**: real-world examples
- **API Reference**: complete API documentation
- **Integration Guide**: AbstractLLM ecosystem integration
- **AbstractCore Embedding Specs**: embedding integration requirements
## 🔬 Key Features

### ✅ Purpose-Built Memory Types

- **ScratchpadMemory**: ReAct thought-action-observation cycles
- **BufferMemory**: simple conversation history
- **GroundedMemory**: multi-dimensional temporal memory

### ✅ State-of-the-Art Research Integration

- **MemGPT/Letta pattern**: self-editing core memory
- **Temporal grounding**: WHO (relational) + WHEN (temporal) context
- **Zep/Graphiti architecture**: bi-temporal knowledge graphs
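The bi-temporal idea is worth making concrete: each knowledge-graph fact carries two timelines, when it held in the world and when the agent learned it. The sketch below is our simplification with illustrative field names, not AbstractMemory's or Zep's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class BiTemporalEdge:
    """Knowledge-graph edge with two timelines: valid time (when the
    fact held in the world) and transaction time (when the agent
    recorded it). Names are illustrative, not the library's API."""
    subject: str
    predicate: str
    obj: str
    valid_from: datetime           # fact became true in the world
    valid_to: Optional[datetime]   # None = still true
    recorded_at: datetime          # when the agent stored the fact

    def holds_at(self, t: datetime) -> bool:
        # Query the world timeline, independent of when we learned it.
        return self.valid_from <= t and (self.valid_to is None or t < self.valid_to)
```

For example, a fact learned in 2025 about an employment that began in 2023 still answers "was this true in 2024?" correctly, because the two timelines are kept separate.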
### ✅ Four-Tier Memory Architecture (Autonomous Agents)

```
Core Memory ←→ Semantic Memory ←→ Working Memory ←→ Episodic Memory
 (Identity)    (Validated Facts)  (Recent Context)  (Event Archive)
```
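The division of labor between the tiers can be sketched in a few lines. This is a toy illustration with our own names and behavior, not GroundedMemory's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FourTierSketch:
    """Toy model of the four tiers: stable identity, validated facts,
    a bounded recent-context window, and an unbounded event archive."""
    core: Dict[str, str] = field(default_factory=dict)        # identity facts
    semantic: Dict[str, float] = field(default_factory=dict)  # fact -> confidence
    working: List[str] = field(default_factory=list)          # recent context
    episodic: List[str] = field(default_factory=list)         # full archive
    working_capacity: int = 10

    def add_event(self, event: str) -> None:
        # Every event lands in the episodic archive; working memory
        # keeps only the most recent `working_capacity` entries.
        self.episodic.append(event)
        self.working.append(event)
        if len(self.working) > self.working_capacity:
            self.working.pop(0)
```

The key property is that the working tier is lossy by design (it bounds prompt size), while the episodic tier never forgets, so anything evicted from working memory remains recoverable.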
### ✅ Learning Capabilities

- **Failure/success tracking**: learn from experience
- **User personalization**: multi-user context separation
- **Fact validation**: confidence-based knowledge consolidation
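Confidence-based consolidation can be sketched as follows; the threshold and the saturating update rule are our illustrative choices, not the package's actual algorithm:

```python
from collections import defaultdict

class FactValidator:
    """Sketch of confidence-based consolidation: repeated observations
    raise a fact's confidence, and crossing the threshold promotes it
    into validated (semantic) memory."""

    def __init__(self, threshold: float = 0.8, weight: float = 0.3):
        self.threshold = threshold
        self.weight = weight
        self.confidence = defaultdict(float)
        self.validated = set()

    def observe(self, fact: str) -> bool:
        # Saturating update keeps confidence strictly below 1.0,
        # so no single observation can fully validate a fact.
        c = self.confidence[fact]
        self.confidence[fact] = c + (1.0 - c) * self.weight
        if self.confidence[fact] >= self.threshold:
            self.validated.add(fact)
        return fact in self.validated
```

With these parameters a fact needs five consistent observations before it is promoted, which is the point of the pattern: one-off statements stay tentative, repeated ones consolidate.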
### ✅ Dual Storage Architecture

- **Markdown storage**: human-readable, observable AI memory evolution
- **LanceDB storage**: vector search with SQL capabilities via AbstractCore
- **Dual mode**: best of both worlds, transparency plus powerful search
- **AI reflections**: automatic experiential notes about interactions
- **Bidirectional links**: connect interactions to AI insights
- **Search capabilities**: text-based and semantic similarity search
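The dual-write pattern behind this list is simple to demonstrate. The sketch below writes one observable markdown file plus one JSON index record standing in for the vector-store entry; the paths, filenames, and field names are illustrative, not AbstractMemory's actual layout:

```python
import hashlib
import json
import time
from pathlib import Path

def store_interaction_dual(base: Path, user: str, query: str, reply: str) -> Path:
    """Illustrative dual write: markdown for observability, plus a
    JSON index record in place of a real vector-database insert."""
    digest = hashlib.sha256((query + reply).encode()).hexdigest()[:6]
    day_dir = base / "verbatim" / user / time.strftime("%Y/%m/%d")
    day_dir.mkdir(parents=True, exist_ok=True)

    # Write 1: observable markdown, readable in any editor or git diff.
    md_file = day_dir / f"int_{digest}.md"
    md_file.write_text(
        f"# Interaction {digest}\n\n**User:** {query}\n\n**Agent:** {reply}\n"
    )

    # Write 2: a searchable index record (a real backend would also
    # embed the text and insert the vector into LanceDB).
    index = base / "index.json"
    records = json.loads(index.read_text()) if index.exists() else []
    records.append({"id": digest, "user": user, "path": str(md_file)})
    index.write_text(json.dumps(records, indent=2))
    return md_file
```

Because both writes share the same content hash, a search hit in the index always resolves back to the exact human-readable file that produced it.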
## 🧪 Testing & Validation

AbstractMemory includes 200+ comprehensive tests with real implementations:

```bash
# Run all tests
python -m pytest tests/ -v

# Run specific test suites
python -m pytest tests/simple/ -v       # Simple memory types
python -m pytest tests/components/ -v   # Memory components
python -m pytest tests/storage/ -v      # Storage system tests
python -m pytest tests/integration/ -v  # Full system integration

# Test with real LLM providers (requires AbstractCore)
python -m pytest tests/integration/test_llm_real_usage.py -v

# Test comprehensive dual storage serialization
python -m pytest tests/storage/test_dual_storage_comprehensive.py -v
```
## 🔗 AbstractLLM Ecosystem Integration

AbstractMemory integrates seamlessly with the broader ecosystem.

### With AbstractCore

```python
from abstractllm import create_llm
from abstractmemory import create_memory

# Create the LLM provider
provider = create_llm("anthropic", model="claude-3-5-haiku-latest")

# Create memory with embedding integration
memory = create_memory(
    "grounded",
    enable_kg=True,
    storage_backend="dual",
    storage_path="./memory",
    storage_uri="./memory.db",
    embedding_provider=provider
)

# Use them together in agent reasoning
context = memory.get_full_context(query)
response = provider.generate(prompt, system_prompt=context)
memory.add_interaction(query, response.content)

# Search stored memories by semantic similarity
similar_memories = memory.search_stored_interactions("related concepts")
```
### With AbstractAgent (Future)

```python
from abstractagent import create_agent
from abstractmemory import create_memory

# Autonomous agent with sophisticated memory
memory = create_memory("grounded", working_capacity=20)
agent = create_agent("autonomous", memory=memory, provider=provider)

# The agent automatically uses memory for consistency and personalization
response = agent.execute(task, user_id="alice")
```
## 🏗️ Architecture Principles

- **No over-engineering**: memory complexity matches agent requirements
- **Real-implementation testing**: no mocks; all tests use real implementations
- **SOTA research foundation**: built on proven patterns (MemGPT, Zep, Graphiti)
- **Clean abstractions**: simple interfaces, powerful implementations
- **Performance optimized**: fast operations for simple agents, scalable for complex ones
## 📊 Performance Characteristics

- **Simple memory**: < 1 ms operations, minimal overhead
- **Complex memory**: < 100 ms context generation, efficient consolidation
- **Scalability**: handles thousands of memory items efficiently
- **Real LLM integration**: context generation plus LLM calls complete in seconds
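Latency budgets like these are easy to spot-check. The harness below (ours, not part of the package) times any zero-argument callable and reports the median, using a plain list append as a stand-in for a simple-memory operation:

```python
import statistics
import time

def median_op_latency(op, n: int = 1000) -> float:
    """Run a zero-argument callable n times and return the median
    per-call latency in seconds."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        op()
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

# Stand-in for a simple-memory append; a real check would time
# e.g. scratchpad.add_thought(...) the same way.
log: list = []
latency = median_op_latency(lambda: log.append("thought"))
```

The median is preferable to the mean here because a single GC pause or page fault would otherwise dominate a thousand sub-microsecond samples.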
## 🤝 Contributing

AbstractMemory is part of the AbstractLLM ecosystem. See CONTRIBUTING.md for development guidelines.

## 📄 License

[License details]

*AbstractMemory: Smart memory for smart agents* 🧠✨