Multi-layered AI memory system with graph databases, vector stores, and intelligent processing pipelines
SmartMemory - Multi-Layered AI Memory System
Read the docs | Maya sample app
SmartMemory is a comprehensive AI memory system that provides persistent, multi-layered memory storage and retrieval for AI applications. It combines graph databases, vector stores, and intelligent processing pipelines to create a unified memory architecture.
Quick Install
pip install smartmemory[local] # Local memory + MCP server + viewer + CLI (recommended)
pip install smartmemory # Remote client only (connects to a SmartMemory service)
pip install smartmemory-core[lite] # Core library only, local mode (for developers)
pip install smartmemory-core[server] # Core library only, server mode (FalkorDB + Redis)
`smartmemory` is the distribution package: MCP server, graph viewer, and CLI. `smartmemory-core` is the core library for developers building on top of SmartMemory. `smartmemory[local]` bundles `smartmemory-core[lite]` for local SQLite storage. Without `[local]`, it is a remote client only.
SmartMemory Lite: No Docker Required
from smartmemory.tools.factory import create_lite_memory, lite_context
# Simple usage: full LLM extraction runs if OPENAI_API_KEY is set
memory = create_lite_memory()
item_id = memory.ingest("Alice leads Project Atlas")
results = memory.search("who leads Atlas", top_k=5)
# Preferred in scripts: cleans up globals and closes SQLite on exit
with lite_context() as memory:
item_id = memory.ingest("Alice leads Project Atlas")
results = memory.search("who leads Atlas")
# Force no LLM calls (even if OPENAI_API_KEY is set)
from smartmemory.pipeline.config import PipelineConfig
memory = create_lite_memory(pipeline_profile=PipelineConfig.lite(llm_enabled=False))
Or via CLI:
smartmemory-core add "Alice leads Project Atlas"
smartmemory-core search "who leads Atlas"
smartmemory-core rebuild # Reindex vector store from graph data
smartmemory-core watch /path/to/vault # Auto-ingest new/changed .md files
Architecture Overview
SmartMemory implements a multi-layered memory architecture with the following components:
Core Components
- SmartMemory: Main unified memory interface (`smartmemory.smart_memory.SmartMemory`)
- SmartGraph: Graph database backend using FalkorDB for relationship storage
- Memory Types: Specialized memory stores for different data types
- Pipeline Stages: Processing stages for ingestion, enrichment, and evolution
- Plugin System: Extensible architecture for custom evolvers and enrichers
Memory Types
- Working Memory: Short-term context buffer (in-memory, capacity=10)
- Semantic Memory: Facts and concepts with vector embeddings
- Episodic Memory: Personal experiences and learning history
- Procedural Memory: Skills, strategies, and learned patterns
- Zettelkasten Memory: Bidirectional note-taking system with AI-powered knowledge discovery
- Reasoning Memory: Chain-of-thought traces capturing "why" decisions were made (System 2)
- Opinion Memory: Beliefs with confidence scores, reinforced or contradicted over time
- Observation Memory: Synthesized entity summaries from scattered facts
- Decision Memory: First-class decisions with confidence tracking, provenance chains, and lifecycle management
Storage Backends
- Lite mode: SQLite graph + usearch vectors; no Docker, no external services
- Server mode: FalkorDB (graph + vectors) + Redis (caching); full-featured, requires Docker
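Regardless of backend, the vector store's job during search is the same: rank stored embeddings by similarity to the query embedding. As an illustration of that ranking step (plain Python, not the SmartMemory API; the toy 2-d "embeddings" are made up):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Toy 2-d "embeddings": search ranks stored items by similarity to the query
query = [1.0, 0.2]
store = {"python tips": [0.9, 0.1], "pasta recipe": [0.1, 0.9]}
best = max(store, key=lambda key: cosine_similarity(query, store[key]))
```

Real backends (usearch, FalkorDB) index high-dimensional embeddings for approximate nearest-neighbor lookup instead of scanning every item, but the scoring idea is the same.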
Processing Pipeline
The memory ingestion flow processes data through several stages:
- Input Adaptation: Convert input data to MemoryItem format
- Classification: Determine appropriate memory type
- Extraction: Extract entities and relationships
- Storage: Persist to appropriate memory stores
- Linking: Create connections between related memories
- Enrichment: Enhance memories with additional context
- Evolution: Transform memories based on configured rules
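The stages above can be pictured as functions folded over a memory item in order. A minimal sketch of that composition pattern (illustrative only: the `Item` class, stage logic, and stage list here are simplified stand-ins, not SmartMemory internals):

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    content: str
    memory_type: str = "semantic"
    metadata: dict = field(default_factory=dict)

def classify(item):
    # Toy classification: a first-person cue suggests episodic memory
    item.memory_type = "episodic" if "I " in item.content else "semantic"
    return item

def extract(item):
    # Toy entity extraction: capitalized tokens stand in for real NER
    item.metadata["entities"] = [w for w in item.content.split() if w.istitle()]
    return item

def store(item):
    item.metadata["stored"] = True
    return item

STAGES = [classify, extract, store]

def run_pipeline(item, stages=STAGES):
    """Fold the item through each stage in order."""
    for stage in stages:
        item = stage(item)
    return item

result = run_pipeline(Item("Alice leads Project Atlas"))
```

Because each stage takes and returns an item, stages can be reordered, skipped, or swapped, which is the property the real pipeline's breakpoint and retry features rely on.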
Key Features
- Multi-Type Memory System: Working, Semantic, Episodic, and Procedural memory types
- Zero-Infra Lite Mode: SQLite + usearch backend; `pip install smartmemory-core[lite]` and go
- Server Mode: FalkorDB graph + Redis caching for production-scale deployments
- Graph-Based Storage: Complex relationship modeling with vector similarity search
- Extensible Pipeline: Modular processing stages for ingestion and evolution
- Plugin Architecture: 30+ built-in plugins with external plugin support
- Plugin Security: Sandboxing, permissions, and resource limits for safe plugin execution
- Flexible Scoping: Optional `ScopeProvider` for multi-tenancy or unrestricted OSS usage
- Zero Configuration: Works out-of-the-box for single-user applications
- Configuration Management: Flexible configuration with environment variable support
Installation
From PyPI (Recommended)
# Lite mode (zero infra: SQLite + usearch, no Docker required)
pip install smartmemory-core[lite]
# Server mode (FalkorDB + Redis, requires Docker or manual install)
pip install smartmemory-core[server]
# Optional features
pip install smartmemory-core[cli] # CLI tools
pip install smartmemory-core[watch] # Vault watcher for Markdown files
pip install smartmemory-core[wikipedia] # Wikipedia enrichment
pip install smartmemory-core[all] # All optional features
From Source (Development)
git clone https://github.com/smart-memory/smart-memory.git
cd smart-memory
pip install -e ".[dev]"
# Install spaCy model for entity extraction
python -m spacy download en_core_web_sm
Infrastructure
Lite mode (smartmemory-core[lite]): No external services needed. SQLite and usearch are bundled.
Server mode (smartmemory-core[server]): Requires FalkorDB and Redis:
# Docker Compose (recommended): run from the repository root
docker-compose up -d
# Starts FalkorDB on port 9010, Redis on port 9012
# Or manually
docker run -d -p 9010:6379 falkordb/falkordb:latest
docker run -d -p 9012:6379 redis:7-alpine
# Verify
redis-cli -p 9010 PING # FalkorDB
redis-cli -p 9012 PING # Redis
Quick Start
Basic Usage (Lite Mode)
from smartmemory import MemoryItem
from smartmemory.tools.factory import create_lite_memory
# No Docker, no config; it just works
memory = create_lite_memory()
# Ingest a memory (full pipeline: extract → store → link → enrich → evolve)
item_id = memory.ingest("User prefers Python for data analysis tasks")
# Or use add() for simple storage without pipeline
item = MemoryItem(
content="Quick note about Python",
memory_type="semantic",
metadata={'topic': 'preferences'}
)
memory.add(item)
# Search memories (automatically scoped)
results = memory.search("Python programming", top_k=5)
for result in results:
print(f"Content: {result.content}")
print(f"Type: {result.memory_type}")
# Get memory summary
summary = memory.get_all_items_debug()
print(f"Total memories: {summary['total_items']}")
Using Different Memory Types
from smartmemory import SmartMemory, MemoryItem
# Initialize SmartMemory
memory = SmartMemory()
# Add working memory (short-term context)
working_item = MemoryItem(
content="Current conversation context",
memory_type="working"
)
memory.add(working_item)
# Add semantic memory (facts and concepts)
semantic_item = MemoryItem(
content="Python is a high-level programming language",
memory_type="semantic"
)
memory.add(semantic_item)
# Add episodic memory (experiences)
episodic_item = MemoryItem(
content="User completed Python tutorial on 2024-01-15",
memory_type="episodic"
)
memory.add(episodic_item)
# Add procedural memory (skills and procedures)
procedural_item = MemoryItem(
content="To sort a list in Python: use list.sort() or sorted(list)",
memory_type="procedural"
)
memory.add(procedural_item)
# Add Zettelkasten note (interconnected knowledge)
zettel_item = MemoryItem(
content="# Machine Learning\n\nML learns from data using algorithms.",
memory_type="zettel",
metadata={'title': 'ML Basics', 'tags': ['ai', 'ml']}
)
memory.add(zettel_item)
CLI Usage (Optional)
# Install CLI tools
pip install smartmemory-core[cli]
# Add a memory
smartmemory-core add "Python is great for AI" --memory-type semantic
# Search memories
smartmemory-core search "Python programming" --top-k 5
# Rebuild the vector index from graph data
smartmemory-core rebuild
# Auto-ingest new/changed Markdown files from a vault directory
pip install smartmemory-core[watch]
smartmemory-core watch /path/to/vault
Use Cases
Conversational AI Systems
- Maintain context across multiple conversation sessions
- Learn user preferences and adapt responses
- Build comprehensive user profiles over time
Educational Applications
- Track learning progress and adapt teaching strategies
- Remember previous topics and build upon them
- Personalize content based on individual learning patterns
Knowledge Management
- Store and retrieve complex information relationships
- Connect related concepts across different domains
- Evolve understanding through continuous learning
- Build a personal knowledge base with Zettelkasten method
Personal AI Assistants
- Remember user preferences and past interactions
- Provide contextually relevant recommendations
- Learn from user feedback to improve responses
Examples
The examples/ directory contains several demonstration scripts:
- `memory_system_usage_example.py`: Basic memory operations (add, search, delete)
- `zettelkasten_example.py`: Complete Zettelkasten system demonstration
- `conversational_assistant_example.py`: Conversational AI with memory
- `advanced_programming_tutor.py`: Educational application example
- `working_holistic_example.py`: Comprehensive multi-session demo
- `background_processing_demo.py`: Asynchronous processing example
Configuration
SmartMemory uses environment variables for configuration:
Environment Variables
Key environment variables:
- `OPENAI_API_KEY`: OpenAI API key for embeddings and LLM extraction (auto-detected in lite mode)
- `GROQ_API_KEY`: Groq API key, an alternative to OpenAI for LLM extraction (auto-detected in lite mode)
Server mode only:
- `FALKORDB_HOST`: FalkorDB server host (default: localhost)
- `FALKORDB_PORT`: FalkorDB server port (default: 9010)
- `REDIS_HOST`: Redis server host (default: localhost)
- `REDIS_PORT`: Redis server port (default: 9012)
# Lite mode: only an API key is needed (optional, enables LLM extraction)
export OPENAI_API_KEY=your-api-key-here
# Server mode: also needs database hosts
export FALKORDB_HOST=localhost
export FALKORDB_PORT=9010
export REDIS_HOST=localhost
export REDIS_PORT=9012
Memory Evolution
SmartMemory includes built-in evolvers that automatically transform memories:
Available Evolvers
- WorkingToEpisodicEvolver: Converts working memory to episodic when buffer is full
- WorkingToProceduralEvolver: Extracts repeated patterns as procedures
- EpisodicToSemanticEvolver: Promotes stable facts to semantic memory
- EpisodicToZettelEvolver: Converts episodic events to Zettelkasten notes
- EpisodicDecayEvolver: Archives old episodic memories
- SemanticDecayEvolver: Prunes low-relevance semantic facts
- ZettelPruneEvolver: Merges duplicate or low-quality notes
- DecisionConfidenceEvolver: Decays confidence on stale decisions, auto-retracts below threshold
Evolvers run automatically as part of the memory lifecycle. See the examples directory for evolution demonstrations.
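To make the evolver idea concrete, here is a self-contained sketch of the confidence-decay pattern described for DecisionConfidenceEvolver. The half-life, retraction threshold, and dict field names are illustrative assumptions, not the actual implementation:

```python
from datetime import datetime, timedelta

def decay_confidence(decisions, now, half_life_days=30.0, retract_below=0.2):
    """Exponentially decay each decision's confidence by age; retract stale ones."""
    for d in decisions:
        age_days = (now - d["updated_at"]).days
        d["confidence"] *= 0.5 ** (age_days / half_life_days)
        if d["confidence"] < retract_below:
            d["status"] = "retracted"
    return decisions

now = datetime(2024, 6, 1)
decisions = [
    {"confidence": 0.9, "updated_at": now - timedelta(days=5), "status": "active"},
    {"confidence": 0.9, "updated_at": now - timedelta(days=120), "status": "active"},
]
decay_confidence(decisions, now)
```

A recent decision keeps most of its confidence, while one untouched for four half-lives drops below the threshold and is retracted, which is the "auto-retract below threshold" behavior in the evolver list above.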
Plugin System
SmartMemory features a unified, extensible plugin architecture that allows you to customize and extend functionality. All plugins follow a consistent class-based pattern.
Built-in Plugins
SmartMemory includes 30+ built-in plugins across 4 types:
- 7 Extractors: Extract entities and relationships
`LLMExtractor`, `LLMSingleExtractor`, `ConversationAwareLLMExtractor`, `SpacyExtractor`, `HybridExtractor`, `DecisionExtractor`, `ReasoningExtractor`
- 7 Enrichers: Add context and metadata to memories
`BasicEnricher`, `SentimentEnricher`, `TemporalEnricher`, `TopicEnricher`, `SkillsToolsEnricher`, `WikipediaEnricher`, `LinkExpansionEnricher`
- 1 Grounder: Connect to external knowledge
`WikipediaGrounder`
- 13 Evolvers: Transform memories based on rules
`WorkingToEpisodicEvolver`, `WorkingToProceduralEvolver`, `EpisodicToSemanticEvolver`, `EpisodicToZettelEvolver`, `EpisodicDecayEvolver`, `SemanticDecayEvolver`, `ZettelPruneEvolver`, `DecisionConfidenceEvolver`, `OpinionSynthesisEvolver`, `ObservationSynthesisEvolver`, `OpinionReinforcementEvolver`, and more
Creating Custom Plugins
Create your own plugins by extending the base classes:
from smartmemory.plugins.base import EnricherPlugin, PluginMetadata
class MyCustomEnricher(EnricherPlugin):
@classmethod
def metadata(cls):
return PluginMetadata(
name="my_enricher",
version="1.0.0",
author="Your Name",
description="My custom enricher",
plugin_type="enricher",
dependencies=["some-lib>=1.0.0"],
security_profile="standard",
requires_network=False,
requires_llm=False
)
def enrich(self, item, node_ids=None):
# Your enrichment logic
item.metadata["custom_field"] = "value"
return item.metadata
See the examples directory for complete plugin examples.
Publishing Plugins
Publish your plugin as a Python package with entry points:
# pyproject.toml
[project.entry-points."smartmemory.plugins.enrichers"]
my_enricher = "my_package:MyCustomEnricher"
Install and use:
pip install my-smartmemory-plugin
# Automatically discovered and loaded!
Plugin Types
- ExtractorPlugin: Extract entities and relationships from text
- EnricherPlugin: Add metadata and context to memories
- GrounderPlugin: Link memories to external knowledge sources
- EvolverPlugin: Transform memories based on conditions
All plugins are automatically discovered and registered at startup.
Plugin Security
SmartMemory includes a comprehensive security system for plugins:
- 4 Security Profiles: `trusted`, `standard` (default), `restricted`, `untrusted`
- Permission System: Control memory, network, file, and LLM access
- Resource Limits: Automatic timeout (30s), memory limits, network request limits
- Sandboxing: Isolated execution with security enforcement
- Static Validation: Detects security issues before execution
# Plugins are secure by default
PluginMetadata(
security_profile="standard", # Balanced security
requires_network=True, # Explicitly declare requirements
requires_llm=False
)
See docs/PLUGIN_SECURITY.md for complete security documentation.
Examples
See the examples/ directory for complete plugin examples:
- `custom_enricher_example.py`: Sentiment analysis and keyword extraction
- `custom_evolver_example.py`: Memory promotion and archival
- `custom_extractor_example.py`: Regex and domain-specific NER
- `custom_grounder_example.py`: DBpedia and custom API grounding
Testing
Run the test suite:
# Run all tests
PYTHONPATH=. pytest -v tests/
# Run specific test categories
PYTHONPATH=. pytest tests/unit/
PYTHONPATH=. pytest tests/integration/
PYTHONPATH=. pytest tests/e2e/
# Run examples
PYTHONPATH=. python examples/memory_system_usage_example.py
PYTHONPATH=. python examples/conversational_assistant_example.py
API Reference
SmartMemory Class
Main interface for memory operations:
class SmartMemory:
def __init__(
self,
scope_provider: Optional[ScopeProvider] = None,
vector_backend=None, # Any VectorStoreBackend; None uses default (FalkorDB)
cache=None, # Any cache-compatible object; e.g. NoOpCache()
observability: bool = True, # False disables Redis Streams emission and metrics
pipeline_profile=None, # PipelineConfig instance; PipelineConfig.lite() for zero-infra
entity_ruler_patterns=None, # Any object with get_patterns() -> dict[str, str]
)
# Primary API
def ingest(self, item, sync=True, **kwargs) -> str # Full pipeline
def add(self, item, **kwargs) -> str # Simple storage
def get(self, item_id: str) -> Optional[MemoryItem]
def search(self, query: str, top_k: int = 5, memory_type: str = None) -> List[MemoryItem]
def delete(self, item_id: str) -> bool
# Graph Integrity (v0.3.8+)
def delete_run(self, run_id: str) -> int # Delete entities by pipeline run
def rename_entity_type(self, old: str, new: str) -> int # Ontology evolution
def merge_entity_types(self, sources: List[str], target: str) -> int
# Advanced
def run_clustering(self) -> dict
def run_evolution_cycle(self) -> None
def personalize(self, traits: dict = None, preferences: dict = None) -> None
def get_all_items_debug(self) -> Dict[str, Any]
API Design:
- `ingest()`: Full agentic pipeline (extract → store → link → enrich → evolve). Use for user-facing ingestion.
- `add()`: Simple storage (normalize → store → embed). Use for internal operations or when the pipeline is not needed.
Scoping:
- OSS mode: No scoping needed; all data is accessible
- For multi-tenant applications, pass a `ScopeProvider` to enable automatic filtering
- See `docs/SECURITY_AND_AUTH.md` for complete details
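The scoping behavior can be pictured with a toy filter. This is hypothetical: the real `ScopeProvider` interface is not shown here, and the dict shape is invented for illustration:

```python
def scoped_filter(items, scope=None):
    """Return the items visible to the caller.

    scope=None models OSS mode (no filtering). Otherwise only items tagged
    with the caller's tenant, plus untagged items, are visible.
    """
    if scope is None:
        return list(items)
    return [i for i in items if i.get("tenant") in (None, scope["tenant"])]

items = [
    {"id": 1, "tenant": "acme"},
    {"id": 2, "tenant": "globex"},
    {"id": 3},  # untagged: visible to everyone
]
acme_view = scoped_filter(items, {"tenant": "acme"})
```

The point of injecting scoping as a provider rather than hard-coding it is exactly this split: single-user deployments pay nothing, and multi-tenant deployments get filtering applied automatically on every search.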
MemoryItem Class
Core data structure for memory storage:
@dataclass
class MemoryItem:
content: str
memory_type: str = 'semantic'
item_id: str = field(default_factory=lambda: str(uuid.uuid4()))
valid_start_time: Optional[datetime] = None
valid_end_time: Optional[datetime] = None
transaction_time: datetime = field(default_factory=datetime.now)
embedding: Optional[List[float]] = None
entities: Optional[list] = None
relations: Optional[list] = None
metadata: dict = field(default_factory=dict)
Security Metadata:
- For OSS usage, security metadata fields are not needed
- For multi-tenant applications, use a `ScopeProvider` for automatic metadata injection
- See `docs/SECURITY_AND_AUTH.md` for details
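The `valid_start_time`/`valid_end_time` pair supports valid-time ("as-of") queries over facts. A sketch of that filter over plain dicts mirroring those fields (illustrative, not the SmartMemory query API):

```python
from datetime import datetime

def valid_at(item, when):
    """True if `when` falls inside the item's valid-time interval (open-ended if unset)."""
    start = item.get("valid_start_time")
    end = item.get("valid_end_time")
    return (start is None or start <= when) and (end is None or when < end)

facts = [
    {"content": "Alice leads Atlas",
     "valid_start_time": datetime(2023, 1, 1),
     "valid_end_time": datetime(2024, 1, 1)},
    {"content": "Bob leads Atlas",
     "valid_start_time": datetime(2024, 1, 1),
     "valid_end_time": None},  # still valid
]
current = [f["content"] for f in facts if valid_at(f, datetime(2024, 6, 1))]
```

Together with `transaction_time` (when the system recorded the fact), this is the standard bitemporal pattern: you can ask both "what was true in June 2024" and "what did we believe at the time".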
Dependencies
Core Dependencies
SmartMemory requires the following key dependencies:
- `spacy`: Natural language processing and entity extraction
- `dspy`: LLM programming framework for extraction and classification
- `litellm`: LLM integration layer
- `openai`: OpenAI API client (for embeddings)
- `scikit-learn`: Machine learning utilities
- `pydantic`: Data validation
- `python-dateutil`: Date/time handling
- `vaderSentiment`: Sentiment analysis
- `jinja2`: Template rendering
Lite mode adds `usearch` (vector search) and uses Python's built-in `sqlite3`.
Server mode adds `falkordb` (graph + vector storage) and `redis` (caching).
Optional Dependencies
Install additional features as needed:
# Modes
pip install smartmemory-core[lite] # Zero-infra local mode (SQLite + usearch, no Docker)
pip install smartmemory-core[server] # Server mode (FalkorDB + Redis)
# Tools
pip install smartmemory-core[cli] # Command-line interface (add, search, rebuild)
pip install smartmemory-core[watch] # Vault watcher for auto-ingesting Markdown files
# Integrations
pip install smartmemory-core[slack] # Slack integration
pip install smartmemory-core[aws] # AWS integration
pip install smartmemory-core[wikipedia] # Wikipedia enrichment
# Everything
pip install smartmemory-core[all] # All optional features
Contributing
Contributions are welcome! Please follow these guidelines:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
For major changes, please open an issue first to discuss the proposed changes.
License
SmartMemory is dual-licensed to provide flexibility for both open-source and commercial use. See LICENSE for details.
Security
SmartMemory takes plugin security seriously. All plugins run in a sandboxed environment with:
- Permission checks: Plugins must declare what they access
- Resource limits: Automatic timeouts and memory limits
- Execution isolation: Sandboxed plugin execution
- Static analysis: Security validation before execution
External plugins use the standard security profile by default. See docs/PLUGIN_SECURITY.md for details.
Links
- PyPI Package: https://pypi.org/project/smartmemory-core/
- Documentation: https://docs.smartmemory.ai
- GitHub Repository: https://github.com/smart-memory/smart-memory
- Issue Tracker: https://github.com/smart-memory/smart-memory/issues
- Security & Auth: See `docs/SECURITY_AND_AUTH.md`
- Plugin Security: See `docs/PLUGIN_SECURITY.md`
Get started with SmartMemory today!
pip install smartmemory[local]
Explore the examples directory for complete demonstrations and use cases.
Recently Completed
Zero-Infra Lite Mode (v0.3.9+)
- `smartmemory[lite]`: SQLite + usearch backend; no Docker, no FalkorDB, no Redis required
- `create_lite_memory()`: Factory function from `smartmemory.tools.factory` for zero-config setup
- `lite_context()`: Context manager that cleans up globals and closes SQLite on exit
- `PipelineConfig.lite(llm_enabled=None)`: Named preset disabling coreference, network enrichers, and evolution; LLM extraction is auto-detected from the `OPENAI_API_KEY`/`GROQ_API_KEY` env vars and can be overridden with `llm_enabled=True/False`
- `smartmemory[watch]`: Vault watcher for auto-ingesting new/changed Markdown files
- Constructor injection: `vector_backend`, `cache`, `observability`, `pipeline_profile`, `entity_ruler_patterns` params added to `SmartMemory.__init__`; no monkey-patching required
Unified Pipeline v2 (v0.3.5)
- 11-stage composable pipeline: classify → coreference → simplify → entity_ruler → llm_extract → ontology_constrain → store → link → enrich → ground → evolve
- Breakpoint execution: `run_to()`, `run_from()`, `undo_to()` for debugging and resumption
- Per-stage retry: Configurable retry policies with exponential backoff
- Async mode: `ingest(sync=False)` with Redis Streams transport
- Pipeline metrics: Fire-and-forget metrics emission via Redis Streams
Self-Learning Ontology (v0.3.4)
- OntologyGraph: Dedicated FalkorDB graph for entity types with three-tier status (seed → provisional → confirmed)
- Promotion pipeline: Six-gate evaluation (name length, blocklist, confidence, frequency, consistency, LLM validation)
- Pattern manager: Hot-reloadable learned entity patterns with Redis pub/sub
- Layered ontology: Base + overlay subscription system with hide/unhide, pin/unpin
- Template catalog: 3 built-in templates (General, Software Engineering, Business & Finance)
Reasoning & Validation (v0.3.2)
- Graph validation: `MemoryValidator`, `EdgeValidator` for schema enforcement
- Health metrics: `GraphHealthChecker` with orphan ratio, provenance coverage
- Inference engine: Pattern-matching rules for automatic edge creation
- Symbolic reasoning: Residuation, query routing, proof trees, fuzzy confidence
Decision Memory (v0.3.0)
- First-class decisions: Confidence tracking, provenance chains, lifecycle management
- DecisionConfidenceEvolver: Evidence-based reinforcement/contradiction with decay
- Conflict detection: Semantic search + content overlap heuristic
- Causal chains: Recursive traversal of DERIVED_FROM, CAUSED_BY, INFLUENCES edges
Synthesis Memory (v0.2.6+)
- Opinion memory: Beliefs with confidence scores, reinforced/contradicted over time
- Observation memory: Synthesized entity summaries from scattered facts
- Reasoning memory: Chain-of-thought traces capturing "why" decisions were made
See CHANGELOG.md for complete version history.
Check the GitHub repository for the latest updates.
Download files
File details
Details for the file smartmemory_core-0.5.4.tar.gz.
File metadata
- Download URL: smartmemory_core-0.5.4.tar.gz
- Size: 697.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0728d8b000472a25596cc0b6dcbfa41a30f8e6e40ae9ec74a0893cd769119d53 |
| MD5 | 7c673e76bfffbe657a81576e7bacb467 |
| BLAKE2b-256 | ad95b55eddb8b7e7558968cb94192f82f71b5ff72d076b5028f6e8557577a18b |
File details
Details for the file smartmemory_core-0.5.4-py3-none-any.whl.
File metadata
- Download URL: smartmemory_core-0.5.4-py3-none-any.whl
- Size: 894.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f224cfa90754c143db93e59dc834fa56862bade3ec0ff990bcb81a7f3fc439c1 |
| MD5 | 1aad1b02d8f5abce17839d52ae0fbef9 |
| BLAKE2b-256 | 3aa85d5e2ccace1646d88b368c3056eb10d624c4bff1b4f336f867473f5b0265 |