Memoriai
The Open-Source Memory Layer for AI Agents & Multi-Agent Systems (v1.2)
Give your AI agents structured, persistent memory with intelligent context injection - no more repeating yourself!
🎯 Philosophy
- A second memory for all your LLM work - Never repeat context again
- Dual-mode memory injection - Conscious short-term memory + Auto intelligent search
- Flexible database connections - SQLite, PostgreSQL, MySQL support
- Pydantic-based intelligence - Structured memory processing with validation
- Simple, reliable architecture - Just works out of the box
⚡ Quick Start
```bash
pip install memoriai
```

```python
from memoriai import Memori

# Create your workspace memory with conscious mode
office_work = Memori(
    database_connect="sqlite:///office_memory.db",
    conscious_ingest=True,  # Short-term working memory (one-shot context)
    openai_api_key="your-key",
)
office_work.enable()  # Start recording conversations

# Use ANY LLM library - context automatically injected!
from litellm import completion

response = completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Help me with Python testing"}],
)
# ✨ Short-term working memory automatically included once per session
```
🧠 How It Works
1. Universal Recording
```python
office_work.enable()  # Records ALL LLM conversations automatically
```
2. Intelligent Processing
- Entity Extraction: Extracts people, technologies, projects
- Smart Categorization: Facts, preferences, skills, rules
- Pydantic Validation: Structured, type-safe memory storage
3. Dual Memory Modes
🧠 Conscious Mode - Short-Term Working Memory
```python
conscious_ingest=True  # One-shot short-term memory injection
```
- At Startup: Conscious agent analyzes long-term memory patterns
- Memory Promotion: Moves essential conversations to short-term storage
- One-Shot Injection: Injects working memory once at conversation start
- Like Human Short-Term Memory: Names, current projects, preferences readily available
🔍 Auto Mode - Dynamic Database Search
```python
auto_ingest=True  # Continuous intelligent memory retrieval
```
- Every LLM Call: Retrieval agent analyzes user query intelligently
- Full Database Search: Searches through entire memory database
- Context-Aware: Injects relevant memories based on current conversation
- Performance Optimized: Caching, async processing, background threads
🧠 Memory Modes Explained
Conscious Mode - Short-Term Working Memory
```python
# Mimics human conscious memory - essential info readily available
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    conscious_ingest=True,  # 🧠 Short-term working memory
    openai_api_key="sk-...",
)
```
How Conscious Mode Works:
- At Startup: Conscious agent analyzes long-term memory patterns
- Essential Selection: Promotes the 5-10 most important conversations to short-term memory
- One-Shot Injection: Injects this working memory once at conversation start
- No Repeats: Won't inject again during the same session
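The one-shot behavior above can be sketched as a session flag that gates injection. This is a hypothetical illustration of the mechanism, not the library's actual internals; `ConsciousInjector` and its method names are invented for the example.

```python
# Hypothetical sketch of one-shot working-memory injection: essential
# memories are prepended as a system message once, then never again
# for the rest of the session.
class ConsciousInjector:
    def __init__(self, essential_memories):
        self.essential_memories = essential_memories
        self.injected = False  # has this session already received context?

    def prepare_messages(self, messages):
        # Inject working memory only on the first call of the session.
        if self.injected:
            return messages
        self.injected = True
        context = "\n".join(self.essential_memories)
        return [{"role": "system", "content": f"Known context:\n{context}"}] + messages

injector = ConsciousInjector(["User's name is Alex", "Current project: e-commerce API"])
first = injector.prepare_messages([{"role": "user", "content": "Help me test this"}])
second = injector.prepare_messages([{"role": "user", "content": "Thanks!"}])
# `first` starts with a system message; `second` passes through unchanged
```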
Auto Mode - Dynamic Intelligent Search
```python
# Searches entire database dynamically based on user queries
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    auto_ingest=True,  # 🔍 Smart database search
    openai_api_key="sk-...",
)
```
How Auto Mode Works:
- Every LLM Call: Retrieval agent analyzes user input
- Query Planning: Uses AI to understand what memories are needed
- Smart Search: Searches through entire database (short-term + long-term)
- Context Injection: Injects the 3-5 most relevant memories per call
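The search step can be illustrated with a toy relevance function. The real Retrieval Agent uses an LLM for query planning; this stdlib sketch only shows the general shape of ranking stored memories against a query.

```python
# Toy relevance search (illustrative scoring, not the library's retrieval
# agent): rank stored memories by word overlap with the query and keep
# the top few hits.
def retrieve_relevant(query, memories, limit=3):
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(m.lower().split())), m) for m in memories]
    scored = [(score, m) for score, m in scored if score > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for _, m in scored[:limit]]

memories = [
    "User prefers pytest for Python testing",
    "User works with PostgreSQL databases",
    "Current project uses FastAPI",
    "User likes dark-roast coffee",
]
print(retrieve_relevant("Help me with Python testing", memories))
```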
Combined Mode - Best of Both Worlds
```python
# Get both working memory AND dynamic search
memori = Memori(
    conscious_ingest=True,  # Working memory once
    auto_ingest=True,       # Dynamic search every call
    openai_api_key="sk-...",
)
```
Intelligence Layers:
- Memory Agent - Processes every conversation with Pydantic structured outputs
- Conscious Agent - Analyzes patterns, promotes long-term → short-term memories
- Retrieval Agent - Intelligently searches and selects relevant context
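To make the Memory Agent's output concrete, here is a sketch of what a processed memory record might look like. The library uses Pydantic models; this stand-in uses stdlib dataclasses, and the field names are assumptions rather than the actual schema.

```python
# Illustrative shape of a structured memory record (field names are
# assumptions; the real library validates records with Pydantic).
from dataclasses import dataclass, field
from enum import Enum

class MemoryCategory(str, Enum):
    FACT = "fact"
    PREFERENCE = "preference"
    SKILL = "skill"
    RULE = "rule"
    CONTEXT = "context"

@dataclass
class MemoryRecord:
    content: str
    category: MemoryCategory
    entities: list = field(default_factory=list)  # extracted people/tech/projects
    importance: float = 0.5  # used when promoting long-term -> short-term

record = MemoryRecord(
    content="Experienced with FastAPI",
    category=MemoryCategory.SKILL,
    entities=["FastAPI"],
    importance=0.8,
)
```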
What gets prioritized in Conscious Mode:
- 👤 Personal Identity: Your name, role, location, basic info
- ❤️ Preferences & Habits: What you like, work patterns, routines
- 🛠️ Skills & Tools: Technologies you use, expertise areas
- 📊 Current Projects: Ongoing work, learning goals
- 🤝 Relationships: Important people, colleagues, connections
- 🔄 Repeated References: Information you mention frequently
🗄️ Memory Types
| Type | Purpose | Example | Auto-Promoted |
|---|---|---|---|
| Facts | Objective information | "I use PostgreSQL for databases" | ✅ High frequency |
| Preferences | User choices | "I prefer clean, readable code" | ✅ Personal identity |
| Skills | Abilities & knowledge | "Experienced with FastAPI" | ✅ Expertise areas |
| Rules | Constraints & guidelines | "Always write tests first" | ✅ Work patterns |
| Context | Session information | "Working on e-commerce project" | ✅ Current projects |
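The categorization in the table above is done by the Memory Agent with LLM structured outputs; a rough keyword-based stand-in shows the idea. The keyword lists here are invented for illustration.

```python
# Toy keyword-based categorizer mirroring the table above (the real
# Memory Agent uses an LLM; this is only a rough illustration).
KEYWORDS = {
    "rule": ["always", "never", "must"],
    "preference": ["prefer", "like", "favorite"],
    "skill": ["experienced", "expert", "know how"],
    "context": ["working on", "currently"],
}

def categorize(text):
    lowered = text.lower()
    for category, words in KEYWORDS.items():
        if any(w in lowered for w in words):
            return category
    return "fact"  # objective statements fall through to facts

print(categorize("Always write tests first"))        # rule
print(categorize("I prefer clean, readable code"))   # preference
print(categorize("I use PostgreSQL for databases"))  # fact
```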
🔧 Configuration
Simple Setup
```python
from memoriai import Memori

# Conscious mode - Short-term working memory
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    template="basic",
    conscious_ingest=True,  # One-shot context injection
    openai_api_key="sk-...",
)

# Auto mode - Dynamic database search
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    auto_ingest=True,  # Continuous memory retrieval
    openai_api_key="sk-...",
)

# Combined mode - Best of both worlds
memori = Memori(
    conscious_ingest=True,  # Working memory +
    auto_ingest=True,       # Dynamic search
    openai_api_key="sk-...",
)
```
Advanced Configuration
```python
from memoriai import Memori, ConfigManager

# Load from memori.json or environment
config = ConfigManager()
config.auto_load()

memori = Memori()
memori.enable()
```
Create memori.json:
```json
{
  "database": {
    "connection_string": "postgresql://user:pass@localhost/memori"
  },
  "agents": {
    "openai_api_key": "sk-...",
    "conscious_ingest": true,
    "auto_ingest": false
  },
  "memory": {
    "namespace": "my_project",
    "retention_policy": "30_days"
  }
}
```
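For reference, the file format above can be parsed with nothing but the stdlib. This is a minimal sketch of loading and sanity-checking the config; `ConfigManager.auto_load()` additionally handles environment variables, which are not shown here.

```python
# Minimal stdlib sketch of reading a memori.json-style config
# (key names follow the example above).
import json

config_text = """
{
  "database": {"connection_string": "sqlite:///my_memory.db"},
  "agents": {"conscious_ingest": true, "auto_ingest": false},
  "memory": {"namespace": "my_project", "retention_policy": "30_days"}
}
"""
config = json.loads(config_text)
assert "connection_string" in config["database"], "database connection is required"
print(config["memory"]["namespace"])  # my_project
```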
🔌 Universal Integration
Works with ANY LLM library:
```python
memori.enable()  # Enable universal recording

# LiteLLM (recommended)
from litellm import completion
completion(model="gpt-4", messages=[...])

# OpenAI
import openai
client = openai.OpenAI()
client.chat.completions.create(...)

# Anthropic
import anthropic
client = anthropic.Anthropic()
client.messages.create(...)

# All automatically recorded and contextualized!
```
🛠️ Memory Management
Automatic Background Analysis
```python
# Automatic analysis every 6 hours (when conscious_ingest=True)
memori.enable()  # Starts background conscious agent

# Manual analysis trigger
memori.trigger_conscious_analysis()

# Get essential conversations
essential = memori.get_essential_conversations(limit=5)
```
Memory Retrieval Tools
```python
from memoriai.tools import create_memory_tool
from litellm import completion

# Create memory search tool for your LLM
memory_tool = create_memory_tool(memori)

# Use in function calling
tools = [memory_tool]
completion(model="gpt-4", messages=[...], tools=tools)
```
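The exact object returned by `create_memory_tool` is not shown in this document; for orientation, a memory search tool in OpenAI-style function calling would look roughly like the following. The schema below (name `search_memory`, its parameters) is an assumption, not the library's actual output.

```python
# Sketch of an OpenAI-style function-calling schema for a memory search
# tool (the schema produced by create_memory_tool may differ).
memory_search_tool = {
    "type": "function",
    "function": {
        "name": "search_memory",
        "description": "Search the agent's stored memories for relevant context",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "What to look for"},
                "limit": {"type": "integer", "description": "Max results", "default": 5},
            },
            "required": ["query"],
        },
    },
}
print(memory_search_tool["function"]["name"])
```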
Context Control
```python
# Get relevant context for a query
context = memori.retrieve_context("Python testing", limit=5)
# Returns: 3 essential + 2 specific memories

# Search by category
skills = memori.search_memories_by_category("skill", limit=10)

# Get memory statistics
stats = memori.get_memory_stats()
```
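The "3 essential + 2 specific" composition can be sketched as a simple merge: always-on working memory first, then query-specific hits, deduplicated and capped. This is an illustration of the idea, not the library's code.

```python
# Illustrative context assembly: essential (working-memory) items first,
# then query-specific results, with duplicates removed and a hard cap.
def build_context(essential, specific, limit=5):
    combined = []
    for memory in essential + specific:
        if memory not in combined:
            combined.append(memory)
    return combined[:limit]

essential = ["Name: Alex", "Project: e-commerce API", "Prefers pytest"]
specific = ["Uses pytest fixtures heavily", "CI runs tests on push"]
context = build_context(essential, specific)
print(len(context))  # 5
```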
📋 Database Schema
```sql
-- Core tables created automatically
chat_history          -- All conversations
short_term_memory     -- Recent context (expires)
long_term_memory      -- Permanent insights
rules_memory          -- User preferences
memory_entities       -- Extracted entities
memory_relationships  -- Entity connections
```
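To make the table list concrete, here is a rough sqlite3 sketch of a few of these tables. The column sets are assumptions for illustration; the library creates its own schema automatically.

```python
# Hypothetical DDL for a subset of the core tables (column names are
# assumptions; Memori manages the real schema itself).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE chat_history (
    id INTEGER PRIMARY KEY,
    role TEXT NOT NULL,
    content TEXT NOT NULL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE short_term_memory (
    id INTEGER PRIMARY KEY,
    content TEXT NOT NULL,
    expires_at TEXT            -- recent context expires
);
CREATE TABLE long_term_memory (
    id INTEGER PRIMARY KEY,
    content TEXT NOT NULL,
    category TEXT              -- fact / preference / skill / rule / context
);
CREATE TABLE memory_entities (
    id INTEGER PRIMARY KEY,
    memory_id INTEGER REFERENCES long_term_memory(id),
    entity TEXT NOT NULL
);
""")
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
print(sorted(tables))
```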
📁 Project Structure
```text
memoriai/
├── core/          # Main Memori class, database manager
├── agents/        # Memory processing with Pydantic
├── database/      # SQLite/PostgreSQL/MySQL support
├── integrations/  # LiteLLM, OpenAI, Anthropic
├── config/        # Configuration management
├── utils/         # Helpers, validation, logging
└── tools/         # Memory search tools
```
🚀 Examples
- Basic Usage - Simple memory setup with conscious ingestion
- Personal Assistant - AI assistant with intelligent memory
- Memory Retrieval - Function calling with memory tools
- Advanced Config - Production configuration
- Interactive Demo - Live conscious ingestion showcase
🤝 Contributing
See CONTRIBUTING.md for development setup and guidelines.
📄 License
MIT License - see LICENSE for details.
Made for developers who want their AI agents to remember and learn