The Open-Source Memory Layer for AI Agents & Multi-Agent Systems
Project description
memori
Open-Source Memory Engine for LLMs, AI Agents & Multi-Agent Systems
Make LLMs context-aware with human-like memory, dual-mode retrieval, and automatic context injection.
Philosophy
- A second memory for all your LLM work - never repeat context again
- Dual-mode memory injection - conscious short-term memory plus automatic intelligent search
- Flexible database connections - SQLite, PostgreSQL, and MySQL support
- Pydantic-based intelligence - structured memory processing with validation
- Simple, reliable architecture - just works out of the box
Quick Start
Install Memori:
pip install memorisdk
Example with LiteLLM
- Install LiteLLM:
pip install litellm
- Set OpenAI API Key:
export OPENAI_API_KEY="sk-your-openai-key-here"
- Run this Python script:
from memori import Memori
from litellm import completion
# Initialize memory
memori = Memori(conscious_ingest=True)
memori.enable()

print("=== First Conversation - Establishing Context ===")
response1 = completion(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "I'm working on a Python FastAPI project"
    }]
)
print("Assistant:", response1.choices[0].message.content)

print("\n" + "="*50)
print("=== Second Conversation - Memory Provides Context ===")
response2 = completion(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Help me add user authentication"
    }]
)
print("Assistant:", response2.choices[0].message.content)

print("\nNotice: Memori automatically knows about your FastAPI Python project!")
Ready to explore more?
- Examples - Basic usage patterns and code samples
- Framework Integrations - LangChain, Agno & CrewAI examples
- Interactive Demos - Live applications & tutorials
How It Works
1. Universal Recording
memori.enable()  # Records ALL LLM conversations automatically
2. Intelligent Processing
- Entity Extraction: Extracts people, technologies, projects
- Smart Categorization: Facts, preferences, skills, rules
- Pydantic Validation: Structured, type-safe memory storage
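To make "structured, type-safe memory storage" concrete, here is a minimal sketch of what a Pydantic model for a processed memory could look like. The class names, fields, and category values below are illustrative assumptions, not Memori's actual internal schema:
from enum import Enum
from typing import List
from pydantic import BaseModel, Field

class MemoryCategory(str, Enum):  # hypothetical category set for illustration
    fact = "fact"
    preference = "preference"
    skill = "skill"
    rule = "rule"
    context = "context"

class ProcessedMemory(BaseModel):  # illustrative only, not Memori's real class
    category: MemoryCategory
    summary: str = Field(description="One-line distillation of the conversation")
    entities: List[str] = Field(default_factory=list, description="People, technologies, projects")
    importance: float = Field(ge=0.0, le=1.0, description="Signal used when promoting memories")

# Validation rejects malformed LLM output instead of silently storing it
memory = ProcessedMemory(
    category="skill",
    summary="User is experienced with FastAPI",
    entities=["FastAPI", "Python"],
    importance=0.8,
)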
3. Dual Memory Modes
Conscious Mode - Short-Term Working Memory
conscious_ingest=True # One-shot short-term memory injection
- At Startup: Conscious agent analyzes long-term memory patterns
- Memory Promotion: Moves essential conversations to short-term storage
- One-Shot Injection: Injects working memory once at conversation start
- Like Human Short-Term Memory: Names, current projects, preferences readily available
Auto Mode - Dynamic Database Search
auto_ingest=True # Continuous intelligent memory retrieval
- Every LLM Call: Retrieval agent analyzes user query intelligently
- Full Database Search: Searches through entire memory database
- Context-Aware: Injects relevant memories based on current conversation
- Performance Optimized: Caching, async processing, background threads
Memory Modes Explained
Conscious Mode - Short-Term Working Memory
# Mimics human conscious memory - essential info readily available
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    conscious_ingest=True,  # Short-term working memory
    openai_api_key="sk-..."
)
How Conscious Mode Works:
- At Startup: Conscious agent analyzes long-term memory patterns
- Essential Selection: Promotes 5-10 most important conversations to short-term
- One-Shot Injection: Injects this working memory once at conversation start
- No Repeats: Won't inject again during the same session
Auto Mode - Dynamic Intelligent Search
# Searches entire database dynamically based on user queries
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    auto_ingest=True,  # Smart database search
    openai_api_key="sk-..."
)
How Auto Mode Works:
- Every LLM Call: Retrieval agent analyzes user input
- Query Planning: Uses AI to understand what memories are needed
- Smart Search: Searches through entire database (short-term + long-term)
- Context Injection: Injects 3-5 most relevant memories per call
Combined Mode - Best of Both Worlds
# Get both working memory AND dynamic search
memori = Memori(
    conscious_ingest=True,  # Working memory once
    auto_ingest=True,       # Dynamic search every call
    openai_api_key="sk-..."
)
Intelligence Layers:
- Memory Agent - Processes every conversation with Pydantic structured outputs
- Conscious Agent - Analyzes patterns, promotes long-term → short-term memories
- Retrieval Agent - Intelligently searches and selects relevant context
What gets prioritized in Conscious Mode:
- Personal Identity: Your name, role, location, basic info
- Preferences & Habits: What you like, work patterns, routines
- Skills & Tools: Technologies you use, expertise areas
- Current Projects: Ongoing work, learning goals
- Relationships: Important people, colleagues, connections
- Repeated References: Information you mention frequently
Memory Types
| Type | Purpose | Example | Auto-Promoted |
|---|---|---|---|
| Facts | Objective information | "I use PostgreSQL for databases" | ✅ High frequency |
| Preferences | User choices | "I prefer clean, readable code" | ✅ Personal identity |
| Skills | Abilities & knowledge | "Experienced with FastAPI" | ✅ Expertise areas |
| Rules | Constraints & guidelines | "Always write tests first" | ✅ Work patterns |
| Context | Session information | "Working on e-commerce project" | ✅ Current projects |
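These categories surface through the retrieval API shown later under Context Control. Assuming a populated database and that categories are passed in singular form (as in the search_memories_by_category("skill", ...) example later), pulling a few types separately might look like this:
# Assumes an enabled Memori instance with recorded conversations
facts = memori.search_memories_by_category("fact", limit=5)
preferences = memori.search_memories_by_category("preference", limit=5)
rules = memori.search_memories_by_category("rule", limit=5)

# Assuming list results; the exact record shape depends on the Memori version
for item in facts + preferences + rules:
    print(item)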
Configuration
Simple Setup
from memori import Memori

# Conscious mode - Short-term working memory
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    template="basic",
    conscious_ingest=True,  # One-shot context injection
    openai_api_key="sk-..."
)

# Auto mode - Dynamic database search
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    auto_ingest=True,  # Continuous memory retrieval
    openai_api_key="sk-..."
)

# Combined mode - Best of both worlds
memori = Memori(
    conscious_ingest=True,  # Working memory +
    auto_ingest=True,       # Dynamic search
    openai_api_key="sk-..."
)
Advanced Configuration
from memori import Memori, ConfigManager
# Load from memori.json or environment
config = ConfigManager()
config.auto_load()
memori = Memori()
memori.enable()
Create memori.json:
{
  "database": {
    "connection_string": "postgresql://user:pass@localhost/memori"
  },
  "agents": {
    "openai_api_key": "sk-...",
    "conscious_ingest": true,
    "auto_ingest": false
  },
  "memory": {
    "namespace": "my_project",
    "retention_policy": "30_days"
  }
}
Universal Integration
Works with ANY LLM library:
memori.enable() # Enable universal recording
# LiteLLM (recommended)
from litellm import completion
completion(model="gpt-4", messages=[...])
# OpenAI
import openai
client = openai.OpenAI()
client.chat.completions.create(...)
# Anthropic
import anthropic
client = anthropic.Anthropic()
client.messages.create(...)
# All automatically recorded and contextualized!
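For instance, a complete version of the OpenAI path above might look like the following sketch. The model name and prompt are arbitrary; it assumes OPENAI_API_KEY is set in the environment and reuses the Quick Start setup:
import openai
from memori import Memori

memori = Memori(conscious_ingest=True)
memori.enable()  # Universal recording, as described above

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Remind me what stack my project uses"}],
)
print(response.choices[0].message.content)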
Memory Management
Automatic Background Analysis
# Automatic analysis every 6 hours (when conscious_ingest=True)
memori.enable() # Starts background conscious agent
# Manual analysis trigger
memori.trigger_conscious_analysis()
# Get essential conversations
essential = memori.get_essential_conversations(limit=5)
Memory Retrieval Tools
from memori.tools import create_memory_tool
# Create memory search tool for your LLM
memory_tool = create_memory_tool(memori)
# Use in function calling
tools = [memory_tool]
completion(model="gpt-4", messages=[...], tools=tools)
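The snippet above registers the tool but does not show the round trip. Here is a rough sketch of executing the model's tool call: it assumes the response follows the OpenAI tool-calling shape that LiteLLM mirrors, that the tool's argument is named "query", and that the tool object exposes an execute(query=...) method - check the Memori docs for the exact interface.
import json
from litellm import completion

messages = [{"role": "user", "content": "What database do I usually use?"}]
response = completion(model="gpt-4o-mini", messages=messages, tools=tools)

for call in (response.choices[0].message.tool_calls or []):
    args = json.loads(call.function.arguments)
    # execute() and the "query" argument are assumptions about the tool's interface
    memories = memory_tool.execute(query=args.get("query", ""))
    print("Recalled:", memories)
In a full loop you would append each result back to messages as a role "tool" message and call the model again so it can answer using the recalled context.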
Context Control
# Get relevant context for a query
context = memori.retrieve_context("Python testing", limit=5)
# Returns: 3 essential + 2 specific memories
# Search by category
skills = memori.search_memories_by_category("skill", limit=10)
# Get memory statistics
stats = memori.get_memory_stats()
Database Schema
-- Core tables created automatically
chat_history            -- All conversations
short_term_memory       -- Recent context (expires)
long_term_memory        -- Permanent insights
rules_memory            -- User preferences
memory_entities         -- Extracted entities
memory_relationships    -- Entity connections
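Since the default backend is a plain SQLite file, you can inspect these tables with the standard library. This sketch only assumes the database path used in the earlier examples and the table names listed above:
import sqlite3

conn = sqlite3.connect("my_memory.db")  # path from the earlier examples
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print("Tables:", [t[0] for t in tables])

# Peek at recent long-term memories without assuming column names
for row in conn.execute("SELECT * FROM long_term_memory LIMIT 3"):
    print(row)
conn.close()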
Project Structure
memori/
├── core/          # Main Memori class, database manager
├── agents/        # Memory processing with Pydantic
├── database/      # SQLite/PostgreSQL/MySQL support
├── integrations/  # LiteLLM, OpenAI, Anthropic
├── config/        # Configuration management
├── utils/         # Helpers, validation, logging
└── tools/         # Memory search tools
Examples
- Basic Usage - Simple memory setup with conscious ingestion
- Personal Assistant - AI assistant with intelligent memory
- Memory Retrieval - Function calling with memory tools
- Advanced Config - Production configuration
- Interactive Demo - Live conscious ingestion showcase
Framework Integrations
Memori works seamlessly with popular AI frameworks:
| Framework | Description | Example | Features |
|---|---|---|---|
| Agno | Memory-enhanced agent framework integration with persistent conversations | Simple chat agent with memory search | Memory tools, conversation persistence, contextual responses |
| CrewAI | Multi-agent system with shared memory across agent interactions | Collaborative agents with memory | Agent coordination, shared memory, task-based workflows |
| Digital Ocean AI | Memory-enhanced customer support using Digital Ocean's AI platform | Customer support assistant with conversation history | Context injection, session continuity, support analytics |
| LangChain | Enterprise-grade agent framework with advanced memory integration | AI assistant with LangChain tools and memory | Custom tools, agent executors, memory persistence, error handling |
| Swarms | Multi-agent system framework with persistent memory capabilities | Memory-enhanced Swarms agents with auto/conscious ingestion | Agent memory persistence, multi-agent coordination, contextual awareness |
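For a flavor of what these integrations involve, here is a rough sketch of exposing Memori's retrieval to a LangChain agent as a tool. It only relies on the retrieve_context API shown above under Context Control; the Tool wiring is illustrative, and the linked framework examples show the full, supported patterns:
from langchain_core.tools import Tool

def search_memori(query: str) -> str:
    # Uses the retrieve_context API shown above; result shape is assumed iterable
    memories = memori.retrieve_context(query, limit=5)
    return "\n".join(str(m) for m in memories) or "No relevant memories found"

memory_search = Tool(
    name="memori_search",
    description="Search the user's long-term memory for relevant context",
    func=search_memori,
)
# Pass [memory_search] into your LangChain agent's tools list as usual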
Interactive Demos
Explore Memori's capabilities through these interactive demonstrations:
| Title | Description | Tools Used | Live Demo |
|---|---|---|---|
| Personal Diary Assistant | A comprehensive diary assistant with mood tracking, pattern analysis, and personalized recommendations. | Streamlit, LiteLLM, OpenAI, SQLite | Run Demo |
| Travel Planner Agent | Intelligent travel planning with CrewAI agents, real-time web search, and memory-based personalization. Plans complete itineraries with budget analysis. | CrewAI, Streamlit, OpenAI, SQLite | |
| Researcher Agent | Advanced AI research assistant with persistent memory, real-time web search, and comprehensive report generation. Builds upon previous research sessions. | Agno, Streamlit, OpenAI, ExaAI, SQLite | Run Demo |
Contributing
- See CONTRIBUTING.md for development setup and guidelines.
- Community: Discord
License
MIT License - see LICENSE for details.
Made for developers who want their AI agents to remember and learn
Download files
File details
Details for the file memorisdk-1.0.2.tar.gz.
File metadata
- Download URL: memorisdk-1.0.2.tar.gz
- Upload date:
- Size: 77.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5fbc2ac21e6f0239e381445b43acc12a0b85d7301aee0c811bbfde87bcb63701 |
| MD5 | 3e9680b608f759a9babfd515484a41f4 |
| BLAKE2b-256 | 8e9829fef1b525774d4a735fc93ff807ba02ea5dcbf2f443e71d0d77cf899670 |
Provenance
The following attestation bundles were made for memorisdk-1.0.2.tar.gz:
Publisher: release.yml on GibsonAI/memori
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: memorisdk-1.0.2.tar.gz
- Subject digest: 5fbc2ac21e6f0239e381445b43acc12a0b85d7301aee0c811bbfde87bcb63701
- Sigstore transparency entry: 421024986
- Sigstore integration time:
- Permalink: GibsonAI/memori@e7e1d906c450659738e63d4c2c84ff7e74a6e0da
- Branch / Tag: refs/heads/main
- Owner: https://github.com/GibsonAI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@e7e1d906c450659738e63d4c2c84ff7e74a6e0da
- Trigger Event: workflow_dispatch
File details
Details for the file memorisdk-1.0.2-py3-none-any.whl.
File metadata
- Download URL: memorisdk-1.0.2-py3-none-any.whl
- Upload date:
- Size: 86.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 67dd5c267a3daed4692bd44fcb455fe0c6acbb6a257d5f8da7f100cb8e9e206e |
| MD5 | aea93b22d4f7361517e7e1c12b46fcda |
| BLAKE2b-256 | 7db0b850986ede6dcb4431c9d1927870e7bda38c4ada87e063a3960f38f79e23 |
Provenance
The following attestation bundles were made for memorisdk-1.0.2-py3-none-any.whl:
Publisher: release.yml on GibsonAI/memori
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: memorisdk-1.0.2-py3-none-any.whl
- Subject digest: 67dd5c267a3daed4692bd44fcb455fe0c6acbb6a257d5f8da7f100cb8e9e206e
- Sigstore transparency entry: 421025021
- Sigstore integration time:
- Permalink: GibsonAI/memori@e7e1d906c450659738e63d4c2c84ff7e74a6e0da
- Branch / Tag: refs/heads/main
- Owner: https://github.com/GibsonAI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@e7e1d906c450659738e63d4c2c84ff7e74a6e0da
- Trigger Event: workflow_dispatch