
The Open-Source Memory Layer for AI Agents & Multi-Agent Systems

Project description

GibsonAI

memori

An open-source SQL-native memory engine for AI

From Postgres to MySQL, Memori plugs into the SQL databases you already use. Simple setup, infinite scale without new infrastructure.

Learn more · Join Discord

PyPI version Downloads License: MIT Python 3.8+


What is Memori

Memori uses structured entity extraction, relationship mapping, and SQL-based retrieval to create transparent, portable, and queryable AI memory. Memori runs multiple agents that work together to promote essential long-term memories to short-term storage for faster context injection.

With a single line of code, memori.enable(), any LLM gains the ability to remember conversations, learn from interactions, and maintain context across sessions. The entire memory system is stored in a standard SQLite database (or PostgreSQL/MySQL for enterprise deployments), making it fully portable, auditable, and owned by the user.

Key Differentiators

  • Radical Simplicity: One line to enable memory for any LLM framework (OpenAI, Anthropic, LiteLLM, LangChain)
  • True Data Ownership: Memory stored in standard SQL databases that users fully control
  • Complete Transparency: Every memory decision is queryable with SQL and fully explainable
  • Zero Vendor Lock-in: Export your entire memory as a SQLite file and move anywhere
  • Cost Efficiency: 80-90% cheaper than vector database solutions at scale
  • Compliance Ready: SQL-based storage enables audit trails, data residency, and regulatory compliance

⚡ Quick Start

Install Memori:

pip install memorisdk

Example with OpenAI

  1. Install OpenAI:
pip install openai
  2. Set your OpenAI API key:
export OPENAI_API_KEY="sk-your-openai-key-here"
  3. Run this Python script:
from memori import Memori
from openai import OpenAI

# Initialize OpenAI client
openai_client = OpenAI()

# Initialize memory
memori = Memori(conscious_ingest=True)
memori.enable()

print("=== First Conversation - Establishing Context ===")
response1 = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user", 
        "content": "I'm working on a Python FastAPI project"
    }]
)

print("Assistant:", response1.choices[0].message.content)
print("\n" + "="*50)
print("=== Second Conversation - Memory Provides Context ===")

response2 = openai_client.chat.completions.create(
    model="gpt-4o-mini", 
    messages=[{
        "role": "user",
        "content": "Help me add user authentication"
    }]
)
print("Assistant:", response2.choices[0].message.content)
print("\n💡 Notice: Memori automatically knows about your FastAPI Python project!")

By default, Memori uses an in-memory SQLite database. Get a FREE serverless database instance on the GibsonAI platform.

🚀 Ready to explore more?


🧠 How It Works

1. Universal Recording

memori.enable()  # Records ALL LLM conversations automatically

2. Intelligent Processing

  • Entity Extraction: Extracts people, technologies, projects
  • Smart Categorization: Facts, preferences, skills, rules
  • Pydantic Validation: Structured, type-safe memory storage
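As an illustration of what "structured, type-safe memory storage" means, a hypothetical Pydantic schema for one extracted memory might look like the following. The field and model names here are assumptions for the sketch, not Memori's actual internal models:

```python
from enum import Enum
from typing import List
from pydantic import BaseModel, Field

# Hypothetical schema: Memori's real internal models are not shown here,
# but a Pydantic-validated memory record could take this shape.
class MemoryCategory(str, Enum):
    FACT = "fact"
    PREFERENCE = "preference"
    SKILL = "skill"
    RULE = "rule"
    CONTEXT = "context"

class ExtractedMemory(BaseModel):
    category: MemoryCategory
    content: str = Field(min_length=1)
    entities: List[str] = []  # people, technologies, projects
    confidence: float = Field(ge=0.0, le=1.0)

memory = ExtractedMemory(
    category="skill",  # string is coerced to MemoryCategory.SKILL
    content="Experienced with FastAPI",
    entities=["FastAPI"],
    confidence=0.9,
)
print(memory.category.value)  # skill
```

Validation rejects malformed records (an unknown category or an out-of-range confidence raises an error), which is the practical benefit of structured outputs over free-form text.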

3. Dual Memory Modes

🧠 Conscious Mode - Short-Term Working Memory

conscious_ingest=True  # One-shot short-term memory injection
  • At Startup: Conscious agent analyzes long-term memory patterns
  • Memory Promotion: Moves essential conversations to short-term storage
  • One-Shot Injection: Injects working memory once at conversation start
  • Like Human Short-Term Memory: Names, current projects, preferences readily available

๐Ÿ” Auto Mode - Dynamic Database Search

auto_ingest=True  # Continuous intelligent memory retrieval
  • Every LLM Call: Retrieval agent analyzes user query intelligently
  • Full Database Search: Searches through entire memory database
  • Context-Aware: Injects relevant memories based on current conversation
  • Performance Optimized: Caching, async processing, background threads

🧠 Memory Modes Explained

Conscious Mode - Short-Term Working Memory

# Mimics human conscious memory - essential info readily available
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    conscious_ingest=True,  # 🧠 Short-term working memory
    openai_api_key="sk-..."
)

How Conscious Mode Works:

  1. At Startup: Conscious agent analyzes long-term memory patterns
  2. Essential Selection: Promotes 5-10 most important conversations to short-term
  3. One-Shot Injection: Injects this working memory once at conversation start
  4. No Repeats: Won't inject again during the same session
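The promotion step can be pictured as ranking long-term memories and copying the winners into working memory. This toy sketch uses a frequency-and-recency heuristic that is purely illustrative, not Memori's actual scoring:

```python
from datetime import datetime, timedelta

now = datetime.now()

# Toy long-term records: (content, mention_count, last_seen)
long_term = [
    ("User's name is Alice", 12, now - timedelta(days=1)),
    ("Prefers pytest over unittest", 5, now - timedelta(days=3)),
    ("Asked about Docker once", 1, now - timedelta(days=40)),
]

def score(mentions, last_seen):
    """Illustrative heuristic: mention frequency weighted by recency."""
    age_days = (now - last_seen).days + 1
    return mentions / age_days

# Promote the top-scoring memories into short-term (working) memory
short_term = sorted(long_term, key=lambda m: score(m[1], m[2]), reverse=True)[:2]
print([m[0] for m in short_term])
```

Frequently repeated, recently referenced information (a name, the current project) wins out over one-off mentions, which matches the prioritization list below.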

Auto Mode - Dynamic Intelligent Search

# Searches entire database dynamically based on user queries
memori = Memori(
    database_connect="sqlite:///my_memory.db", 
    auto_ingest=True,  # 🔍 Smart database search
    openai_api_key="sk-..."
)

How Auto Mode Works:

  1. Every LLM Call: Retrieval agent analyzes user input
  2. Query Planning: Uses AI to understand what memories are needed
  3. Smart Search: Searches through entire database (short-term + long-term)
  4. Context Injection: Injects 3-5 most relevant memories per call
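The injection step itself is easy to picture: retrieved memories are folded into the message list before the provider call. A minimal sketch of the shape (the exact prompt format Memori uses is not specified here):

```python
# Illustrative only: shows the shape of context injection, not Memori internals.
retrieved = [
    "User is building a FastAPI project",
    "User prefers PostgreSQL",
    "User writes tests first",
]

def inject_context(messages, memories):
    """Prepend retrieved memories as a system message before the LLM call."""
    context = "Relevant memories:\n" + "\n".join(f"- {m}" for m in memories)
    return [{"role": "system", "content": context}] + messages

messages = inject_context(
    [{"role": "user", "content": "Help me add user authentication"}],
    retrieved,
)
print(messages[0]["role"])  # system
```

The user's message is untouched; the model simply sees a small, relevant slice of memory ahead of it.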

Combined Mode - Best of Both Worlds

# Get both working memory AND dynamic search
memori = Memori(
    conscious_ingest=True,  # Working memory once
    auto_ingest=True,       # Dynamic search every call
    openai_api_key="sk-..."
)

Intelligence Layers:

  1. Memory Agent - Processes every conversation with Pydantic structured outputs
  2. Conscious Agent - Analyzes patterns, promotes long-term → short-term memories
  3. Retrieval Agent - Intelligently searches and selects relevant context

What gets prioritized in Conscious Mode:

  • 👤 Personal Identity: Your name, role, location, basic info
  • ❤️ Preferences & Habits: What you like, work patterns, routines
  • 🛠️ Skills & Tools: Technologies you use, expertise areas
  • 📊 Current Projects: Ongoing work, learning goals
  • 🤝 Relationships: Important people, colleagues, connections
  • 🔄 Repeated References: Information you mention frequently

๐Ÿ—„๏ธ Memory Types

| Type | Purpose | Example | Auto-Promoted |
|------|---------|---------|---------------|
| Facts | Objective information | "I use PostgreSQL for databases" | ✅ High frequency |
| Preferences | User choices | "I prefer clean, readable code" | ✅ Personal identity |
| Skills | Abilities & knowledge | "Experienced with FastAPI" | ✅ Expertise areas |
| Rules | Constraints & guidelines | "Always write tests first" | ✅ Work patterns |
| Context | Session information | "Working on e-commerce project" | ✅ Current projects |

🔧 Configuration

Simple Setup

from memori import Memori

# Conscious mode - Short-term working memory
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    template="basic", 
    conscious_ingest=True,  # One-shot context injection
    openai_api_key="sk-..."
)

# Auto mode - Dynamic database search
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    auto_ingest=True,  # Continuous memory retrieval
    openai_api_key="sk-..."
)

# Combined mode - Best of both worlds
memori = Memori(
    conscious_ingest=True,  # Working memory + 
    auto_ingest=True,       # Dynamic search
    openai_api_key="sk-..."
)

Advanced Configuration

from memori import Memori, ConfigManager

# Load from memori.json or environment
config = ConfigManager()
config.auto_load()

memori = Memori()
memori.enable()

Create memori.json:

{
  "database": {
    "connection_string": "postgresql://user:pass@localhost/memori"
  },
  "agents": {
    "openai_api_key": "sk-...",
    "conscious_ingest": true,
    "auto_ingest": false
  },
  "memory": {
    "namespace": "my_project",
    "retention_policy": "30_days"
  }
}

🔌 Universal Integration

Works with ANY LLM library:

memori.enable()  # Enable universal recording

# OpenAI
from openai import OpenAI
client = OpenAI()
client.chat.completions.create(...)

# LiteLLM
from litellm import completion
completion(model="gpt-4", messages=[...])

# Anthropic  
import anthropic
client = anthropic.Anthropic()
client.messages.create(...)

# All automatically recorded and contextualized!

๐Ÿ› ๏ธ Memory Management

Automatic Background Analysis

# Automatic analysis every 6 hours (when conscious_ingest=True)
memori.enable()  # Starts background conscious agent

# Manual analysis trigger
memori.trigger_conscious_analysis()

# Get essential conversations
essential = memori.get_essential_conversations(limit=5)

Memory Retrieval Tools

from memori.tools import create_memory_tool

# Create memory search tool for your LLM
memory_tool = create_memory_tool(memori)

# Use in function calling
tools = [memory_tool]
completion(model="gpt-4", messages=[...], tools=tools)
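For reference, an OpenAI-style function-calling schema for a memory search tool looks roughly like the following. `create_memory_tool` builds the real, equivalent definition for you; the exact fields below are a hand-written illustration, not Memori's output:

```python
import json

# Hypothetical hand-written tool schema in the OpenAI tools format.
memory_tool = {
    "type": "function",
    "function": {
        "name": "search_memory",
        "description": "Search the user's stored memories for relevant context",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "What to search for"},
                "limit": {"type": "integer", "default": 5},
            },
            "required": ["query"],
        },
    },
}
print(json.dumps(memory_tool, indent=2))
```

When the model decides it needs older context, it emits a call to this tool, your code runs the search, and the results go back into the conversation as a tool message.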

Context Control

# Get relevant context for a query
context = memori.retrieve_context("Python testing", limit=5)
# Returns: 3 essential + 2 specific memories

# Search by category
skills = memori.search_memories_by_category("skill", limit=10)

# Get memory statistics
stats = memori.get_memory_stats()

📋 Database Schema

-- Core tables created automatically
chat_history        # All conversations
short_term_memory   # Recent context (expires)
long_term_memory    # Permanent insights  
rules_memory        # User preferences
memory_entities     # Extracted entities
memory_relationships # Entity connections
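Because everything lives in ordinary SQL tables, memories can be inspected with any SQLite client or a few lines of Python; no special API is required. The column names below are illustrative assumptions (check your own database's actual schema):

```python
import sqlite3

# Stand-in for a real Memori database; columns here are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE long_term_memory (id INTEGER PRIMARY KEY, category TEXT, content TEXT)"
)
conn.execute(
    "INSERT INTO long_term_memory (category, content) VALUES (?, ?)",
    ("skill", "Experienced with FastAPI"),
)

# Any memory decision is one query away
rows = conn.execute(
    "SELECT content FROM long_term_memory WHERE category = ?", ("skill",)
).fetchall()
print(rows[0][0])  # Experienced with FastAPI
```

This is the transparency claim in practice: the same `SELECT` works from an audit script, a BI tool, or the sqlite3 shell.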

๐Ÿ“ Project Structure

memori/
├── core/           # Main Memori class, database manager
├── agents/         # Memory processing with Pydantic
├── database/       # SQLite/PostgreSQL/MySQL support
├── integrations/   # LiteLLM, OpenAI, Anthropic
├── config/         # Configuration management
├── utils/          # Helpers, validation, logging
└── tools/          # Memory search tools

Examples

Framework Integrations

Memori works seamlessly with popular AI frameworks:

| Framework | Description | Example |
|-----------|-------------|---------|
| AgentOps | Track and monitor Memori memory operations with comprehensive observability | Memory operation tracking with AgentOps analytics |
| Agno | Memory-enhanced agent framework integration with persistent conversations | Simple chat agent with memory search |
| AWS Strands | Professional development coach with Strands SDK and persistent memory | Career coaching agent with goal tracking |
| Azure AI Foundry | Azure AI Foundry agents with persistent memory across conversations | Enterprise AI agents with Azure integration |
| CamelAI | Multi-agent communication framework with automatic memory recording and retrieval | Memory-enhanced chat agents with conversation continuity |
| CrewAI | Multi-agent system with shared memory across agent interactions | Collaborative agents with memory |
| Digital Ocean AI | Memory-enhanced customer support using Digital Ocean's AI platform | Customer support assistant with conversation history |
| LangChain | Enterprise-grade agent framework with advanced memory integration | AI assistant with LangChain tools and memory |
| OpenAI Agent | Memory-enhanced OpenAI Agent with function calling and user preference tracking | Interactive assistant with memory search and user info storage |
| Swarms | Multi-agent system framework with persistent memory capabilities | Memory-enhanced Swarms agents with auto/conscious ingestion |

Interactive Demos

Explore Memori's capabilities through these interactive demonstrations:

| Title | Description | Tools Used | Live Demo |
|-------|-------------|------------|-----------|
| 🌟 Personal Diary Assistant | A comprehensive diary assistant with mood tracking, pattern analysis, and personalized recommendations | Streamlit, LiteLLM, OpenAI, SQLite | Run Demo |
| 🌍 Travel Planner Agent | Intelligent travel planning with CrewAI agents, real-time web search, and memory-based personalization; plans complete itineraries with budget analysis | CrewAI, Streamlit, OpenAI, SQLite | |
| 🧑‍🔬 Researcher Agent | Advanced AI research assistant with persistent memory, real-time web search, and comprehensive report generation; builds upon previous research sessions | Agno, Streamlit, OpenAI, ExaAI, SQLite | Run Demo |

๐Ÿค Contributing

📄 License

MIT License - see LICENSE for details.


Made for developers who want their AI agents to remember and learn

Project details


Download files

Download the file for your platform.

Source Distribution

memorisdk-2.1.1.tar.gz (189.2 kB)

Uploaded Source

Built Distribution


memorisdk-2.1.1-py3-none-any.whl (217.6 kB)

Uploaded Python 3

File details

Details for the file memorisdk-2.1.1.tar.gz.

File metadata

  • Download URL: memorisdk-2.1.1.tar.gz
  • Upload date:
  • Size: 189.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for memorisdk-2.1.1.tar.gz
| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 81b4c7dc6df72dbb404789b17ccfa6200f4d957866b7f8492b78ed0597762e58 |
| MD5 | 72302496580c9f7a80e5a79ea59f78be |
| BLAKE2b-256 | c073d41cff6555328847616f1e2ca0cf6b34ff3d8a85d8f9caee7df4e79990dc |


Provenance

The following attestation bundles were made for memorisdk-2.1.1.tar.gz:

Publisher: release.yml on GibsonAI/memori

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file memorisdk-2.1.1-py3-none-any.whl.

File metadata

  • Download URL: memorisdk-2.1.1-py3-none-any.whl
  • Upload date:
  • Size: 217.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for memorisdk-2.1.1-py3-none-any.whl
| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 227363edc9e853711cd0b0706a8aea883b83524c218a273279e16264abffc174 |
| MD5 | 62406fc6599a62f7184e4b913b26dcf0 |
| BLAKE2b-256 | 9bd708a238c770e18b972ac4aa5cd20e6fdb2f8f0a8c6075cf115f2df6416278 |


Provenance

The following attestation bundles were made for memorisdk-2.1.1-py3-none-any.whl:

Publisher: release.yml on GibsonAI/memori

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
