
The Open-Source Memory Layer for AI Agents & Multi-Agent Systems

Project description

GibsonAI

memori

Open-Source Memory Engine for LLMs, AI Agents & Multi-Agent Systems

Make LLMs context-aware with human-like memory, dual-mode retrieval, and automatic context injection.

Learn more · Join Discord

License: MIT · Python 3.8+


🎯 Philosophy

  • Second-memory for all your LLM work - Never repeat context again
  • Dual-mode memory injection - Conscious short-term memory + Auto intelligent search
  • Flexible database connections - SQLite, PostgreSQL, MySQL support
  • Pydantic-based intelligence - Structured memory processing with validation
  • Simple, reliable architecture - Just works out of the box

⚡ Quick Start

Install Memori:

pip install memorisdk

Example with LiteLLM

  1. Install LiteLLM:
pip install litellm
  2. Set your OpenAI API key:
export OPENAI_API_KEY="sk-your-openai-key-here"
  3. Run this Python script:
from memori import Memori
from litellm import completion

# Initialize memory
memori = Memori(conscious_ingest=True)
memori.enable()

print("=== First Conversation - Establishing Context ===")
response1 = completion(
    model="gpt-4o-mini",
    messages=[{
        "role": "user", 
        "content": "I'm working on a Python FastAPI project"
    }]
)

print("Assistant:", response1.choices[0].message.content)
print("\n" + "="*50)
print("=== Second Conversation - Memory Provides Context ===")

response2 = completion(
    model="gpt-4o-mini", 
    messages=[{
        "role": "user",
        "content": "Help me add user authentication"
    }]
)
print("Assistant:", response2.choices[0].message.content)
print("\n💡 Notice: Memori automatically knows about your FastAPI Python project!")

🚀 Ready to explore more?


🧠 How It Works

1. Universal Recording

memori.enable()  # Records ALL LLM conversations automatically

2. Intelligent Processing

  • Entity Extraction: Extracts people, technologies, projects
  • Smart Categorization: Facts, preferences, skills, rules
  • Pydantic Validation: Structured, type-safe memory storage
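The processing pipeline above can be pictured with a small Pydantic model. This is a hedged sketch: the field names and category set are illustrative, not Memori's actual internal schema.

```python
from typing import List, Literal

from pydantic import BaseModel

class MemoryRecord(BaseModel):
    """Illustrative structured memory record (not Memori's real model)."""
    category: Literal["fact", "preference", "skill", "rule", "context"]
    content: str
    entities: List[str] = []  # extracted people, technologies, projects

# A conversation turn distilled into a validated, type-safe record
rec = MemoryRecord(
    category="skill",
    content="Experienced with FastAPI",
    entities=["FastAPI"],
)
```

Passing an unknown `category` raises a `ValidationError`, which is the point of the Pydantic layer: malformed memories are rejected before they reach storage.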

3. Dual Memory Modes

🧠 Conscious Mode - Short-Term Working Memory

conscious_ingest=True  # One-shot short-term memory injection
  • At Startup: Conscious agent analyzes long-term memory patterns
  • Memory Promotion: Moves essential conversations to short-term storage
  • One-Shot Injection: Injects working memory once at conversation start
  • Like Human Short-Term Memory: Names, current projects, preferences readily available

๐Ÿ” Auto Mode - Dynamic Database Search

auto_ingest=True  # Continuous intelligent memory retrieval
  • Every LLM Call: Retrieval agent analyzes user query intelligently
  • Full Database Search: Searches through entire memory database
  • Context-Aware: Injects relevant memories based on current conversation
  • Performance Optimized: Caching, async processing, background threads

🧠 Memory Modes Explained

Conscious Mode - Short-Term Working Memory

# Mimics human conscious memory - essential info readily available
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    conscious_ingest=True,  # 🧠 Short-term working memory
    openai_api_key="sk-..."
)

How Conscious Mode Works:

  1. At Startup: Conscious agent analyzes long-term memory patterns
  2. Essential Selection: Promotes 5-10 most important conversations to short-term
  3. One-Shot Injection: Injects this working memory once at conversation start
  4. No Repeats: Won't inject again during the same session
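The promotion step above can be pictured with a small, self-contained sketch. This is an illustrative approximation, not Memori's actual algorithm: it scores each stored conversation by how often its topics recur across the history and promotes the top few to "working memory" (the `topics` field and scoring rule are assumptions for the example).

```python
from collections import Counter

def promote_essential(conversations, top_n=5):
    """Score conversations by how often their topics recur, keep the top_n."""
    topic_counts = Counter(t for c in conversations for t in c["topics"])
    scored = sorted(
        conversations,
        key=lambda c: sum(topic_counts[t] for t in c["topics"]),
        reverse=True,
    )
    return scored[:top_n]

history = [
    {"id": 1, "topics": ["fastapi", "auth"]},   # recurring project topic
    {"id": 2, "topics": ["fastapi"]},
    {"id": 3, "topics": ["vacation"]},          # one-off mention
]
short_term = promote_essential(history, top_n=2)
# → conversations 1 and 2 promoted (fastapi recurs)
```

A real conscious agent would weigh recency, entities, and category as well, but the shape is the same: analyze long-term memory, select essentials, inject once.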

Auto Mode - Dynamic Intelligent Search

# Searches entire database dynamically based on user queries
memori = Memori(
    database_connect="sqlite:///my_memory.db", 
    auto_ingest=True,  # 🔍 Smart database search
    openai_api_key="sk-..."
)

How Auto Mode Works:

  1. Every LLM Call: Retrieval agent analyzes user input
  2. Query Planning: Uses AI to understand what memories are needed
  3. Smart Search: Searches through entire database (short-term + long-term)
  4. Context Injection: Injects 3-5 most relevant memories per call
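The retrieval loop above can be sketched with simple word-overlap ranking. This is a hedged stand-in for the AI-driven retrieval agent (Memori's actual agent uses an LLM to plan the query; the scoring here is deliberately naive):

```python
def retrieve_context(query, memories, k=3):
    """Rank stored memories by word overlap with the query; keep the top k."""
    query_words = set(query.lower().split())
    scored = [(len(query_words & set(m.lower().split())), m) for m in memories]
    relevant = [pair for pair in scored if pair[0] > 0]
    relevant.sort(key=lambda pair: pair[0], reverse=True)
    return [memory for _, memory in relevant[:k]]

memories = [
    "user prefers pytest for python testing",
    "project uses fastapi and postgresql",
    "user likes dark roast coffee",
]
hits = retrieve_context("python testing help", memories)
# → only the pytest memory matches the query's words
```

Swapping the overlap score for an embedding similarity or an LLM-planned database query gives the production version of the same idea: every call, rank everything, inject only the few most relevant memories.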

Combined Mode - Best of Both Worlds

# Get both working memory AND dynamic search
memori = Memori(
    conscious_ingest=True,  # Working memory once
    auto_ingest=True,       # Dynamic search every call
    openai_api_key="sk-..."
)

Intelligence Layers:

  1. Memory Agent - Processes every conversation with Pydantic structured outputs
  2. Conscious Agent - Analyzes patterns, promotes long-term → short-term memories
  3. Retrieval Agent - Intelligently searches and selects relevant context

What gets prioritized in Conscious Mode:

  • 👤 Personal Identity: Your name, role, location, basic info
  • ❤️ Preferences & Habits: What you like, work patterns, routines
  • 🛠️ Skills & Tools: Technologies you use, expertise areas
  • 📊 Current Projects: Ongoing work, learning goals
  • 🤝 Relationships: Important people, colleagues, connections
  • 🔄 Repeated References: Information you mention frequently

๐Ÿ—„๏ธ Memory Types

| Type | Purpose | Example | Auto-Promoted |
|------|---------|---------|---------------|
| Facts | Objective information | "I use PostgreSQL for databases" | ✅ High frequency |
| Preferences | User choices | "I prefer clean, readable code" | ✅ Personal identity |
| Skills | Abilities & knowledge | "Experienced with FastAPI" | ✅ Expertise areas |
| Rules | Constraints & guidelines | "Always write tests first" | ✅ Work patterns |
| Context | Session information | "Working on e-commerce project" | ✅ Current projects |
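As a quick mental model, the memory types above are categorized records; a minimal sketch using the table's own examples (plain tuples here, not Memori's storage format):

```python
from collections import defaultdict

# Illustrative records mirroring the Memory Types table
records = [
    ("fact", "I use PostgreSQL for databases"),
    ("preference", "I prefer clean, readable code"),
    ("skill", "Experienced with FastAPI"),
    ("rule", "Always write tests first"),
    ("context", "Working on e-commerce project"),
]

# Group by type, the way category-based search would slice them
by_type = defaultdict(list)
for memory_type, content in records:
    by_type[memory_type].append(content)
```

Category-scoped retrieval (like `search_memories_by_category` further below) is then just a lookup into one of these buckets.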

🔧 Configuration

Simple Setup

from memori import Memori

# Conscious mode - Short-term working memory
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    template="basic", 
    conscious_ingest=True,  # One-shot context injection
    openai_api_key="sk-..."
)

# Auto mode - Dynamic database search
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    auto_ingest=True,  # Continuous memory retrieval
    openai_api_key="sk-..."
)

# Combined mode - Best of both worlds
memori = Memori(
    conscious_ingest=True,  # Working memory + 
    auto_ingest=True,       # Dynamic search
    openai_api_key="sk-..."
)

Advanced Configuration

from memori import Memori, ConfigManager

# Load from memori.json or environment
config = ConfigManager()
config.auto_load()

memori = Memori()
memori.enable()

Create memori.json:

{
  "database": {
    "connection_string": "postgresql://user:pass@localhost/memori"
  },
  "agents": {
    "openai_api_key": "sk-...",
    "conscious_ingest": true,
    "auto_ingest": false
  },
  "memory": {
    "namespace": "my_project",
    "retention_policy": "30_days"
  }
}

🔌 Universal Integration

Works with ANY LLM library:

memori.enable()  # Enable universal recording

# LiteLLM (recommended)
from litellm import completion
completion(model="gpt-4", messages=[...])

# OpenAI
import openai
client = openai.OpenAI()
client.chat.completions.create(...)

# Anthropic  
import anthropic
client = anthropic.Anthropic()
client.messages.create(...)

# All automatically recorded and contextualized!

๐Ÿ› ๏ธ Memory Management

Automatic Background Analysis

# Automatic analysis every 6 hours (when conscious_ingest=True)
memori.enable()  # Starts background conscious agent

# Manual analysis trigger
memori.trigger_conscious_analysis()

# Get essential conversations
essential = memori.get_essential_conversations(limit=5)

Memory Retrieval Tools

from memori.tools import create_memory_tool

# Create memory search tool for your LLM
memory_tool = create_memory_tool(memori)

# Use in function calling
tools = [memory_tool]
completion(model="gpt-4", messages=[...], tools=tools)

Context Control

# Get relevant context for a query
context = memori.retrieve_context("Python testing", limit=5)
# Returns: 3 essential + 2 specific memories

# Search by category
skills = memori.search_memories_by_category("skill", limit=10)

# Get memory statistics
stats = memori.get_memory_stats()

📋 Database Schema

-- Core tables created automatically
chat_history        # All conversations
short_term_memory   # Recent context (expires)
long_term_memory    # Permanent insights  
rules_memory        # User preferences
memory_entities     # Extracted entities
memory_relationships # Entity connections
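Since these tables live in an ordinary SQL database, you can inspect them with standard tooling. A hedged sketch using stdlib `sqlite3` (the `my_memory.db` path matches the earlier examples; a fresh file will simply list no tables yet):

```python
import sqlite3

# Open the SQLite file Memori was pointed at and list its tables
conn = sqlite3.connect("my_memory.db")
tables = [
    row[0]
    for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
    )
]
conn.close()
print(tables)  # e.g. chat_history, long_term_memory, ... once enabled
```

The same query works against PostgreSQL or MySQL via their own catalogs (`information_schema.tables`).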

๐Ÿ“ Project Structure

memori/
├── core/           # Main Memori class, database manager
├── agents/         # Memory processing with Pydantic
├── database/       # SQLite/PostgreSQL/MySQL support
├── integrations/   # LiteLLM, OpenAI, Anthropic
├── config/         # Configuration management
├── utils/          # Helpers, validation, logging
└── tools/          # Memory search tools

Examples

Framework Integrations

Memori works seamlessly with popular AI frameworks:

| Framework | Description | Example | Features |
|-----------|-------------|---------|----------|
| 🤖 Agno | Memory-enhanced agent framework integration with persistent conversations | Simple chat agent with memory search | Memory tools, conversation persistence, contextual responses |
| 👥 CrewAI | Multi-agent system with shared memory across agent interactions | Collaborative agents with memory | Agent coordination, shared memory, task-based workflows |
| 🌊 Digital Ocean AI | Memory-enhanced customer support using Digital Ocean's AI platform | Customer support assistant with conversation history | Context injection, session continuity, support analytics |
| 🔗 LangChain | Enterprise-grade agent framework with advanced memory integration | AI assistant with LangChain tools and memory | Custom tools, agent executors, memory persistence, error handling |
| 🚀 Swarms | Multi-agent system framework with persistent memory capabilities | Memory-enhanced Swarms agents with auto/conscious ingestion | Agent memory persistence, multi-agent coordination, contextual awareness |

Interactive Demos

Explore Memori's capabilities through these interactive demonstrations:

| Title | Description | Tools Used | Live Demo |
|-------|-------------|------------|-----------|
| 🌟 Personal Diary Assistant | A comprehensive diary assistant with mood tracking, pattern analysis, and personalized recommendations. | Streamlit, LiteLLM, OpenAI, SQLite | Run Demo |
| 🌍 Travel Planner Agent | Intelligent travel planning with CrewAI agents, real-time web search, and memory-based personalization. Plans complete itineraries with budget analysis. | CrewAI, Streamlit, OpenAI, SQLite | |
| 🧑‍🔬 Researcher Agent | Advanced AI research assistant with persistent memory, real-time web search, and comprehensive report generation. Builds upon previous research sessions. | Agno, Streamlit, OpenAI, ExaAI, SQLite | Run Demo |

๐Ÿค Contributing

📄 License

MIT License - see LICENSE for details.


Made for developers who want their AI agents to remember and learn



Download files

Download the file for your platform.

Source Distribution

memorisdk-1.0.2.tar.gz (77.9 kB)

Uploaded Source

Built Distribution


memorisdk-1.0.2-py3-none-any.whl (86.2 kB)

Uploaded Python 3

File details

Details for the file memorisdk-1.0.2.tar.gz.

File metadata

  • Download URL: memorisdk-1.0.2.tar.gz
  • Upload date:
  • Size: 77.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for memorisdk-1.0.2.tar.gz
| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 5fbc2ac21e6f0239e381445b43acc12a0b85d7301aee0c811bbfde87bcb63701 |
| MD5 | 3e9680b608f759a9babfd515484a41f4 |
| BLAKE2b-256 | 8e9829fef1b525774d4a735fc93ff807ba02ea5dcbf2f443e71d0d77cf899670 |


Provenance

The following attestation bundles were made for memorisdk-1.0.2.tar.gz:

Publisher: release.yml on GibsonAI/memori

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file memorisdk-1.0.2-py3-none-any.whl.

File metadata

  • Download URL: memorisdk-1.0.2-py3-none-any.whl
  • Upload date:
  • Size: 86.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for memorisdk-1.0.2-py3-none-any.whl
| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 67dd5c267a3daed4692bd44fcb455fe0c6acbb6a257d5f8da7f100cb8e9e206e |
| MD5 | aea93b22d4f7361517e7e1c12b46fcda |
| BLAKE2b-256 | 7db0b850986ede6dcb4431c9d1927870e7bda38c4ada87e063a3960f38f79e23 |


Provenance

The following attestation bundles were made for memorisdk-1.0.2-py3-none-any.whl:

Publisher: release.yml on GibsonAI/memori

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
