
Long-term memory for AI agents. Note: this package is a fork of the open-source project Mem0, with our own database adaptations.

Project description

Mem0 - The Memory Layer for Personalized AI


Learn more · Join Discord · Demo · OpenMemory


📄 Building Production-Ready AI Agents with Scalable Long-Term Memory →

⚡ +26% Accuracy vs. OpenAI Memory • 🚀 91% Faster • 💰 90% Fewer Tokens

🔥 Research Highlights

  • +26% Accuracy over OpenAI Memory on the LOCOMO benchmark
  • 91% Faster Responses than full-context, ensuring low latency at scale
  • 90% Lower Token Usage than full-context, cutting costs without compromise
  • Read the full paper

Introduction

Mem0 ("mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. It remembers user preferences, adapts to individual needs, and continuously learns over time—ideal for customer support chatbots, AI assistants, and autonomous systems.

Key Features & Use Cases

Core Capabilities:

  • Multi-Level Memory: Seamlessly retains User, Session, and Agent state with adaptive personalization
  • Developer-Friendly: Intuitive API, cross-platform SDKs, and a fully managed service option
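To make the multi-level scoping concrete, here is a toy in-memory illustration (not the real Mem0 implementation). The open-source `Memory.add` and `Memory.search` calls accept `user_id`, `agent_id`, and `run_id` keywords to scope memories to a user, an agent, or a single session; this sketch mimics that filtering behavior with plain dicts:

```python
from typing import Optional

class ToyMemoryStore:
    """Toy stand-in for multi-level scoping (illustration only, not the Mem0 API)."""

    def __init__(self):
        self._records = []

    def add(self, text: str, user_id: Optional[str] = None,
            run_id: Optional[str] = None, agent_id: Optional[str] = None):
        # Each record carries its scope identifiers alongside the memory text
        self._records.append({"memory": text, "user_id": user_id,
                              "run_id": run_id, "agent_id": agent_id})

    def search(self, **scope):
        # Return records whose scope fields match every supplied filter
        return [r for r in self._records
                if all(r.get(k) == v for k, v in scope.items())]

store = ToyMemoryStore()
store.add("Prefers vegetarian food", user_id="alice")                    # user-level
store.add("Planning a trip to Tokyo", user_id="alice", run_id="trip-1")  # session-level
store.add("Escalate billing issues to tier 2", agent_id="support-bot")   # agent-level

user_memories = store.search(user_id="alice")     # both of alice's records
session_memories = store.search(run_id="trip-1")  # only the session-scoped record
```

With the real library, the same scoping is expressed by passing these identifiers to `memory.add(...)` and `memory.search(...)`.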

Applications:

  • AI Assistants: Consistent, context-rich conversations
  • Customer Support: Recall past tickets and user history for tailored help
  • Healthcare: Track patient preferences and history for personalized care
  • Productivity & Gaming: Adaptive workflows and environments based on user behavior

🚀 Quickstart Guide

Choose between our hosted platform and the self-hosted package:

Hosted Platform

Get up and running in minutes with automatic updates, analytics, and enterprise security.

  1. Sign up on Mem0 Platform
  2. Embed the memory layer via SDK or API keys

Self-Hosted (Open Source)

Install the SDK via pip:

pip install mem0ai

Install the SDK via npm:

npm install mem0ai

Basic Usage

Mem0 requires an LLM to function; OpenAI's gpt-4o-mini is the default. It also supports a variety of other LLMs; for details, refer to our Supported LLMs documentation.

The first step is to instantiate the memory:

from openai import OpenAI
from mem0 import Memory

# Requires OPENAI_API_KEY to be set in the environment
openai_client = OpenAI()
memory = Memory()

def chat_with_memories(message: str, user_id: str = "default_user") -> str:
    # Retrieve relevant memories
    relevant_memories = memory.search(query=message, user_id=user_id, limit=3)
    memories_str = "\n".join(f"- {entry['memory']}" for entry in relevant_memories["results"])

    # Generate Assistant response
    system_prompt = f"You are a helpful AI. Answer the question based on query and memories.\nUser Memories:\n{memories_str}"
    messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": message}]
    response = openai_client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    assistant_response = response.choices[0].message.content

    # Create new memories from the conversation
    messages.append({"role": "assistant", "content": assistant_response})
    memory.add(messages, user_id=user_id)

    return assistant_response

def main():
    print("Chat with AI (type 'exit' to quit)")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == 'exit':
            print("Goodbye!")
            break
        print(f"AI: {chat_with_memories(user_input)}")

if __name__ == "__main__":
    main()

For detailed integration steps, see the Quickstart and API Reference.

🔗 Integrations & Demos

  • ChatGPT with Memory: Personalized chat powered by Mem0 (Live Demo)
  • Browser Extension: Store memories across ChatGPT, Perplexity, and Claude (Chrome Extension)
  • Langgraph Support: Build a customer bot with Langgraph + Mem0 (Guide)
  • CrewAI Integration: Tailor CrewAI outputs with Mem0 (Example)

📚 Documentation & Support

Citation

We now have a paper you can cite:

@article{mem0,
  title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},
  author={Chhikara, Prateek and Khant, Dev and Aryan, Saket and Singh, Taranjeet and Yadav, Deshraj},
  journal={arXiv preprint arXiv:2504.19413},
  year={2025}
}

⚖️ License

Apache 2.0 — see the LICENSE file for details.

Project details


Download files

Download the file for your platform.

Source Distribution

mem0_aliyun_nosql-0.1.6.tar.gz (134.4 kB)

Uploaded Source

Built Distribution


mem0_aliyun_nosql-0.1.6-py3-none-any.whl (205.3 kB)

Uploaded Python 3

File details

Details for the file mem0_aliyun_nosql-0.1.6.tar.gz.

File metadata

  • Download URL: mem0_aliyun_nosql-0.1.6.tar.gz
  • Upload date:
  • Size: 134.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes

Hashes for mem0_aliyun_nosql-0.1.6.tar.gz

  • SHA256: a1e6db151b9ea6c84546bd9b9687266854a70bb09272beae0a3075cc5983faea
  • MD5: ec6a7126fd2f1e6c9e9bdafa286d1d1e
  • BLAKE2b-256: cb4eae733eb3c0da0b80b2725a730bc730910e10ed97bf8cfecc8aa3a6d1e774

See more details on using hashes here.

File details

Details for the file mem0_aliyun_nosql-0.1.6-py3-none-any.whl.

File metadata

File hashes

Hashes for mem0_aliyun_nosql-0.1.6-py3-none-any.whl

  • SHA256: 63f000567c73d7b2cf871b6573f953050927dd09e03916b4c260fc8336a86e5d
  • MD5: 69fe81530b2ed632fcb77fea79deda6d
  • BLAKE2b-256: b965d13eb1bb59108c3b6cf50aae07487b69e300bf6ec29c333b68b4fc94381b

See more details on using hashes here.
