
LlamaIndex Memory Integration: Mem0

Installation

To install the required package, run:

%pip install llama-index-memory-mem0

Setup with Mem0 Platform

  1. Set your Mem0 Platform API key as an environment variable. You can replace <your-mem0-api-key> with your actual API key:

Note: You can obtain your Mem0 Platform API key from the Mem0 Platform.

import os

os.environ["MEM0_API_KEY"] = "<your-mem0-api-key>"
  2. Import the necessary modules and create a Mem0Memory instance:
from llama_index.memory.mem0 import Mem0Memory

context = {"user_id": "user_1"}
memory = Mem0Memory.from_client(
    context=context,
    api_key="<your-mem0-api-key>",
    search_msg_limit=4,  # optional, default is 5
)

The Mem0 context identifies the user, agent, or conversation in Mem0. At least one of its fields must be set when constructing Mem0Memory. It can contain any of the following:

context = {
    "user_id": "user_1",
    "agent_id": "agent_1",
    "run_id": "run_1",
}
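Since at least one context field is required, here is a minimal sketch of the kind of check performed at construction time (the helper below is hypothetical, for illustration only, not part of the library):

```python
# Hypothetical helper illustrating the context requirement; not part of
# llama-index-memory-mem0 itself.
ALLOWED_KEYS = {"user_id", "agent_id", "run_id"}


def validate_context(context: dict) -> dict:
    """Ensure the Mem0 context carries at least one identifying field."""
    if not isinstance(context, dict):
        raise TypeError("context must be a dict")
    provided = {k: v for k, v in context.items() if k in ALLOWED_KEYS and v}
    if not provided:
        raise ValueError(
            "context must set at least one of: user_id, agent_id, run_id"
        )
    return provided


print(validate_context({"user_id": "user_1"}))  # {'user_id': 'user_1'}
```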

search_msg_limit is optional and defaults to 5. It is the number of messages from the chat history used for memory retrieval from Mem0. A higher limit provides more context for retrieval, but it also increases retrieval time and may surface irrelevant results.
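Conceptually, search_msg_limit caps how much of the recent chat history feeds the Mem0 retrieval query. A rough sketch of that trimming step (hypothetical, for illustration only, not the library's actual implementation):

```python
# Hypothetical illustration of what search_msg_limit=N implies:
# only the last N chat messages contribute to the memory search query.
def build_search_query(chat_history: list[str], search_msg_limit: int = 5) -> str:
    """Join the last `search_msg_limit` messages into one retrieval query."""
    recent = chat_history[-search_msg_limit:]
    return " ".join(recent)


history = ["msg1", "msg2", "msg3", "msg4", "msg5", "msg6"]
print(build_search_query(history, search_msg_limit=4))  # msg3 msg4 msg5 msg6
```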

Setup with Mem0 OSS

  1. Set up Mem0 OSS by providing configuration details:

Note: To know more about Mem0 OSS, read Mem0 OSS Quickstart.

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test_9",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 1536,  # Change this according to your local model's dimensions
        },
    },
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o",
            "temperature": 0.2,
            "max_tokens": 1500,
        },
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "version": "v1.1",
}
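Note that embedding_model_dims must match the output dimension of the configured embedder, or vector inserts into the store will fail. For reference, the default output dimensions of OpenAI's embedding models (per OpenAI's documentation):

```python
# Output dimensions of OpenAI embedding models; set embedding_model_dims
# in the vector_store config to match the "embedder" model you configure.
OPENAI_EMBEDDING_DIMS = {
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
    "text-embedding-ada-002": 1536,
}

print(OPENAI_EMBEDDING_DIMS["text-embedding-3-small"])  # 1536
```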
  2. Create a Mem0Memory instance:
memory = Mem0Memory.from_config(
    context=context,
    config=config,
    search_msg_limit=4,  # optional, default is 5
)

Basic Usage

Currently, Mem0 Memory is supported in agents and chat engines.

Initialize the LLM

import os
from llama_index.llms.openai import OpenAI

os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
llm = OpenAI(model="gpt-4o")

SimpleChatEngine

from llama_index.core import SimpleChatEngine

chat_engine = SimpleChatEngine.from_defaults(
    llm=llm, memory=memory  # set your memory here
)

# Start the chat
response = chat_engine.chat("Hi, My name is Mayank")
print(response)

Initialize the tools

from llama_index.core.tools import FunctionTool


def call_fn(name: str):
    """Call the provided name.
    Args:
        name: str (Name of the person)
    """
    print(f"Calling... {name}")


def email_fn(name: str):
    """Email the provided name.
    Args:
        name: str (Name of the person)
    """
    print(f"Emailing... {name}")


call_tool = FunctionTool.from_defaults(fn=call_fn)
email_tool = FunctionTool.from_defaults(fn=email_fn)

FunctionAgent

from llama_index.core.agent.workflow import FunctionAgent

agent = FunctionAgent(
    tools=[call_tool, email_tool],
    llm=llm,
)

# Start the chat
response = await agent.run("Hi, My name is Mayank", memory=memory)
print(response)

ReActAgent

from llama_index.core.agent.workflow import ReActAgent

agent = ReActAgent(
    tools=[call_tool, email_tool],
    llm=llm,
)

# Start the chat
response = await agent.run("Hi, My name is Mayank", memory=memory)
print(response)

Note: For more examples, refer to the Notebooks.

