
LlamaIndex Memory Integration: Mem0

Installation

To install the required package, run:

%pip install llama-index-memory-mem0

Setup with Mem0 Platform

  1. Set your Mem0 Platform API key as an environment variable, replacing <your-mem0-api-key> with your actual key:

Note: You can obtain your Mem0 Platform API key from the Mem0 Platform.

import os

os.environ["MEM0_API_KEY"] = "<your-mem0-api-key>"

  2. Import the necessary modules and create a Mem0Memory instance:

from llama_index.memory.mem0 import Mem0Memory

context = {"user_id": "user_1"}
memory = Mem0Memory.from_client(
    context=context,
    api_key="<your-mem0-api-key>",
    search_msg_limit=4,  # optional, default is 5
)

The Mem0 context is used to identify the user, agent, or conversation in Mem0. At least one of these fields must be passed to the Mem0Memory constructor. The context can include any of the following:

context = {
    "user_id": "user_1",
    "agent_id": "agent_1",
    "run_id": "run_1",
}

search_msg_limit is optional and defaults to 5. It sets the number of messages from the chat history used for memory retrieval from Mem0. More messages provide more context for retrieval, but also increase retrieval time and may surface unwanted results.
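For example (an illustrative sketch; the IDs and the limit are placeholder values), a memory scoped to a single agent with a smaller retrieval window:

import os
from llama_index.memory.mem0 import Mem0Memory

# Only one context field is required; this memory is scoped to an agent.
memory = Mem0Memory.from_client(
    context={"agent_id": "agent_1"},
    api_key=os.environ["MEM0_API_KEY"],
    search_msg_limit=2,  # fewer history messages: faster retrieval, less context
)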

Setup with Mem0 OSS

  1. Set up Mem0 OSS by providing the configuration details:

Note: To learn more about Mem0 OSS, read the Mem0 OSS Quickstart.

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test_9",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 1536,  # Change this according to your local model's dimensions
        },
    },
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o",
            "temperature": 0.2,
            "max_tokens": 1500,
        },
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "version": "v1.1",
}
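
Note that the config above assumes a Qdrant instance is reachable at localhost:6333, and the openai providers read your API key from the environment, so set it before creating the memory:

import os

os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"  # used by the openai llm and embedder providers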

  2. Create a Mem0Memory instance:

memory = Mem0Memory.from_config(
    context=context,
    config=config,
    search_msg_limit=4,  # optional, default is 5
)

Basic Usage

Currently, Mem0 memory is supported in SimpleChatEngine, FunctionCallingAgent, and ReActAgent.

Initialize the LLM

import os
from llama_index.llms.openai import OpenAI

os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
llm = OpenAI(model="gpt-4o")

SimpleChatEngine

from llama_index.core.chat_engine import SimpleChatEngine

agent = SimpleChatEngine.from_defaults(
    llm=llm, memory=memory  # set your memory here
)

# Start the chat
response = agent.chat("Hi, My name is Mayank")
print(response)
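
Because the memory is keyed by the context, facts from earlier turns can be recalled later (an illustrative check; the exact response wording will vary):

response = agent.chat("What is my name?")
print(response)  # should recall "Mayank" from Mem0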

Initialize the tools

from llama_index.core.tools import FunctionTool


def call_fn(name: str):
    """Call the provided name.
    Args:
        name: str (Name of the person)
    """
    print(f"Calling... {name}")


def email_fn(name: str):
    """Email the provided name.
    Args:
        name: str (Name of the person)
    """
    print(f"Emailing... {name}")


call_tool = FunctionTool.from_defaults(fn=call_fn)
email_tool = FunctionTool.from_defaults(fn=email_fn)

FunctionCallingAgent

from llama_index.core.agent import FunctionCallingAgent

agent = FunctionCallingAgent.from_tools(
    [call_tool, email_tool],
    llm=llm,
    memory=memory,
    verbose=True,
)

# Start the chat
response = agent.chat("Hi, My name is Mayank")
print(response)
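
With the name already stored in memory, a follow-up request can use it in a tool call (a minimal sketch; whether and how a tool is invoked depends on the LLM):

response = agent.chat("Email me a summary of our conversation")
print(response)  # the agent can retrieve "Mayank" from memory and call email_fn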

ReActAgent

from llama_index.core.agent import ReActAgent

agent = ReActAgent.from_tools(
    [call_tool, email_tool],
    llm=llm,
    memory=memory,
    verbose=True,
)

# Start the chat
response = agent.chat("Hi, My name is Mayank")
print(response)
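
As with the function-calling agent, the ReAct agent can combine recalled memory with the tools (illustrative; the agent's reasoning trace will vary):

response = agent.chat("Give me a call to discuss this")
print(response)  # the agent can recall "Mayank" and invoke call_fn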

Note: For more examples, refer to the Notebooks.
