Mem0llama

A specialized fork of Mem0 optimized for small local Large Language Models (LLMs) running through Ollama.

Overview

Mem0llama improves the compatibility of Mem0's graph memory system with small local LLMs while preserving its integration with Qdrant (for vector storage/RAG) and Neo4j (for graph relationships). It ensures structured, predictable LLM outputs by combining Ollama's format argument with Pydantic models.
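
Both pieces show up in a minimal sketch of the pattern (the Entity models below are illustrative, not this fork's actual models.py; passing a JSON schema to the format parameter requires a recent Ollama and ollama-python release):

from ollama import chat
from pydantic import BaseModel


class Entity(BaseModel):
    name: str
    entity_type: str


class EntityList(BaseModel):
    entities: list[Entity]


# Constrain the reply to EntityList's JSON schema via Ollama's format
# parameter, then validate the reply back into Pydantic objects.
response = chat(
    model="llama3",
    messages=[{"role": "user", "content": "Extract entities: Alice writes Python."}],
    format=EntityList.model_json_schema(),
)
entities = EntityList.model_validate_json(response["message"]["content"])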

Key Features

  • Structured Output Support: Pydantic models standardize LLM outputs for reliable entity and relationship extraction
  • LLM Formatting Utilities: Helpers build structured prompts and parse responses with robust error handling
  • Ollama Integration: Optimized for local LLMs served through Ollama, with format parameter configuration
  • Neo4j Community Edition Support: Enhanced compatibility with Neo4j Community Edition
  • Preserved Functionality: Maintains all existing Mem0 features, including graph memory, search, and retrieval mechanisms

Components

Core Enhancements

  1. Pydantic Models (models.py)

    • Standardized models for entities, relationships, and memory operations
    • Ensures consistent data structures for LLM outputs
  2. LLM Formatting Utilities (llm_formatter.py)

    • Functions to create structured prompts for various operations
    • Response parsing with comprehensive error handling (see the sketch after this list)
    • Ollama format parameter configuration
  3. Graph Memory Improvements (graph_memory.py)

    • Modified node retrieval and relationship establishment
    • Enhanced entity and relationship extraction
    • Added LLM configuration adaptation
  4. Bug Fixes

    • Fixed "relatationship" typo in multiple files
    • Improved error handling for inconsistent LLM responses
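
The defensive response parsing referenced in item 2 might look like the following sketch; the function and model names here are assumptions for illustration, not this fork's actual API:

import json

from pydantic import BaseModel, ValidationError


class Relationship(BaseModel):
    source: str
    relationship: str
    target: str


def parse_relationships(raw: str) -> list[Relationship]:
    """Parse an LLM response into Relationship models, tolerating noise."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # Small local models sometimes wrap JSON in prose; try to salvage
        # the first {...} span before giving up.
        start, end = raw.find("{"), raw.rfind("}")
        if start == -1 or end <= start:
            return []
        try:
            data = json.loads(raw[start : end + 1])
        except json.JSONDecodeError:
            return []

    items = data.get("relationships", []) if isinstance(data, dict) else data
    results = []
    for item in items:
        try:
            results.append(Relationship.model_validate(item))
        except ValidationError:
            continue  # skip malformed entries rather than failing the batch
    return results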

Getting Started

Prerequisites

  • Python 3.8+
  • Ollama (for local LLMs)
  • Neo4j (Community Edition compatible)
  • Qdrant (for vector storage)

Installation

Docker servers

Start Qdrant:

cd servers/qdrant
./qdrant_run_docker.bat

Start Neo4j:

cd servers/neo4j
./neo4j_run_docker.bat

Python dependencies

pip install -r requirements.txt

Configuration

Create a .env file with the following variables:

LLM_BASE_URL=http://localhost:11434
LLM_API_KEY=
LLM_MODEL=llama3
EMBEDDER_MODEL=nomic-embed-text
QDRANT_HOST=localhost:6333
QDRANT_API_KEY=your_qdrant_api_key
NEO4J_URL=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=your_password
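
These variables can be pulled into Python before building the memory config below; a minimal sketch, assuming the python-dotenv package is installed:

import os

from dotenv import load_dotenv

load_dotenv()  # read the variables above from .env into the environment

llm_base_url = os.getenv("LLM_BASE_URL", "http://localhost:11434")
qdrant_api_key = os.getenv("QDRANT_API_KEY")
neo4j_password = os.getenv("NEO4J_PASSWORD")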

Basic Usage

import os

from mem0 import Memory

# Qdrant API key from the environment (see Configuration above)
qdrant_api_key = os.getenv("QDRANT_API_KEY")

# Initialize Mem0 client
memory_config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "mem0",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 1024,
            "api_key": qdrant_api_key,
        },
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "ollama_base_url": "http://localhost:11434",
            "model": "llama3",
        }
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "bolt://localhost:7687",
            "username": "neo4j",
            "password": "password",
        },
    },
}

memory = Memory.from_config(memory_config)

# Add memories
messages = [
    {"role": "user", "content": "I like to program in Python"},
    {"role": "assistant", "content": "That's great! Python is a versatile language."}
]
memory.add(messages, user_id="user123")

# Search memories
results = memory.search(query="What programming languages do I know?", user_id="user123")
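
The exact shape of search results varies across mem0 versions (a plain list in some releases, a dict with a "results" key in others), so a hedged way to inspect them:

# Handle both result shapes defensively.
hits = results.get("results", results) if isinstance(results, dict) else results
for hit in hits:
    print(hit.get("memory"), hit.get("score"))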

Testing

Run the test script to validate the structured output capabilities:

python ./mem0llama_structured_output.py

For a more interactive experience, try the terminal chatbot:

python ./mem0llama_chat_cli.py

Add the --debug flag to see detailed memory and relation information:

python ./mem0llama_chat_cli.py --debug

License

This project is licensed under the same terms as the original Mem0 project.

Acknowledgements

  • Mem0 - The original memory layer for AI agents
  • Ollama - For running local LLMs
  • Neo4j - Graph database for relationship storage
  • Qdrant - Vector database for RAG

Download files

Source Distribution

mem0llama-0.1.2.tar.gz (67.3 kB)

Built Distribution

mem0llama-0.1.2-py3-none-any.whl (110.1 kB)

File details

Details for the file mem0llama-0.1.2.tar.gz.

File metadata

  • Download URL: mem0llama-0.1.2.tar.gz
  • Size: 67.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.8

File hashes

Algorithm   Hash digest
SHA256      c56a47310032bf2baf5efb1257b50adfaaaeb38e85ba70925312dc9230206fe1
MD5         853604200558f514d49a63f735f97c7e
BLAKE2b-256 00bf1ea2b74d702ba6a01452fcaff5442f654740bbd3a82f01f86af848441f61

File details

Details for the file mem0llama-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: mem0llama-0.1.2-py3-none-any.whl
  • Size: 110.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.8

File hashes

Algorithm   Hash digest
SHA256      f89f0bef6c1c46a946963140c90949f1c1ebbc7c04787c758713ebfbc7883d19
MD5         0c75141732b5f6d02df75086d9e4a98a
BLAKE2b-256 2bd1edf1bb43cd993d35e43c8a73627a8f0f24137de5207cf77f2b3deb565ea6
