A powerful Memory LLM library with Hierarchical Memory and Multi-Backend support


Mem-LLM

PyPI version Python 3.8+ License: MIT

Mem-LLM is a Python framework for building privacy-first, memory-enabled AI assistants that run 100% locally. It combines persistent multi-user conversation history with optional knowledge bases, multiple storage backends, vector search, response quality metrics, and tight integration with Ollama and LM Studio, so you can experiment locally and deploy production-ready workflows with quality monitoring and semantic understanding. Everything stays private and offline.

🆕 What's New in v2.3.2

🔧 System Intelligence

  • ✅ New Tools - get_system_info for hardware context and generate_random for secure IDs.
  • ✅ Optimized Dependencies - Added psutil for precise system resource monitoring.

🎨 UI & UX Improvements

  • ✅ Copy to Clipboard - One-click copy for all chat messages and workflow logs.
  • ✅ Session Context - Status bar now displays the active User ID in real time.
  • ✅ Modernized Components - Refined message bubbles and hover states.

🆕 What's New in v2.3.0 - "Neural Nexus"

⚙️ Agent Workflow Engine (NEW)

  • ✅ Structured Agents - Define multi-step workflows like "Deep Research" or "Content Creation".
  • ✅ Streaming UI - Real-time visualization of workflow steps as they execute.
  • ✅ Context Sharing - Data flows automatically between steps in a workflow.

🕸️ Knowledge Graph Memory (NEW)

  • ✅ Graph Extraction - Automatically extracts entities and relationships from conversations.
  • ✅ Interactive Visualization - View your agent's knowledge graph in the new Web UI tab.
  • ✅ NetworkX Integration - Powerful graph operations and persistence.
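The extraction pipeline itself is internal to mem-llm, but the underlying idea maps directly onto NetworkX. A minimal sketch, assuming hypothetical subject-relation-object triples such as an extractor might emit from a conversation (not the library's actual API):

```python
import networkx as nx

# Hypothetical triples an extractor might produce from chat history.
triples = [
    ("Alice", "loves", "Python"),
    ("Python", "is_a", "programming language"),
    ("Alice", "works_at", "Acme"),
]

# Store each relation as a labeled directed edge.
graph = nx.DiGraph()
for subj, rel, obj in triples:
    graph.add_edge(subj, obj, relation=rel)

# Queries then become plain graph traversals, e.g. everything known about Alice:
facts_about_alice = [
    (u, data["relation"], v)
    for u, v, data in graph.out_edges("Alice", data=True)
]
```

Because the graph is an ordinary NetworkX object, it can be persisted or visualized with any of NetworkX's standard tooling.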

🎨 Premium Web UI (Redesigned)

  • ✅ Modern Aesthetics - Dark mode, glassmorphism, and responsive design.
  • ✅ New Features - File uploads (📎) and Workflow Management tab.
  • ✅ LM Studio Integration - Auto-configuration for local models like gemma-3-4b.

What's New in v2.2.3

🧠 Hierarchical Memory System (NEW - Major Feature)

  • ✅ 4-Layer Cognitive Architecture - Episode, Trace, Category, and Domain layers
  • ✅ Auto-Categorization - Intelligent topic detection and classification
  • ✅ Context Injection - Smarter, more relevant context for LLMs
  • ✅ Backward Compatible - Works seamlessly with existing memory systems
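To make the four layers concrete, here is a hypothetical sketch of the roll-up idea (raw episodes grouped into per-topic traces, traces into categories, categories into domains). The class name, keyword rules, and methods are illustrative only, not mem-llm's internal API:

```python
from collections import defaultdict

# Illustrative keyword rules; real auto-categorization is smarter.
KEYWORD_MAP = {
    "python": ("programming", "technology"),
    "pasta": ("cooking", "lifestyle"),
}

def categorize(text):
    for keyword, (category, domain) in KEYWORD_MAP.items():
        if keyword in text.lower():
            return category, domain
    return "general", "misc"

class HierarchicalMemory:
    def __init__(self):
        self.episodes = []               # layer 1: raw messages
        self.traces = defaultdict(list)  # layer 2: episode ids per category
        self.domains = defaultdict(set)  # layers 3-4: categories per domain

    def add(self, text):
        category, domain = categorize(text)
        self.episodes.append(text)
        self.traces[category].append(len(self.episodes) - 1)
        self.domains[domain].add(category)

    def context_for(self, text, limit=3):
        # Context injection: only surface episodes from the same category
        # as the incoming message, instead of the whole history.
        category, _ = categorize(text)
        return [self.episodes[i] for i in self.traces[category][-limit:]]

mem = HierarchicalMemory()
mem.add("I love Python decorators")
mem.add("Best pasta recipe ever")
mem.add("How do Python generators work?")
```

The payoff is in `context_for`: a new Python question pulls in only the two Python episodes, keeping the injected context small and on-topic.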

What's New in v2.2.0

🤖 Multi-Agent Systems (NEW - Major Feature)

  • ✅ Collaborative AI Agents - Multiple specialized agents working together
  • ✅ BaseAgent - Role-based agents (Researcher, Analyst, Writer, Validator, Coordinator)
  • ✅ AgentRegistry - Centralized agent management and health monitoring
  • ✅ CommunicationHub - Thread-safe inter-agent messaging and broadcast channels
  • ✅ 29 New Tests - Comprehensive test coverage (84-98%)
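The core pattern behind a communication hub is a per-agent inbox plus broadcast channels. A minimal thread-safe sketch of that idea (class and method names are illustrative; mem-llm's CommunicationHub API may differ):

```python
import threading
from collections import defaultdict
from queue import Queue

class Hub:
    """Illustrative inter-agent hub: direct messages plus channels."""

    def __init__(self):
        self._lock = threading.Lock()
        self._inboxes = defaultdict(Queue)      # Queue is itself thread-safe
        self._subscribers = defaultdict(set)    # channel -> agent names

    def send(self, recipient, message):
        self._inboxes[recipient].put(message)

    def subscribe(self, channel, agent):
        with self._lock:
            self._subscribers[channel].add(agent)

    def broadcast(self, channel, message):
        with self._lock:
            targets = list(self._subscribers[channel])
        for agent in targets:
            self._inboxes[agent].put(message)

    def receive(self, agent):
        return self._inboxes[agent].get_nowait()

hub = Hub()
hub.subscribe("research", "analyst")
hub.subscribe("research", "writer")
hub.broadcast("research", "source found")
hub.send("writer", "draft the intro")
```

Each agent drains its own queue, so producers and consumers can run on separate threads without racing on shared state.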

What's New in v2.1.4

📊 Conversation Analytics (NEW)

  • ✅ Deep Insights - Analyze user engagement, topics, and activity patterns
  • ✅ Visual Reports - Export analytics to JSON, CSV, or Markdown
  • ✅ Engagement Tracking - Monitor active days, session length, and interaction frequency

📋 Config Presets (NEW)

  • ✅ Instant Setup - Initialize specialized agents with one line of code
  • ✅ 8 Built-in Presets - chatbot, code_assistant, creative_writer, tutor, analyst, translator, summarizer, researcher
  • ✅ Custom Presets - Save and reuse your own agent configurations
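Conceptually, a preset is just a named bundle of settings that user overrides are layered on top of. A hypothetical sketch of that mechanism (preset contents and the `from_preset` helper are invented for illustration; the real presets ship inside mem-llm):

```python
# Hypothetical preset table; real preset contents live inside mem-llm.
PRESETS = {
    "chatbot": {
        "system_prompt": "You are a friendly assistant.",
        "temperature": 0.7,
    },
    "code_assistant": {
        "system_prompt": "You are an expert programmer.",
        "temperature": 0.2,
    },
}

def from_preset(name, **overrides):
    # Start from a copy of the preset, then layer user overrides on top.
    # A "custom preset" is just a saved override dict.
    config = dict(PRESETS[name])
    config.update(overrides)
    return config

cfg = from_preset("code_assistant", temperature=0.0)
```

Copying before updating matters: it keeps the shared preset table untouched when one caller customizes a value.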

What's New in v2.1.3

🚀 Enhanced Tool Execution

  • ✅ Smart Parser - Understands natural-language tool calls
  • ✅ Better Prompts - Clear DO/DON'T examples for the LLM
  • ✅ More Reliable - Tools execute even when the LLM doesn't follow the exact format
  • Function Calling (v2.0.0) – LLMs can call external Python functions
  • Memory-Aware Tools (v2.0.0) – Agents search their own conversation history
  • 18+ Built-in Tools (v2.0.0) – Math, text, file, utility, memory, and async tools
  • Custom Tools (v2.0.0) – Easy @tool decorator for your functions
  • Tool Chaining (v2.0.0) – Automatic multi-tool workflows
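A decorator like @tool typically does two things: register the function under a name the LLM can call, and validate arguments before execution. A minimal sketch in that spirit (the registry name and internals here are invented; mem-llm's real decorator also handles schemas, async tools, and chaining):

```python
import re

# Hypothetical registry mapping tool names to callables.
TOOL_REGISTRY = {}

def tool(name, pattern=None):
    """Register a function as a tool, with optional regex arg validation."""
    def wrap(fn):
        def checked(**kwargs):
            # Reject any argument that fails its validation pattern
            # before the wrapped function ever runs.
            for arg, regex in (pattern or {}).items():
                if not re.fullmatch(regex, str(kwargs.get(arg, ""))):
                    raise ValueError(f"invalid value for {arg!r}")
            return fn(**kwargs)
        TOOL_REGISTRY[name] = checked
        return checked
    return wrap

@tool(name="add", pattern={"a": r"\d+"})
def add(a, b):
    return int(a) + int(b)
```

The model (or the parser acting on its output) then dispatches by name, e.g. `TOOL_REGISTRY["add"](a="25", b="10")`, and malformed arguments fail fast with a ValueError instead of reaching the function.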

Core Features

  • 100% Local & Private (v1.3.6) – No cloud dependencies, all processing on your machine.
  • Streaming Responses (v1.3.3+) – Real-time ChatGPT-style streaming for Ollama and LM Studio.
  • REST API Server (v1.3.3+) – FastAPI-based server with WebSocket and SSE streaming support.
  • Web UI (v1.3.3+) – Modern 3-page interface (Chat, Memory Management, Metrics Dashboard).
  • Persistent Memory – Store and recall conversation history across sessions for each user.
  • Multi-Backend Support (v1.3.0+) – Choose between Ollama and LM Studio with a unified API.
  • Auto-Detection (v1.3.0+) – Automatically find and use an available local LLM service.
  • Response Metrics (v1.3.1+) – Track confidence, latency, KB usage, and quality analytics.
  • Vector Search (v1.3.2+) – Semantic search with ChromaDB, cross-lingual support.
  • Flexible Storage – Choose between lightweight JSON files or a SQLite database for production scenarios.
  • Knowledge Bases – Load categorized Q&A content to augment model responses with authoritative answers.
  • Dynamic Prompting – Automatically adapts prompts based on the features you enable, reducing hallucinations.
  • CLI & Tools – Includes a command-line interface plus utilities for searching, exporting, and auditing stored memories.
  • Security Features (v1.1.0+) – Prompt injection detection with risk-level assessment (opt-in).
  • High Performance (v1.1.0+) – Thread-safe operations with 16K+ msg/s throughput, <1ms search latency.
  • Conversation Summarization (v1.2.0+) – Automatic token compression (~40-60% reduction).
  • Multi-Database Support (v1.2.0+) – Export/import to PostgreSQL, MongoDB, JSON, CSV, SQLite.
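The dynamic-prompting feature above can be sketched in a few lines: the system prompt includes instructions only for features that are actually switched on, so the model is never told about capabilities it doesn't have. The fragment texts below are hypothetical, not mem-llm's actual prompts:

```python
# Illustrative prompt fragments, one per optional feature.
FRAGMENTS = {
    "memory": "You can recall the user's earlier messages.",
    "knowledge_base": "Prefer answers from the provided knowledge base.",
    "tools": "You may call the registered tools when helpful.",
}

def build_system_prompt(enabled_features):
    # Assemble the prompt from a base instruction plus one fragment
    # per enabled feature; unknown feature names are ignored.
    parts = ["You are a helpful assistant."]
    parts.extend(FRAGMENTS[f] for f in enabled_features if f in FRAGMENTS)
    return " ".join(parts)

prompt = build_system_prompt(["memory", "tools"])
```

Leaving out instructions for disabled features is what reduces hallucinations: a model that is never told about a knowledge base has no reason to pretend it consulted one.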

Repository Layout

  • Memory LLM/ – Core Python package (mem_llm), configuration examples, packaging metadata, and detailed module-level documentation.
  • examples/ – Sample scripts that demonstrate common usage patterns.
  • LICENSE – MIT license for the project.

Looking for API docs or more detailed examples? Start with Memory LLM/README.md, which includes extensive usage guides, configuration options, and advanced workflows.

Quick Start

1. Installation

pip install mem-llm

# Or with optional features
pip install mem-llm[databases]  # PostgreSQL + MongoDB
pip install mem-llm[postgresql]  # PostgreSQL only
pip install mem-llm[mongodb]     # MongoDB only

# Vector search support (v1.3.2+)
pip install chromadb sentence-transformers

2. Choose Your Backend

Option A: Ollama (Local, Free)

# Install Ollama from https://ollama.ai
ollama pull granite4:3b
ollama serve

Option B: LM Studio (Local, GUI)

# Download from https://lmstudio.ai
# Load a model and start server

3. Create and Chat

from mem_llm import MemAgent

# Option A: Ollama
agent = MemAgent(backend='ollama', model="granite4:3b")

# Option B: LM Studio
agent = MemAgent(backend='lmstudio', model="local-model")

# Option C: Auto-detect
agent = MemAgent(auto_detect_backend=True)

# Use it!
agent.set_user("alice")
print(agent.chat("My name is Alice and I love Python!"))
print(agent.chat("What do I love?"))  # Agent remembers!

# Streaming response (v1.3.3+)
for chunk in agent.chat_stream("Tell me a story"):
    print(chunk, end="", flush=True)

# NEW in v2.0.0: Function calling with tools
agent = MemAgent(enable_tools=True)
agent.set_user("alice")
agent.chat("Calculate (25 * 4) + 10")  # Uses built-in calculator
agent.chat("Search my memory for 'Python'")  # Uses memory tool

# NEW in v2.1.0: Async tools & validation
from mem_llm import tool

@tool(
    name="send_email",
    pattern={"email": r'^[\w\.-]+@[\w\.-]+\.\w+$'}  # Email validation
)
def send_email(email: str) -> str:
    return f"Email sent to {email}"

4. Web UI & REST API (v1.3.3+)

# Install with API support
pip install mem-llm[api]

# Start API server (serves Web UI automatically)
python -m mem_llm.api_server

# Or use dedicated launcher
mem-llm-web

# Access Web UI at:
# http://localhost:8000          - Chat interface
# http://localhost:8000/memory   - Memory management
# http://localhost:8000/metrics  - Metrics dashboard
# http://localhost:8000/docs     - API documentation

Multi-Backend Examples (v1.3.0+)

from mem_llm import MemAgent

# LM Studio - Fast local inference with GUI
agent = MemAgent(
    backend='lmstudio',
    model='local-model',
    base_url='http://localhost:1234'
)

# Auto-detect - Use any available local backend
agent = MemAgent(auto_detect_backend=True)

# Advanced features still work!
agent = MemAgent(
    backend='ollama',           # NEW in v1.3.0
    model="granite4:3b",
    use_sql=True,              # Thread-safe SQLite storage
    enable_security=True       # Prompt injection protection
)

For advanced configuration (SQL storage, knowledge base support, business mode, etc.), copy config.yaml.example from the package directory and adjust it for your environment.

Test Coverage (v2.1.1)

  • โœ… 20+ examples demonstrating all features
  • โœ… Function Calling (3 examples - basic, memory tools, async+validation)
  • โœ… Ollama and LM Studio backends (14 tests)
  • โœ… Conversation Summarization (5 tests)
  • โœ… Data Export/Import (11 tests - JSON, CSV, SQLite, PostgreSQL, MongoDB)
  • โœ… Core MemAgent functionality (5 tests)
  • โœ… Factory pattern and auto-detection (4 tests)

Performance

  • Write Throughput: 16,666+ records/sec
  • Search Latency: <1ms for 500+ conversations
  • Token Compression: 40-60% reduction with summarization (v1.2.0+)
  • Thread-Safe: Full RLock protection on all SQLite operations
  • Multi-Database: Seamless export/import across 5 formats (v1.2.0+)

Contributing

Contributions, bug reports, and feature requests are welcome! Please open an issue or submit a pull request describing your changes. Make sure to include test coverage and follow the formatting guidelines enforced by the existing codebase.


License

Mem-LLM is released under the MIT License.


