Skip to main content

Mem-LLM is a Python framework for building privacy-first, memory-enabled AI assistants that run 100% locally.

Project description

Mem-LLM

PyPI version Python 3.8+ License: MIT

Mem-LLM is a Python framework for building privacy-first, memory-enabled AI assistants that run 100% locally. The project combines persistent multi-user conversation history with optional knowledge bases, multiple storage backends, vector search capabilities, response quality metrics, and tight integration with Ollama and LM Studio so you can experiment locally and deploy production-ready workflows with quality monitoring and semantic understanding - completely private and offline.

๐Ÿ†• What's New in v2.4.0

๐Ÿ”ง Maintenance & Packaging

  • โœ… Release v2.4.0 โ€” Bumped package metadata and published to PyPI.
  • โœ… Author & Metadata Fixes โ€” Corrected author name and updated packaging metadata and descriptions.
  • โœ… Docs Refresh โ€” README updated for clarity and current features.

๐Ÿ”ง Other improvements

  • โœ… Compatibility โ€” Ensured Python support for 3.8+ and improved packaging workflows.
  • โœ… Cleanup โ€” Removed outdated local build artifacts from dist/ before publishing.

๐Ÿ†• What's New in v2.3.0 - "Neural Nexus"

โš™๏ธ Agent Workflow Engine (NEW)

  • โœ… Structured Agents - Define multi-step workflows like "Deep Research" or "Content Creation".
  • โœ… Streaming UI - Real-time visualization of workflow steps as they execute.
  • โœ… Context Sharing - Data flows automatically between steps in a workflow.

๐Ÿ•ธ๏ธ Knowledge Graph Memory (NEW)

  • โœ… Graph Extraction - Automatically extracts entities and relationships from conversations.
  • โœ… Interactive Visualization - View your agent's knowledge graph in the new Web UI tab.
  • โœ… NetworkX Integration - Powerful graph operations and persistence.

๐ŸŽจ Premium Web UI (Redesigned)

  • โœ… Modern Aesthetics - Dark mode, glassmorphism, and responsive design.
  • โœ… New Features - File uploads (๐Ÿ“Ž) and Workflow Management tab.
  • โœ… LM Studio Integration - Auto-configuration for local models like gemma-3-4b.

What's New in v2.2.3

๐Ÿง  Hierarchical Memory System (NEW - Major Feature)

  • โœ… 4-Layer Cognitive Architecture - Episode, Trace, Category, and Domain layers
  • โœ… Auto-Categorization - Intelligent topic detection and classification
  • โœ… Context Injection - Smarter, more relevant context for LLMs
  • โœ… Backward Compatible - Works seamlessly with existing memory systems

What's New in v2.2.0

๐Ÿค– Multi-Agent Systems (NEW - Major Feature)

  • โœ… Collaborative AI Agents - Multiple specialized agents working together
  • โœ… BaseAgent - Role-based agents (Researcher, Analyst, Writer, Validator, Coordinator)
  • โœ… AgentRegistry - Centralized agent management and health monitoring
  • โœ… CommunicationHub - Thread-safe inter-agent messaging and broadcast channels
  • โœ… 29 New Tests - Comprehensive test coverage (84-98%)

What's New in v2.1.4

๐Ÿ“Š Conversation Analytics (NEW)

  • โœ… Deep Insights - Analyze user engagement, topics, and activity patterns
  • โœ… Visual Reports - Export analytics to JSON, CSV, or Markdown
  • โœ… Engagement Tracking - Monitor active days, session length, and interaction frequency

๐Ÿ“‹ Config Presets (NEW)

  • โœ… Instant Setup - Initialize specialized agents with one line of code
  • โœ… 8 Built-in Presets - chatbot, code_assistant, creative_writer, tutor, analyst, translator, summarizer, researcher
  • โœ… Custom Presets - Save and reuse your own agent configurations

What's New in v2.1.3

๐Ÿš€ Enhanced Tool Execution

  • โœ… Smart parser - Understands natural language tool calls

  • โœ… Better prompts - Clear DO/DON'T examples for LLM

  • โœ… More reliable - Tools execute even when LLM doesn't follow exact format

  • Function Calling (v2.0.0) โ€“ LLMs can call external Python functions

  • Memory-Aware Tools (v2.0.0) โ€“ Agents search their own conversation history

  • 18+ Built-in Tools (v2.0.0) โ€“ Math, text, file, utility, memory, and async tools

  • Custom Tools (v2.0.0) โ€“ Easy @tool decorator for your functions

  • Tool Chaining (v2.0.0) โ€“ Automatic multi-tool workflows

Core Features

  • 100% Local & Private (v1.3.6) โ€“ No cloud dependencies, all processing on your machine.
  • Streaming Response (v1.3.3+) โ€“ Real-time ChatGPT-style streaming for Ollama and LM Studio.
  • REST API Server (v1.3.3+) โ€“ FastAPI-based server with WebSocket and SSE streaming support.
  • Web UI (v1.3.3+) โ€“ Modern 3-page interface (Chat, Memory Management, Metrics Dashboard).
  • Persistent Memory โ€“ Store and recall conversation history across sessions for each user.
  • Multi-Backend Support (v1.3.0+) โ€“ Choose between Ollama and LM Studio with unified API.
  • Auto-Detection (v1.3.0+) โ€“ Automatically find and use available local LLM service.
  • Response Metrics (v1.3.1+) โ€“ Track confidence, latency, KB usage, and quality analytics.
  • Vector Search (v1.3.2+) โ€“ Semantic search with ChromaDB, cross-lingual support.
  • Flexible Storage โ€“ Choose between lightweight JSON files or a SQLite database for production scenarios.
  • Knowledge Bases โ€“ Load categorized Q&A content to augment model responses with authoritative answers.
  • Dynamic Prompting โ€“ Automatically adapts prompts based on the features you enable, reducing hallucinations.
  • CLI & Tools โ€“ Includes a command-line interface plus utilities for searching, exporting, and auditing stored memories.
  • Security Features (v1.1.0+) โ€“ Prompt injection detection with risk-level assessment (opt-in).
  • High Performance (v1.1.0+) โ€“ Thread-safe operations with 16K+ msg/s throughput, <1ms search latency.
  • Conversation Summarization (v1.2.0+) โ€“ Automatic token compression (~40-60% reduction).
  • Multi-Database Support (v1.2.0+) โ€“ Export/import to PostgreSQL, MongoDB, JSON, CSV, SQLite.

Repository Layout

  • Memory LLM/ โ€“ Core Python package (mem_llm), configuration examples, packaging metadata, and detailed module-level documentation.
  • examples/ โ€“ Sample scripts that demonstrate common usage patterns.
  • LICENSE โ€“ MIT license for the project.

Looking for API docs or more detailed examples? Start with Memory LLM/README.md, which includes extensive usage guides, configuration options, and advanced workflows.

Quick Start

1. Installation

pip install mem-llm

# Or with optional features
pip install mem-llm[databases]  # PostgreSQL + MongoDB
pip install mem-llm[postgresql]  # PostgreSQL only
pip install mem-llm[mongodb]     # MongoDB only

# Vector search support (v1.3.2+)
pip install chromadb sentence-transformers

2. Choose Your Backend

Option A: Ollama (Local, Free)

# Install Ollama from https://ollama.ai
ollama pull granite4:3b
ollama serve

Option B: LM Studio (Local, GUI)

# Download from https://lmstudio.ai
# Load a model and start server

3. Create and Chat

from mem_llm import MemAgent

# Option A: Ollama
agent = MemAgent(backend='ollama', model="granite4:3b")

# Option B: LM Studio
agent = MemAgent(backend='lmstudio', model="local-model")

# Option C: Auto-detect
agent = MemAgent(auto_detect_backend=True)

# Use it!
agent.set_user("alice")
print(agent.chat("My name is Alice and I love Python!"))
print(agent.chat("What do I love?"))  # Agent remembers!

# Streaming response (v1.3.3+)
for chunk in agent.chat_stream("Tell me a story"):
    print(chunk, end="", flush=True)

# NEW in v2.0.0: Function calling with tools
agent = MemAgent(enable_tools=True)
agent.set_user("alice")
agent.chat("Calculate (25 * 4) + 10")  # Uses built-in calculator
agent.chat("Search my memory for 'Python'")  # Uses memory tool

# NEW in v2.1.0: Async tools & validation
from mem_llm import tool

@tool(
    name="send_email",
    pattern={"email": r'^[\w\.-]+@[\w\.-]+\.\w+$'}  # Email validation
)
def send_email(email: str) -> str:
    return f"Email sent to {email}"

4. Web UI & REST API (v1.3.3+)

# Install with API support
pip install mem-llm[api]

# Start API server (serves Web UI automatically)
python -m mem_llm.api_server

# Or use dedicated launcher
mem-llm-web

# Access Web UI at:
# http://localhost:8000          - Chat interface
# http://localhost:8000/memory   - Memory management
# http://localhost:8000/metrics  - Metrics dashboard
# http://localhost:8000/docs     - API documentation

Multi-Backend Examples (v1.3.0+)

from mem_llm import MemAgent

# LM Studio - Fast local inference with GUI
agent = MemAgent(
    backend='lmstudio',
    model='local-model',
    base_url='http://localhost:1234'
)

# Auto-detect - Use any available local backend
agent = MemAgent(auto_detect_backend=True)

# Advanced features still work!
agent = MemAgent(
    backend='ollama',           # NEW in v1.3.0
    model="granite4:3b",
    use_sql=True,              # Thread-safe SQLite storage
    enable_security=True       # Prompt injection protection
)

For advanced configuration (SQL storage, knowledge base support, business mode, etc.), copy config.yaml.example from the package directory and adjust it for your environment.

Test Coverage (v2.1.1)

  • โœ… 20+ examples demonstrating all features
  • โœ… Function Calling (3 examples - basic, memory tools, async+validation)
  • โœ… Ollama and LM Studio backends (14 tests)
  • โœ… Conversation Summarization (5 tests)
  • โœ… Data Export/Import (11 tests - JSON, CSV, SQLite, PostgreSQL, MongoDB)
  • โœ… Core MemAgent functionality (5 tests)
  • โœ… Factory pattern and auto-detection (4 tests)

Performance

  • Write Throughput: 16,666+ records/sec
  • Search Latency: <1ms for 500+ conversations
  • Token Compression: 40-60% reduction with summarization (v1.2.0+)
  • Thread-Safe: Full RLock protection on all SQLite operations
  • Multi-Database: Seamless export/import across 5 formats (v1.2.0+)

Contributing

Contributions, bug reports, and feature requests are welcome! Please open an issue or submit a pull request describing your changes. Make sure to include test coverage and follow the formatting guidelines enforced by the existing codebase.

Links

License

Mem-LLM is released under the MIT License.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mem_llm-2.4.0.tar.gz (130.3 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

mem_llm-2.4.0-py3-none-any.whl (133.0 kB view details)

Uploaded Python 3

File details

Details for the file mem_llm-2.4.0.tar.gz.

File metadata

  • Download URL: mem_llm-2.4.0.tar.gz
  • Upload date:
  • Size: 130.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.10

File hashes

Hashes for mem_llm-2.4.0.tar.gz
Algorithm Hash digest
SHA256 32595423bfa75a6c20c9c5cae3142664154c62f56aadc4016f2b196ed9490086
MD5 c6b689a104a632e47185dfcbd8387784
BLAKE2b-256 833bcecf1816b1b6827765555f3ba6a6ebb43e747e3b515eb63f16cf1b515260

See more details on using hashes here.

File details

Details for the file mem_llm-2.4.0-py3-none-any.whl.

File metadata

  • Download URL: mem_llm-2.4.0-py3-none-any.whl
  • Upload date:
  • Size: 133.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.10

File hashes

Hashes for mem_llm-2.4.0-py3-none-any.whl
Algorithm Hash digest
SHA256 303380d174372870e5ef07f166c54fa4fc94f4c3e5f28a70293e1ea1ede759c2
MD5 0b0283a2a97709c5e0fe195f1fe5730e
BLAKE2b-256 21e8dbac5342425944ea1a9108dfbe3e0293f04e2b1083ed2c23a14b5acaed0c

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page