Mem-LLM
Memory-enabled AI assistant with local LLM support
Mem-LLM is a powerful Python library that brings persistent memory capabilities to local Large Language Models. Build AI assistants that remember user interactions, manage knowledge bases, and work completely offline with Ollama.
Key Features
- Persistent Memory - Remembers conversations across sessions
- Universal Ollama Support - Works with all Ollama models (Qwen3, DeepSeek, Llama3, Granite, etc.)
- Dual Storage Modes - JSON (simple) or SQLite (advanced) memory backends
- Knowledge Base - Built-in FAQ/support system with categorized entries
- Dynamic Prompts - Context-aware system prompts that adapt to active features
- Multi-User Support - Separate memory spaces for different users
- Memory Tools - Search, export, and manage stored memories
- Flexible Configuration - Personal or business usage modes
- Production Ready - Comprehensive test suite with 34+ automated tests
- 100% Local & Private - No cloud dependencies; your data stays yours
Quick Start
Installation
pip install mem-llm
Prerequisites
Install and start Ollama:
# Install Ollama (visit https://ollama.ai)
# Then pull a model
ollama pull granite4:tiny-h
# Start Ollama service
ollama serve
Basic Usage
from mem_llm import MemAgent
# Create an agent
agent = MemAgent(model="granite4:tiny-h")
# Set user and chat
agent.set_user("alice")
response = agent.chat("My name is Alice and I love Python!")
print(response)
# Memory persists across sessions
response = agent.chat("What's my name and what do I love?")
print(response) # Agent remembers: "Your name is Alice and you love Python!"
That's it! Just 5 lines of code to get started.
Usage Examples
Multi-User Conversations
from mem_llm import MemAgent
agent = MemAgent()
# User 1
agent.set_user("alice")
agent.chat("I'm a Python developer")
# User 2
agent.set_user("bob")
agent.chat("I'm a JavaScript developer")
# Each user has separate memory
agent.set_user("alice")
response = agent.chat("What do I do?") # "You're a Python developer"
Advanced Configuration
from mem_llm import MemAgent
# Use SQL database with knowledge base
agent = MemAgent(
    model="qwen3:8b",
    use_sql=True,
    load_knowledge_base=True,
    config_file="config.yaml"
)
# Add a knowledge base entry
agent.add_kb_entry(
    category="FAQ",
    question="What are your hours?",
    answer="We're open 9 AM - 5 PM EST, Monday-Friday"
)
# Agent will use KB to answer
response = agent.chat("When are you open?")
Memory Tools
from mem_llm import MemAgent
agent = MemAgent(use_sql=True)
agent.set_user("alice")
# Chat with memory
agent.chat("I live in New York")
agent.chat("I work as a data scientist")
# Search memories
results = agent.search_memories("location")
print(results) # Finds "New York" memory
# Export all data
data = agent.export_user_data()
print(f"Total memories: {len(data['memories'])}")
# Get statistics
stats = agent.get_memory_stats()
print(f"Users: {stats['total_users']}, Memories: {stats['total_memories']}")
CLI Interface
# Interactive chat
mem-llm chat
# With specific model
mem-llm chat --model llama3:8b
# Customer service mode
mem-llm customer-service
# Knowledge base management
mem-llm kb add --category "FAQ" --question "How to install?" --answer "Run: pip install mem-llm"
mem-llm kb list
mem-llm kb search "install"
Usage Modes
Personal Mode (Default)
- Single user with JSON storage
- Simple and lightweight
- Perfect for personal projects
- No configuration needed
agent = MemAgent() # Automatically uses personal mode
Business Mode
- Multi-user with SQL database
- Knowledge base support
- Advanced memory tools
- Requires configuration file
agent = MemAgent(
    config_file="config.yaml",
    use_sql=True,
    load_knowledge_base=True
)
Configuration
Create a config.yaml file for advanced features:
# Usage mode: 'personal' or 'business'
usage_mode: business

# LLM settings
llm:
  model: granite4:tiny-h
  base_url: http://localhost:11434
  temperature: 0.7
  max_tokens: 2000

# Memory settings
memory:
  type: sql  # or 'json'
  db_path: ./data/memory.db

# Knowledge base
knowledge_base:
  enabled: true
  kb_path: ./data/knowledge_base.db

# Logging
logging:
  level: INFO
  file: logs/mem_llm.log
Supported Models
Mem-LLM works with all Ollama models, including:
- Thinking Models: Qwen3, DeepSeek, QwQ
- Standard Models: Llama3, Granite, Phi, Mistral
- Specialized Models: CodeLlama, Vicuna, Neural-Chat
- Any custom model in your Ollama library
Model Compatibility Features
- Automatic thinking-mode detection
- Dynamic prompt adaptation
- Token limit optimization (2000 tokens)
- Automatic retry on empty responses
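Two of the behaviors above can be sketched in a few lines. This is an illustrative sketch only: the helper names, the model-family list, and the `generate` callback are hypothetical, not Mem-LLM's actual internals.

```python
# Hypothetical helpers illustrating thinking-mode detection and
# retry-on-empty behavior; not Mem-LLM's real implementation.

THINKING_FAMILIES = ("qwen3", "deepseek", "qwq")

def is_thinking_model(model_name: str) -> bool:
    """Guess whether a model emits reasoning blocks, based on its family name."""
    return any(family in model_name.lower() for family in THINKING_FAMILIES)

def chat_with_retry(generate, prompt: str, max_attempts: int = 3) -> str:
    """Call the LLM and retry when the response comes back empty."""
    for _ in range(max_attempts):
        reply = generate(prompt).strip()
        if reply:
            return reply
    return ""  # every attempt came back empty
```

A detector like this lets the agent strip reasoning blocks only for models that produce them, while the retry loop papers over the occasional blank completion.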
Architecture
mem-llm/
├── mem_llm/
│   ├── mem_agent.py         # Main agent class
│   ├── memory_manager.py    # JSON memory backend
│   ├── memory_db.py         # SQL memory backend
│   ├── llm_client.py        # Ollama API client
│   ├── knowledge_loader.py  # Knowledge base system
│   ├── dynamic_prompt.py    # Context-aware prompts
│   ├── memory_tools.py      # Memory management tools
│   ├── config_manager.py    # Configuration handler
│   └── cli.py               # Command-line interface
└── examples/                # Usage examples
Advanced Features
Dynamic Prompt System
Prevents hallucinations by only including instructions for enabled features:
agent = MemAgent(use_sql=True, load_knowledge_base=True)
# Agent automatically knows:
# - Knowledge base is available
# - Memory tools are available
# - SQL storage is active
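The idea behind this can be sketched as a prompt builder that only mentions enabled features, so the model is never told about tools it cannot actually use. The function below is a minimal sketch, not Mem-LLM's real `dynamic_prompt` code.

```python
# Sketch of a dynamic prompt builder (hypothetical, not Mem-LLM's code):
# each feature contributes its instructions only when it is enabled.

def build_system_prompt(use_sql: bool = False, knowledge_base: bool = False) -> str:
    sections = ["You are a helpful assistant with persistent memory."]
    if knowledge_base:
        sections.append("A knowledge base is available; prefer its entries for FAQs.")
    if use_sql:
        sections.append("Memory tools (search, export, statistics) are available.")
    return "\n".join(sections)
```

With both flags off, the prompt mentions neither feature, which is exactly what keeps the model from claiming capabilities it does not have.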
Knowledge Base Categories
Organize knowledge by category:
agent.add_kb_entry(category="FAQ", question="...", answer="...")
agent.add_kb_entry(category="Technical", question="...", answer="...")
agent.add_kb_entry(category="Billing", question="...", answer="...")
Memory Search & Export
Powerful memory management:
# Search across all memories
results = agent.search_memories("python", limit=5)
# Export everything
data = agent.export_user_data()
# Get insights
stats = agent.get_memory_stats()
Project Structure
Core Components
- MemAgent: Main interface for building AI assistants
- MemoryManager: JSON-based memory storage (simple)
- SQLMemoryManager: SQLite-based storage (advanced)
- OllamaClient: LLM communication handler
- KnowledgeLoader: Knowledge base management
Optional Features
- MemoryTools: Search, export, statistics
- ConfigManager: YAML configuration
- CLI: Command-line interface
Testing
Run the comprehensive test suite:
# Install dev dependencies
pip install -r requirements-dev.txt
# Run all tests (34+ automated tests)
cd tests
python run_all_tests.py
# Run specific test
python -m pytest test_mem_agent.py -v
Test Coverage
- Core imports and dependencies
- CLI functionality
- Ollama connection and models
- JSON memory operations
- SQL memory operations
- MemAgent features
- Configuration management
- Multi-user scenarios
- Hallucination detection
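As a flavor of what the multi-user tests check, here is a sketch of the isolation property using a stand-in agent, since exercising the real MemAgent requires a running Ollama instance. `FakeAgent` is hypothetical and exists only to make the property runnable here.

```python
# Sketch of the multi-user isolation property, using a hypothetical
# stand-in agent so it runs without Ollama.

class FakeAgent:
    def __init__(self):
        self._store = {}
        self._user = None

    def set_user(self, name):
        self._user = name
        self._store.setdefault(name, [])

    def remember(self, fact):
        self._store[self._user].append(fact)

    def memories(self):
        return list(self._store[self._user])

def test_users_are_isolated():
    agent = FakeAgent()
    agent.set_user("alice")
    agent.remember("Python developer")
    agent.set_user("bob")
    assert agent.memories() == []  # Bob cannot see Alice's memories
```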
Examples
The examples/ directory contains ready-to-run demonstrations:
- 01_hello_world.py - Simplest possible example (5 lines)
- 02_basic_memory.py - Memory persistence basics
- 03_multi_user.py - Multiple users with separate memories
- 04_customer_service.py - Real-world customer service scenario
- 05_knowledge_base.py - FAQ/support system
- 06_cli_demo.py - Command-line interface examples
- 07_document_config.py - Configuration from documents
Development
Setup Development Environment
git clone https://github.com/emredeveloper/Mem-LLM.git
cd Mem-LLM
pip install -e .
pip install -r requirements-dev.txt
Running Tests
pytest tests/ -v --cov=mem_llm
Building Package
python -m build
twine upload dist/*
Requirements
Core Dependencies
- Python 3.8+
- requests>=2.31.0
- pyyaml>=6.0.1
- click>=8.1.0
Optional Dependencies
- pytest>=7.4.0 (for testing)
- flask>=3.0.0 (for web interface)
- fastapi>=0.104.0 (for API server)
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (git checkout -b feature/AmazingFeature)
3. Commit your changes (git commit -m 'Add some AmazingFeature')
4. Push to the branch (git push origin feature/AmazingFeature)
5. Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Author
C. Emre Karataş
- Email: karatasqemre@gmail.com
- GitHub: @emredeveloper
Acknowledgments
- Built with Ollama for local LLM support
- Inspired by the need for privacy-focused AI assistants
- Thanks to all contributors and users
Project Status
- Version: 1.0.10
- Status: Beta (Production Ready)
- Last Updated: October 20, 2025
Links
- PyPI: https://pypi.org/project/mem-llm/
- GitHub: https://github.com/emredeveloper/Mem-LLM
- Issues: https://github.com/emredeveloper/Mem-LLM/issues
- Documentation: See examples/ directory
Roadmap
- Web UI dashboard
- REST API server
- Vector database integration
- Multi-language support
- Cloud backup options
- Advanced analytics
If you find this project useful, please give it a star on GitHub!
Download files
File details
Details for the file mem_llm-1.0.11.tar.gz.
File metadata
- Download URL: mem_llm-1.0.11.tar.gz
- Upload date:
- Size: 42.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f56b7802ae7568ca5504983c15022ed0567e0bb882636916fa4049da4f7b7a66 |
| MD5 | cf295ba6d1611533062aeab2faf8b90f |
| BLAKE2b-256 | 514bbfb7708d41a7dc1cb7267ca68c81ddb8af15753b2c484c99a61a986099e1 |
File details
Details for the file mem_llm-1.0.11-py3-none-any.whl.
File metadata
- Download URL: mem_llm-1.0.11-py3-none-any.whl
- Upload date:
- Size: 35.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c23d5243e3be0cf12b95e379b075aaab6ab76b75994d339d80ff83d8baad286f |
| MD5 | c37132419aa08f38ed1ecff21c79dbf6 |
| BLAKE2b-256 | 5cb29e6d72b9d1ff13a1bb26501e82e35ea3aa8330928fb779f69fa83f131a17 |