# 🧠 mem-llm

**Memory-enabled AI assistant that remembers conversations using local LLMs**
## 🎯 What is it?
A lightweight Python library that adds persistent memory to local LLM chatbots. Each user gets their own conversation history that the AI remembers across sessions.
Perfect for:
- 💬 Customer service chatbots
- 🤖 Personal AI assistants
- 📝 Context-aware applications
- 🏢 Business automation
## ⚡ Quick Start

### 1. Install

```bash
pip install mem-llm
```

### 2. Set up Ollama (one-time)

```bash
# Install Ollama: https://ollama.ai/download
ollama serve

# Download the model (only 2.5 GB)
ollama pull granite4:tiny-h
```
### 3. Use

```python
from mem_llm import MemAgent

# Create an agent (one line!)
agent = MemAgent()

# Set the active user
agent.set_user("john")

# Chat - it remembers!
agent.chat("My name is John")
agent.chat("What's my name?")  # → "Your name is John"
```
## 💡 Features
| Feature | Description |
|---|---|
| 🧠 Memory | Remembers each user's conversation history |
| 👥 Multi-user | Separate memory for each user |
| 🔒 Privacy | 100% local, no cloud/API needed |
| ⚡ Fast | Lightweight SQLite/JSON storage |
| 🎯 Simple | 3 lines of code to get started |
## 📖 Usage Examples

### Basic Chat

```python
from mem_llm import MemAgent

agent = MemAgent()
agent.set_user("alice")

# First conversation
agent.chat("I love pizza")

# Later...
agent.chat("What's my favorite food?")
# → "Your favorite food is pizza"
```
### Customer Service Bot

```python
agent = MemAgent()

# Customer 1
agent.set_user("customer_001")
agent.chat("My order #12345 is delayed")

# Customer 2 (different memory!)
agent.set_user("customer_002")
agent.chat("I want to return item #67890")
```
### Check User Profile

```python
# Get automatically extracted user info
profile = agent.get_user_profile()
# {'name': 'Alice', 'favorite_food': 'pizza', 'location': 'NYC'}
```
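The extracted fields are plain dictionary entries, so they can drive your own application logic; a small sketch, assuming the keys shown above are present:

```python
# Personalize a greeting from the extracted profile
profile = agent.get_user_profile()
name = profile.get("name", "there")
food = profile.get("favorite_food")

greeting = f"Welcome back, {name}!"
if food:
    greeting += f" Craving {food} again?"
print(greeting)
```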
## 🔧 Configuration

### JSON Memory (default - simple)

```python
agent = MemAgent(
    model="granite4:tiny-h",
    use_sql=False,         # use JSON files
    memory_dir="memories"
)
```

### SQL Memory (advanced - faster)

```python
agent = MemAgent(
    model="granite4:tiny-h",
    use_sql=True,          # use SQLite
    memory_dir="memories.db"
)
```

### Custom Settings

```python
agent = MemAgent(
    model="llama2",                      # any Ollama model
    ollama_url="http://localhost:11434"  # default Ollama endpoint
)
```
## 📚 API Reference

### MemAgent

```python
# Initialize
agent = MemAgent(model="granite4:tiny-h", use_sql=False)

# Set the active user
agent.set_user(user_id: str, name: Optional[str] = None)

# Chat
response = agent.chat(message: str, metadata: Optional[Dict] = None) -> str

# Get the extracted profile
profile = agent.get_user_profile(user_id: Optional[str] = None) -> Dict

# System check
status = agent.check_setup() -> Dict
```
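Putting the calls above together, a short end-to-end sketch. The shapes of the returned dicts aren't documented here, and the exact role of `metadata` is an assumption (per-message annotation), so treat the printed output as illustrative:

```python
from mem_llm import MemAgent

agent = MemAgent(model="granite4:tiny-h", use_sql=False)

# Verify Ollama and the model are reachable before chatting
status = agent.check_setup()
print(status)

# Register a user (the optional name seeds the profile)
agent.set_user("alice", name="Alice")

# Chat, attaching metadata to the stored message (assumed usage)
response = agent.chat("I moved to NYC last month", metadata={"channel": "web"})
print(response)

# Inspect what the agent has learned about the user so far
print(agent.get_user_profile("alice"))
```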
## 🎨 Advanced: PDF/DOCX Config

Generate a config from business documents:

```python
from mem_llm import create_config_from_document

# Create config.yaml from a PDF
create_config_from_document(
    doc_path="company_info.pdf",
    output_path="config.yaml",
    company_name="Acme Corp"
)

# Use the config
agent = MemAgent(config_file="config.yaml")
```
## 🔥 Models

Works with any Ollama model:

| Model | Size | Speed | Quality |
|---|---|---|---|
| `granite4:tiny-h` | 2.5 GB | ⚡⚡⚡ | ⭐⭐ |
| `llama2` | 4 GB | ⚡⚡ | ⭐⭐⭐ |
| `mistral` | 4 GB | ⚡⚡ | ⭐⭐⭐⭐ |
| `llama3` | 5 GB | ⚡ | ⭐⭐⭐⭐⭐ |

```bash
ollama pull <model-name>
```
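To try another model from the table, pull it once and pass its name to `MemAgent` (mistral below is just one example row):

```bash
ollama pull mistral
```

```python
from mem_llm import MemAgent

# Same API, different backing model
agent = MemAgent(model="mistral")
```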
## 📦 Requirements

- Python 3.8+
- Ollama (for the LLM)
- 4 GB RAM minimum
- 5 GB disk space

Dependencies (auto-installed):

- requests >= 2.31.0
- pyyaml >= 6.0.1
## 🐛 Troubleshooting

**Ollama not running?**

```bash
ollama serve
```

**Model not found?**

```bash
ollama pull granite4:tiny-h
```

**Import error?**

```bash
pip install mem-llm --upgrade
```
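You can also diagnose from Python using `check_setup()` from the API reference; a minimal sketch, assuming it returns a status dict:

```python
from mem_llm import MemAgent

agent = MemAgent()
status = agent.check_setup()  # reports whether Ollama and the model are available
print(status)
```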
## 📄 License

MIT License - feel free to use it in personal and commercial projects!
## 🔗 Links
- PyPI: https://pypi.org/project/mem-llm/
- GitHub: https://github.com/emredeveloper/Mem-LLM
- Ollama: https://ollama.ai/
## 🌟 Star us on GitHub!
If you find this useful, give us a ⭐ on GitHub!