
Memory-enabled AI assistant with local LLM support

Project description

🧠 mem-llm

Memory-enabled AI assistant that remembers conversations using local LLMs



🎯 What is it?

A lightweight Python library that adds persistent memory to local LLM chatbots. Each user gets their own conversation history that the AI remembers across sessions.

Perfect for:

  • 💬 Customer service chatbots
  • 🤖 Personal AI assistants
  • 📝 Context-aware applications
  • 🏢 Business automation

⚡ Quick Start

1. Install

pip install mem-llm

2. Setup Ollama (one-time)

# Install: https://ollama.ai/download
ollama serve

# Download model (only 2.5GB)
ollama pull granite4:tiny-h

3. Use

from mem_llm import MemAgent

# Create agent (one line!)
agent = MemAgent()

# Set user
agent.set_user("john")

# Chat - it remembers!
agent.chat("My name is John")
agent.chat("What's my name?")  # → "Your name is John"

💡 Features

Feature        Description
🧠 Memory       Remembers each user's conversation history
👥 Multi-user   Separate memory for each user
🔒 Privacy      100% local, no cloud or API needed
⚡ Fast         Lightweight SQLite/JSON storage
🎯 Simple       Three lines of code to get started
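
The per-user memory model above can be sketched in a few lines of plain Python. This is a hypothetical illustration of how separate JSON histories per user might work, not mem-llm's actual implementation (the `JsonMemory` class and its methods are invented for this sketch):

```python
import json
from pathlib import Path

class JsonMemory:
    """Toy per-user conversation store (hypothetical, not mem-llm's code)."""

    def __init__(self, memory_dir="memories"):
        self.dir = Path(memory_dir)
        self.dir.mkdir(exist_ok=True)

    def _file(self, user_id):
        # One JSON file per user keeps histories fully isolated
        return self.dir / f"{user_id}.json"

    def append(self, user_id, role, content):
        history = self.load(user_id)
        history.append({"role": role, "content": content})
        self._file(user_id).write_text(json.dumps(history))

    def load(self, user_id):
        f = self._file(user_id)
        return json.loads(f.read_text()) if f.exists() else []

mem = JsonMemory()
mem.append("john", "user", "My name is John")
mem.append("alice", "user", "I love pizza")
print(mem.load("john")[0]["content"])  # john's history never sees alice's
```

Because each history lives in its own file, switching users is just switching files, which is what makes the multi-user separation cheap.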

📖 Usage Examples

Basic Chat

from mem_llm import MemAgent

agent = MemAgent()
agent.set_user("alice")

# First conversation
agent.chat("I love pizza")

# Later...
agent.chat("What's my favorite food?")
# → "Your favorite food is pizza"

Customer Service Bot

agent = MemAgent()

# Customer 1
agent.set_user("customer_001")
agent.chat("My order #12345 is delayed")

# Customer 2 (different memory!)
agent.set_user("customer_002")
agent.chat("I want to return item #67890")

Check User Profile

# Get automatically extracted user info
profile = agent.get_user_profile()
# {'name': 'Alice', 'favorite_food': 'pizza', 'location': 'NYC'}
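
As a rough illustration of how facts like these could be pulled from a history, here is a naive rule-based extractor. This is purely hypothetical: mem-llm presumably uses the LLM itself for extraction, and the `extract_profile` function and its patterns are invented for this sketch:

```python
import re

# Hypothetical rule-based fact extraction (not mem-llm's actual method)
PATTERNS = {
    "name": re.compile(r"\bmy name is (\w+)", re.I),
    "favorite_food": re.compile(r"\bi love (\w+)", re.I),
}

def extract_profile(messages):
    """Scan user messages and collect the last match for each fact."""
    profile = {}
    for msg in messages:
        for key, pattern in PATTERNS.items():
            m = pattern.search(msg)
            if m:
                profile[key] = m.group(1)
    return profile

print(extract_profile(["My name is Alice", "I love pizza"]))
# {'name': 'Alice', 'favorite_food': 'pizza'}
```

An LLM-driven extractor handles far more phrasings than fixed regexes, but the shape of the output — a flat dict of user facts — is the same.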

🔧 Configuration

JSON Memory (default - simple)

agent = MemAgent(
    model="granite4:tiny-h",
    use_sql=False,  # Use JSON files
    memory_dir="memories"
)

SQL Memory (advanced - faster)

agent = MemAgent(
    model="granite4:tiny-h",
    use_sql=True,  # Use SQLite
    memory_dir="memories.db"
)
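
To see why SQLite scales better than per-user JSON files, here is a minimal sketch of what a SQL-backed message store could look like. The schema and helper functions are hypothetical; mem-llm's actual `use_sql=True` schema may differ:

```python
import sqlite3

# Hypothetical schema; mem-llm would open memories.db instead of :memory:
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS messages (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        user_id TEXT NOT NULL,
        role    TEXT NOT NULL,
        content TEXT NOT NULL
    )
""")

def append(user_id, role, content):
    conn.execute(
        "INSERT INTO messages (user_id, role, content) VALUES (?, ?, ?)",
        (user_id, role, content),
    )

def history(user_id):
    # An index on user_id would make this lookup fast at scale
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE user_id = ? ORDER BY id",
        (user_id,),
    )
    return rows.fetchall()

append("customer_001", "user", "My order #12345 is delayed")
append("customer_002", "user", "I want to return item #67890")
print(history("customer_001"))
```

A single indexed table avoids opening and re-parsing a JSON file on every message, which is the usual reason to prefer the SQL backend for many users.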

Custom Settings

agent = MemAgent(
    model="llama2",  # Any Ollama model
    ollama_url="http://localhost:11434"
)
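
Before pointing the agent at a custom `ollama_url`, it can help to verify the server is actually reachable. The sketch below probes Ollama's `/api/tags` endpoint (which lists installed models) using only the standard library; the `ollama_available` helper is an assumption of this document, not part of mem-llm:

```python
import json
import urllib.error
import urllib.request

def ollama_available(url="http://localhost:11434", timeout=2):
    """Return True if an Ollama server answers at `url`.

    Probes /api/tags, which returns {"models": [...]} on a live server.
    """
    try:
        with urllib.request.urlopen(f"{url}/api/tags", timeout=timeout) as resp:
            json.load(resp)  # raises ValueError if the reply isn't JSON
            return True
    except (urllib.error.URLError, ValueError, OSError):
        return False

print(ollama_available())  # False unless Ollama is running locally
```

mem-llm's own `agent.check_setup()` (see the API reference below) is the library's built-in way to perform this kind of check.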

📚 API Reference

MemAgent

# Initialize
agent = MemAgent(model="granite4:tiny-h", use_sql=False)

# Set active user
agent.set_user(user_id: str, name: Optional[str] = None)

# Chat
response = agent.chat(message: str, metadata: Optional[Dict] = None) -> str

# Get profile
profile = agent.get_user_profile(user_id: Optional[str] = None) -> Dict

# System check
status = agent.check_setup() -> Dict

🎨 Advanced: PDF/DOCX Config

Generate config from business documents:

from mem_llm import create_config_from_document

# Create config.yaml from PDF
create_config_from_document(
    doc_path="company_info.pdf",
    output_path="config.yaml",
    company_name="Acme Corp"
)

# Use config
agent = MemAgent(config_file="config.yaml")

🔥 Models

Works with any Ollama model:

Model            Size   Speed  Quality
granite4:tiny-h  2.5GB  ⚡⚡⚡    ⭐⭐
llama2           4GB    ⚡⚡     ⭐⭐⭐
mistral          4GB    ⚡⚡     ⭐⭐⭐⭐
llama3           5GB    –      ⭐⭐⭐⭐⭐

Pull any model with:

ollama pull <model-name>

📦 Requirements

  • Python 3.8+
  • Ollama (for LLM)
  • 4GB RAM minimum
  • 5GB disk space

Dependencies (auto-installed):

  • requests >= 2.31.0
  • pyyaml >= 6.0.1

🐛 Troubleshooting

Ollama not running?

ollama serve

Model not found?

ollama pull granite4:tiny-h

Import error?

pip install mem-llm --upgrade

📄 License

MIT License - feel free to use in personal and commercial projects!


🌟 Star us on GitHub!

If you find this useful, give us a ⭐ on GitHub!


Made with ❤️ by C. Emre Karataş


Download files

Download the file for your platform.

Source Distribution

mem_llm-1.0.4.tar.gz (44.5 kB)

Uploaded Source

Built Distribution


mem_llm-1.0.4-py3-none-any.whl (27.3 kB)

Uploaded Python 3

File details

Details for the file mem_llm-1.0.4.tar.gz.

File metadata

  • Download URL: mem_llm-1.0.4.tar.gz
  • Upload date:
  • Size: 44.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.9

File hashes

Hashes for mem_llm-1.0.4.tar.gz
Algorithm Hash digest
SHA256 940447b803093afa0cb5e6ee38127b8ca53109848d42f9755164deea7975c6a5
MD5 80d1b6987876aa65f70eb93380aeaeeb
BLAKE2b-256 d6536641162693b7ea4d2182ccab950eb0c50258a1215b2c5a556f48f93cef2c


File details

Details for the file mem_llm-1.0.4-py3-none-any.whl.

File metadata

  • Download URL: mem_llm-1.0.4-py3-none-any.whl
  • Upload date:
  • Size: 27.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.9

File hashes

Hashes for mem_llm-1.0.4-py3-none-any.whl
Algorithm Hash digest
SHA256 1e699a0d7d9ed16a7fd0407df3c36b1b7adcce10d582cae6dad0baf9295b7377
MD5 e8f0d5888e79c64c00b2ca97104a399a
BLAKE2b-256 26a87be8c911963b3634ae094803a8b6723235666503a1fb22a879412d500c78

