
Memory-enabled AI assistant with local LLM support

Project description

🧠 mem-llm

Memory-enabled AI assistant that remembers conversations using local LLMs



🎯 What is it?

A lightweight Python library that adds persistent memory to local LLM chatbots. Each user gets their own conversation history that the AI remembers across sessions.

Perfect for:

  • 💬 Customer service chatbots
  • 🤖 Personal AI assistants
  • 📝 Context-aware applications
  • 🏢 Business automation

⚡ Quick Start

1. Install

pip install mem-llm

2. Setup Ollama (one-time)

# Install: https://ollama.ai/download
ollama serve

# Download model (only 2.5GB)
ollama pull granite4:tiny-h

3. Use

from mem_llm import MemAgent

# Create agent (one line!)
agent = MemAgent()

# Set user
agent.set_user("john")

# Chat - it remembers!
agent.chat("My name is John")
agent.chat("What's my name?")  # → "Your name is John"

💡 Features

| Feature | Description |
|---|---|
| 🧠 Memory | Remembers each user's conversation history |
| 👥 Multi-user | Separate memory for each user |
| 🔒 Privacy | 100% local, no cloud/API needed |
| ⚡ Fast | Lightweight SQLite/JSON storage |
| 🎯 Simple | 3 lines of code to get started |

📖 Usage Examples

Basic Chat

from mem_llm import MemAgent

agent = MemAgent()
agent.set_user("alice")

# First conversation
agent.chat("I love pizza")

# Later...
agent.chat("What's my favorite food?")
# → "Your favorite food is pizza"

Customer Service Bot

agent = MemAgent()

# Customer 1
agent.set_user("customer_001")
agent.chat("My order #12345 is delayed")

# Customer 2 (different memory!)
agent.set_user("customer_002")
agent.chat("I want to return item #67890")

Check User Profile

# Get automatically extracted user info
profile = agent.get_user_profile()
# {'name': 'Alice', 'favorite_food': 'pizza', 'location': 'NYC'}

🔧 Configuration

JSON Memory (default - simple)

agent = MemAgent(
    model="granite4:tiny-h",
    use_sql=False,  # Use JSON files
    memory_dir="memories"
)

SQL Memory (advanced - faster)

agent = MemAgent(
    model="granite4:tiny-h",
    use_sql=True,  # Use SQLite
    memory_dir="memories.db"
)
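Both backends write to ordinary files, which makes them easy to inspect while debugging. The exact file layout and table names aren't documented, so the sketch below only lists what exists rather than assuming a schema:

import os
import sqlite3

# JSON backend: per-user memory files live under the configured directory
if os.path.isdir("memories"):
    print(os.listdir("memories"))

# SQL backend: memories.db is a regular SQLite file; list its tables
if os.path.exists("memories.db"):
    with sqlite3.connect("memories.db") as conn:
        print(conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        ).fetchall())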

Custom Settings

agent = MemAgent(
    model="llama2",  # Any Ollama model
    ollama_url="http://localhost:11434"
)

📚 API Reference

MemAgent

# Initialize
agent = MemAgent(model="granite4:tiny-h", use_sql=False)

# Set active user
agent.set_user(user_id: str, name: Optional[str] = None)

# Chat
response = agent.chat(message: str, metadata: Optional[Dict] = None) -> str

# Get profile
profile = agent.get_user_profile(user_id: Optional[str] = None) -> Dict

# System check
status = agent.check_setup() -> Dict
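Putting these calls together, a short end-to-end run might look like the following (the exact keys returned by check_setup() and get_user_profile() aren't specified here, so the sketch simply prints them):

from mem_llm import MemAgent

agent = MemAgent(model="granite4:tiny-h", use_sql=False)

# Verify Ollama and the model are available before chatting
print(agent.check_setup())

# Register a user and attach optional metadata to a message
agent.set_user("alice", name="Alice")
reply = agent.chat("I moved to NYC last month", metadata={"channel": "web"})
print(reply)

# Inspect what the agent has extracted about the active user
print(agent.get_user_profile())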

🎨 Advanced: PDF/DOCX Config

Generate config from business documents:

from mem_llm import create_config_from_document

# Create config.yaml from PDF
create_config_from_document(
    doc_path="company_info.pdf",
    output_path="config.yaml",
    company_name="Acme Corp"
)

# Use config
agent = MemAgent(config_file="config.yaml")

🔥 Models

Works with any Ollama model:

| Model | Size | Speed | Quality |
|---|---|---|---|
| granite4:tiny-h | 2.5GB | ⚡⚡⚡ | ⭐⭐ |
| llama2 | 4GB | ⚡⚡ | ⭐⭐⭐ |
| mistral | 4GB | ⚡⚡ | ⭐⭐⭐⭐ |
| llama3 | 5GB | | ⭐⭐⭐⭐⭐ |

ollama pull <model-name>
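Whichever model you pull, you select it through the same model parameter shown in the configuration section above, for example:

from mem_llm import MemAgent

# Trade a larger download for better answers by swapping the model name
agent = MemAgent(model="mistral")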

📦 Requirements

  • Python 3.8+
  • Ollama (for LLM)
  • 4GB RAM minimum
  • 5GB disk space

Dependencies (auto-installed):

  • requests >= 2.31.0
  • pyyaml >= 6.0.1

🐛 Troubleshooting

Ollama not running?

ollama serve

Model not found?

ollama pull granite4:tiny-h

Import error?

pip install mem-llm --upgrade
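If a chat call fails, it can help to confirm that the Ollama server is reachable and the model is actually pulled before digging into the library itself. A small diagnostic sketch, assuming Ollama's standard REST endpoint for listing local models:

import requests

# Ollama's default local endpoint (see the ollama_url setting above)
base_url = "http://localhost:11434"

try:
    tags = requests.get(f"{base_url}/api/tags", timeout=5).json()
    models = [m["name"] for m in tags.get("models", [])]
    print("Ollama is running. Local models:", models)
    if not any(name.startswith("granite4:tiny-h") for name in models):
        print("Model missing - run: ollama pull granite4:tiny-h")
except requests.exceptions.ConnectionError:
    print("Ollama is not reachable - run: ollama serve")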

📄 License

MIT License - feel free to use in personal and commercial projects!




🌟 Star us on GitHub!

If you find this useful, give us a ⭐ on GitHub!


Made with ❤️ by C. Emre Karataş



Download files

Download the file for your platform.

Source Distribution

mem_llm-1.0.7.tar.gz (44.1 kB)


Built Distribution


mem_llm-1.0.7-py3-none-any.whl (28.6 kB)


File details

Details for the file mem_llm-1.0.7.tar.gz.

File metadata

  • Download URL: mem_llm-1.0.7.tar.gz
  • Upload date:
  • Size: 44.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.9

File hashes

Hashes for mem_llm-1.0.7.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | b5e358233de7237e7a4ec0b09755ed34c2cf74a38e65a368d5f621b16ae0eb62 |
| MD5 | 00e68c1f4296e41a1c9100e4b04ba89f |
| BLAKE2b-256 | 8bada64a7e3e0430118029e107d9eb72275ba5e0aa24ed1d9ee5cffdb509fbd3 |


File details

Details for the file mem_llm-1.0.7-py3-none-any.whl.

File metadata

  • Download URL: mem_llm-1.0.7-py3-none-any.whl
  • Upload date:
  • Size: 28.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.9

File hashes

Hashes for mem_llm-1.0.7-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | ab3eb577787cd9cdaf3a7377c22d5e31e1049034781690a8fecba15377f643df |
| MD5 | 1c935e7d87dfce6ffbc70fed3ebbe160 |
| BLAKE2b-256 | 3c88f5bc1ca433723bdc4f8cc8b69a2b767138fcbaceb144a713cf62328ef7cc |

