
🧠 mem-llm

Memory-enabled AI assistant that remembers conversations using local LLMs



🎯 What is it?

A lightweight Python library that adds persistent memory to local LLM chatbots. Each user gets their own conversation history that the AI remembers across sessions.

Perfect for:

  • 💬 Customer service chatbots
  • 🤖 Personal AI assistants
  • 📝 Context-aware applications
  • 🏢 Business automation

⚡ Quick Start

1. Install

pip install mem-llm

2. Setup Ollama (one-time)

# Install: https://ollama.ai/download
ollama serve

# Download model (only 2.5GB)
ollama pull granite4:tiny-h

3. Use

from mem_llm import MemAgent

# Create agent (one line!)
agent = MemAgent()

# Set user
agent.set_user("john")

# Chat - it remembers!
agent.chat("My name is John")
agent.chat("What's my name?")  # → "Your name is John"

💡 Features

  • 🧠 Memory: remembers each user's conversation history
  • 👥 Multi-user: separate memory for each user
  • 🔒 Privacy: 100% local, no cloud/API needed
  • ⚡ Fast: lightweight SQLite/JSON storage
  • 🎯 Simple: 3 lines of code to get started

📖 Usage Examples

Basic Chat

from mem_llm import MemAgent

agent = MemAgent()
agent.set_user("alice")

# First conversation
agent.chat("I love pizza")

# Later...
agent.chat("What's my favorite food?")
# → "Your favorite food is pizza"

Customer Service Bot

agent = MemAgent()

# Customer 1
agent.set_user("customer_001")
agent.chat("My order #12345 is delayed")

# Customer 2 (different memory!)
agent.set_user("customer_002")
agent.chat("I want to return item #67890")

Check User Profile

# Get automatically extracted user info
profile = agent.get_user_profile()
# {'name': 'Alice', 'favorite_food': 'pizza', 'location': 'NYC'}
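
To read another user's profile, pass the user ID explicitly (see the API reference below):

profile = agent.get_user_profile("customer_001")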

🔧 Configuration

JSON Memory (default - simple)

agent = MemAgent(
    model="granite4:tiny-h",
    use_sql=False,  # Use JSON files
    memory_dir="memories"
)

SQL Memory (advanced - faster)

agent = MemAgent(
    model="granite4:tiny-h",
    use_sql=True,  # Use SQLite
    memory_dir="memories.db"
)

Custom Settings

agent = MemAgent(
    model="llama2",  # Any Ollama model
    ollama_url="http://localhost:11434"
)

📚 API Reference

MemAgent

# Initialize
agent = MemAgent(model="granite4:tiny-h", use_sql=False)

# Set active user
agent.set_user(user_id: str, name: Optional[str] = None)

# Chat
response = agent.chat(message: str, metadata: Optional[Dict] = None) -> str

# Get profile
profile = agent.get_user_profile(user_id: Optional[str] = None) -> Dict

# System check
status = agent.check_setup() -> Dict
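
A minimal end-to-end sketch combining the calls above; the metadata keys are illustrative, and the exact shape of the returned dicts may differ:

from mem_llm import MemAgent

agent = MemAgent(model="granite4:tiny-h", use_sql=False)

# Confirm Ollama and the model are reachable before chatting
print(agent.check_setup())

agent.set_user("alice", name="Alice")
reply = agent.chat("I moved to NYC", metadata={"channel": "web"})
print(reply)

# Inspect what the agent has extracted about the user so far
print(agent.get_user_profile("alice"))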

🎨 Advanced: PDF/DOCX Config

Generate config from business documents:

from mem_llm import create_config_from_document

# Create config.yaml from PDF
create_config_from_document(
    doc_path="company_info.pdf",
    output_path="config.yaml",
    company_name="Acme Corp"
)

# Use config
agent = MemAgent(config_file="config.yaml")

🔥 Models

Works with any Ollama model:

Model            Size    Speed   Quality
granite4:tiny-h  2.5GB   ⚡⚡⚡     ⭐⭐
llama2           4GB     ⚡⚡      ⭐⭐⭐
mistral          4GB     ⚡⚡      ⭐⭐⭐⭐
llama3           5GB             ⭐⭐⭐⭐⭐

Download the one you want first:

ollama pull <model-name>

📦 Requirements

  • Python 3.8+
  • Ollama (for LLM)
  • 4GB RAM minimum
  • 5GB disk space

Dependencies (auto-installed):

  • requests >= 2.31.0
  • pyyaml >= 6.0.1

🐛 Troubleshooting

Ollama not running?

ollama serve

Model not found?

ollama pull granite4:tiny-h

Import error?

pip install mem-llm --upgrade
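
Still stuck? check_setup() from the API reference reports the agent's view of the environment (the exact keys in the returned dict are not documented here):

from mem_llm import MemAgent

agent = MemAgent()
print(agent.check_setup())  # e.g. Ollama connectivity and model availability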

📄 License

MIT License - feel free to use in personal and commercial projects!




🌟 Star us on GitHub!

If you find this useful, give us a ⭐ on GitHub!


Made with ❤️ by C. Emre Karataş



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mem_llm-1.0.6.tar.gz (45.5 kB)


Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

mem_llm-1.0.6-py3-none-any.whl (28.3 kB)


File details

Details for the file mem_llm-1.0.6.tar.gz.

File metadata

  • Download URL: mem_llm-1.0.6.tar.gz
  • Upload date:
  • Size: 45.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.9

File hashes

Hashes for mem_llm-1.0.6.tar.gz
Algorithm Hash digest
SHA256 10c795a4c784525806ac1fd95cb24882a13bec1b51cfc090f37e54385dce51ac
MD5 d8b55652d04a60ebb7a56e2970a87738
BLAKE2b-256 f7fc56671ed42842d7306ad9eac0a54a3f2037960d3b4797c541857a95fc414b

See more details on using hashes here.
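
To verify a download against the SHA256 digest above, a small standard-library sketch (adjust the path to wherever you saved the archive):

import hashlib

expected = "10c795a4c784525806ac1fd95cb24882a13bec1b51cfc090f37e54385dce51ac"

with open("mem_llm-1.0.6.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "MISMATCH")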

File details

Details for the file mem_llm-1.0.6-py3-none-any.whl.

File metadata

  • Download URL: mem_llm-1.0.6-py3-none-any.whl
  • Upload date:
  • Size: 28.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.9

File hashes

Hashes for mem_llm-1.0.6-py3-none-any.whl
Algorithm Hash digest
SHA256 5ec1937738ee8db843e8340b57b772a41c50c828742fb25d1b548432e7440b2a
MD5 f0b24d8296d6472035625cc3d13e30ed
BLAKE2b-256 57969e1e010d331623ebee02df6c3611bf277af204b6510afe89edb3823bdc7d

See more details on using hashes here.
