# llm-user-memory

A transparent memory system for LLM that automatically maintains and uses a user profile to provide personalized AI responses.
## Installation

Install this plugin in the same environment as LLM:

```bash
llm install llm-user-memory
```
## Usage

After installation, set up transparent memory integration:

```bash
llm memory install-shell
```

This adds a shell function that automatically injects your user profile into every LLM interaction. Restart your terminal or reload your shell configuration:

```bash
source ~/.bashrc  # or ~/.zshrc for zsh users
```

Now use LLM as usual; your conversations automatically include memory context:

```bash
llm "What should I work on today?"
# Response is personalized based on your stored profile

llm "I just finished the memory plugin project"
# This information is remembered for future conversations
```
The memory system works completely transparently. Your user profile is automatically:

- Injected as context in every prompt
- Updated in the background based on your conversations
- Stored locally in your LLM configuration directory
## Features

### Automatic Profile Building

The plugin automatically builds and maintains a user profile from your conversations:

```bash
# First conversation
llm "I'm a Python developer working on machine learning projects"

# Later conversations automatically know this context
llm "What's the best way to optimize this model?"
# The response takes your Python/ML background into account
```
### Transparent Operation

No special commands or flags to remember. Once installed, the memory system works automatically:

```bash
# These all include memory context automatically:
llm "Help me debug this code"
llm -m gpt-4 "Explain quantum computing"
llm -t my-template "Process this data"
```
### Profile Management

View and manage your stored profile:

```bash
# View current profile
llm memory show

# Clear profile and start fresh
llm memory clear

# Temporarily disable memory updates
llm memory pause

# Re-enable memory updates
llm memory resume
```
### Background Updates

Profile updates happen in the background after each conversation, so they never slow down your interactions:

```bash
llm "I switched from JavaScript to Rust development"
# ✓ Response generated immediately
# ✓ Profile updated in background: "Updating memory..."
```
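The respond-now, update-later pattern can be sketched with a plain daemon thread. This is a conceptual illustration, not the plugin's actual code; `generate` and `update_profile` are hypothetical stand-ins for the real model call and profile rewrite:

```python
import threading

def respond_then_update(prompt, generate, update_profile):
    """Return the model's response immediately and run the profile
    update in a daemon thread so the user never waits on it.

    `generate` and `update_profile` are placeholders for the real
    model call and profile rewrite; this only sketches the pattern.
    """
    response = generate(prompt)
    worker = threading.Thread(
        target=update_profile, args=(prompt, response), daemon=True
    )
    worker.start()
    # The worker handle is returned so a caller (or test) can join it.
    return response, worker
```

Because the thread is a daemon, an update still in flight never keeps the process alive after the CLI exits.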
### Privacy and Local Storage

All profile data is stored locally in your LLM configuration directory:

- No external services are involved
- Profile updates use the same model you're already using
- You retain full control over your data
## Memory Profile Structure

Your profile is stored as readable Markdown in `~/.config/llm/memory/profile.md`:

```markdown
# User Profile

## Personal Information
- Role: Python Developer
- Experience: 5+ years in machine learning

## Current Projects
- Working on LLM memory plugin
- Exploring transformer architectures

## Interests
- Natural language processing
- Open source development
- Performance optimization

## Preferences
- Prefers practical examples over theory
- Likes concise, actionable advice
```
## Advanced Usage

### Manual Profile Editing

You can edit your profile by hand:

```bash
# Edit the profile directly
$EDITOR "$(llm memory path)"

# Or export, edit, and reload it
llm memory show > temp_profile.md
# ...edit temp_profile.md...
llm memory load temp_profile.md
```
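Conceptually, loading an edited file just replaces the stored profile. A minimal stdlib sketch of that step (the function name and signature are illustrative, not the plugin's API):

```python
import shutil
from pathlib import Path

def load_profile(source: Path, profile_path: Path) -> None:
    """Replace the stored profile with an edited file, creating the
    memory directory if it does not exist yet.

    Mirrors what `llm memory load` does conceptually; names here
    are illustrative.
    """
    profile_path.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(source, profile_path)
```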
### Shell Integration Details

The shell integration works by creating a function that wraps the `llm` command:

```bash
llm() {
    command llm -f memory:auto "$@"
}
```

This automatically injects the `memory:auto` fragment on every call.
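In principle, `llm memory install-shell` only needs to append that function to your shell rc file once. A sketch of that idea, assuming a marker comment for idempotence (the marker text and function name are assumptions, not the plugin's actual implementation):

```python
from pathlib import Path

MARKER = "# llm-user-memory shell integration"
SNIPPET = MARKER + """
llm() {
    command llm -f memory:auto "$@"
}
"""

def install_shell(rc_path: Path) -> bool:
    """Append the wrapper function to the rc file, skipping the write
    if the marker comment shows it was already installed.

    Returns True if the snippet was written, False if it was present.
    """
    text = rc_path.read_text() if rc_path.exists() else ""
    if MARKER in text:
        return False
    with rc_path.open("a") as f:
        f.write("\n" + SNIPPET)
    return True
```

The marker check keeps repeated `install-shell` runs from stacking duplicate wrapper functions in the rc file.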
### Uninstalling Shell Integration

To remove the transparent integration:

```bash
llm memory uninstall-shell
```

Then restart your terminal. You can still use memory manually:

```bash
llm -f memory:auto "your prompt here"
```
## Development

To set up this plugin locally, first check out the code, then create a new virtual environment:

```bash
cd llm-user-memory
python -m venv venv
source venv/bin/activate
```

Install the dependencies and test dependencies:

```bash
pip install -e .
pip install -r requirements-dev.txt
```

To run the tests:

```bash
pytest
```
## How It Works

The plugin uses LLM's fragment loader system to inject profile context and monitors the conversation database to trigger background profile updates:

- **Fragment Injection**: The `memory:auto` fragment loader reads your profile and injects it as context
- **Database Monitoring**: A background process watches for new conversations in LLM's SQLite database
- **Profile Updates**: After each conversation, the same model you used is asked to update your profile
- **Transparent Operation**: Shell function integration makes this completely automatic
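The injection step amounts to prepending the profile file to your prompt. A minimal sketch of the idea (the wrapper format and function name are assumptions; the real loader goes through LLM's fragment API rather than string concatenation):

```python
from pathlib import Path

def build_prompt(user_prompt: str, profile_path: Path) -> str:
    """Prepend the stored profile, if any, as context for the model.

    Conceptual equivalent of the memory:auto fragment; the wrapper
    tags used here are illustrative, not the plugin's actual format.
    """
    profile = profile_path.read_text() if profile_path.exists() else ""
    if not profile.strip():
        # No profile yet: pass the prompt through unchanged
        return user_prompt
    return f"<user-profile>\n{profile}\n</user-profile>\n\n{user_prompt}"
```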
## Troubleshooting

### Memory not working

Check whether the shell integration is active:

```bash
type llm
# Should show: llm is a function
```

### Profile not updating

Check whether the background daemon is running:

```bash
llm memory status
```

### Reset everything

```bash
llm memory clear
llm memory uninstall-shell
llm memory install-shell
```
## Configuration

Memory behavior can be configured via environment variables:

```bash
# Disable background updates
export LLM_MEMORY_UPDATES=false

# Change update frequency (seconds)
export LLM_MEMORY_UPDATE_INTERVAL=10

# Disable the memory system entirely
export LLM_MEMORY_DISABLED=true
```
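A sketch of how these variables might be read on the Python side. Only the variable names come from the list above; the defaults shown (memory enabled, updates on, 10-second interval) and the truthy-string handling are assumptions:

```python
import os

def _flag(name: str, default: bool) -> bool:
    """Interpret common truthy strings for a boolean env var."""
    return os.environ.get(name, str(default)).lower() in ("1", "true", "yes")

def memory_config() -> dict:
    """Collect the documented environment variables.

    Defaults here are assumptions for illustration, not the plugin's
    documented defaults.
    """
    return {
        "disabled": _flag("LLM_MEMORY_DISABLED", False),
        "updates_enabled": _flag("LLM_MEMORY_UPDATES", True),
        "update_interval": int(os.environ.get("LLM_MEMORY_UPDATE_INTERVAL", "10")),
    }
```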