AI Memory Switcher - Save, compress, and transfer context between AI agents
Project description
🧠 AiMem - AI Memory Switcher
Save, compress, and transfer context seamlessly between AI agents. When Claude hits rate limits or you're mid-flow, AiMem bridges the gap without losing your train of thought.
```
Agent A (Claude) ──▶ Read Session ──▶ Universal Format ──▶ Save ──▶ Agent B (Gemini)
    (JSONL)            (.jsonl)           (JSON)        (~/.aimem/)  (Markdown/Prompt)
```
Zero server. Zero Redis. Works offline. LLM compression is opt-in.
Install
```bash
pip install aimem
```
Or from source:
```bash
git clone https://github.com/aimem/aimem.git
cd aimem
pip install -e .
```
Quick Start
```bash
# 1. Save current Claude session
aimem save --from claude

# 2. Load into Gemini (auto-copied to clipboard)
aimem load sess-abc123 --to gemini

# That's it. Paste into Gemini CLI.
```
Commands
```
aimem init                    Initialize config (~/.aimem/config.json)
aimem save --from claude      Save session (interactive if multiple)
aimem save --from clipboard   Save from system clipboard
aimem save --from qwen        Save from Qwen CLI
aimem save --from gemini      Save from Gemini CLI
aimem save --from aider       Save from Aider chat history
aimem save --from continue    Save from Continue.dev
aimem load <id> --to gemini   Load session as target format
aimem list                    List saved sessions
aimem list --agents           Check which agents are available
aimem config                  Show current config
aimem config set key=value    Update config
aimem delete <id>             Delete a saved session
```
Source agents (--from)
| Agent | Storage | Inject? |
|---|---|---|
| claude | `~/.claude/projects/*/*.jsonl` | ✅ |
| gemini | `~/.gemini/tmp/*/chats/*.json` | ✅ |
| qwen | `~/.qwen/tmp/*/logs.json` | ✅ |
| opencode | `~/.opencode/sessions/*.json` | ✅ |
| codex | `~/.codex/sessions/*.jsonl` | ✅ |
| aider | `~/.aider.chat.history.md` | ❌ |
| continue | `~/.continue/sessions.db` | ✅ |
| clipboard | System clipboard | ❌ |
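As a rough illustration of what a source adapter does, here is a minimal sketch of reading Claude Code sessions from the path in the table above. It assumes only that each `.jsonl` file holds one JSON object per line; the actual adapter in `adapters/claude.py` may parse more structure.

```python
# Hypothetical sketch of a source adapter. The glob pattern comes from
# the table above; the one-JSON-object-per-line assumption is ours,
# not a documented Claude Code schema.
import glob
import json
from pathlib import Path

def read_claude_sessions():
    """Yield (path, records) for every Claude Code session file found."""
    pattern = str(Path.home() / ".claude" / "projects" / "*" / "*.jsonl")
    for path in glob.glob(pattern):
        records = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line:  # skip blank lines
                    records.append(json.loads(line))
        yield path, records
```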
Target formats (--to)
| Format | Best for | Inject? |
|---|---|---|
| markdown | Paste into any web UI or tool | ❌ |
| claude | Claude Code CLI | ✅ |
| gemini | Gemini CLI | ✅ |
| qwen | Qwen CLI | ✅ |
| opencode | OpenCode CLI | ✅ |
| codex | Codex CLI | ✅ |
| continue | Continue.dev (VS Code) | ✅ |
| prompt | API calls / custom injection | ❌ |
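To make the `markdown` target concrete, here is a hedged sketch of a formatter producing output in the shape shown under "Workflow Example" below. It works off the Universal Session Format; the function name and exact layout are illustrative, not AiMem's actual output adapter.

```python
# Illustrative markdown target formatter, modeled on the output shown
# in "Workflow Example". Not the actual code in output/__init__.py.
def to_markdown(session: dict) -> str:
    compressed = session.get("compressed") or {}
    lines = ["## Previous Context", "Continuing from previous session:", ""]
    project = session.get("metadata", {}).get("project_path")
    if project:
        lines.append(f"Project: `{project}`")
    if compressed.get("current_goal"):
        lines.append(f"**Goal:** {compressed['current_goal']}")
    if compressed.get("key_decisions"):
        lines.append("**Key Decisions:**")
        lines += [f"- {d}" for d in compressed["key_decisions"]]
    if compressed.get("todo_list"):
        lines.append("**Todo:**")
        lines += [f"- [ ] {t}" for t in compressed["todo_list"]]
    lines += ["---", "**Continue from here.**"]
    return "\n".join(lines)
```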
Auto-Inject (NEW!)
Inject directly into the target agent's storage, with no copy-paste needed:
```bash
# Inject into Gemini (appears in session list)
aimem load claude-62d520bb --to gemini --inject
gemini --resume latest

# Inject into OpenCode
aimem load claude-62d520bb --to opencode --inject
opencode -s ses_xxx

# Inject into Claude
aimem load claude-62d520bb --to claude --inject
claude --resume
```
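Conceptually, `--inject` converts the session and writes it straight into the target agent's own storage directory, so the agent's resume command picks it up. A minimal sketch for Gemini is below; the directory layout follows the source-agents table, but the file name and chat payload here are guesses, not Gemini CLI's documented format.

```python
# Conceptual sketch of --inject for Gemini. Paths follow the
# source-agents table; the JSON payload and file name are assumptions.
import json
import time
from pathlib import Path

def inject_into_gemini(session: dict, project_hash: str) -> Path:
    """Write a converted session into Gemini CLI's chat storage."""
    chats_dir = Path.home() / ".gemini" / "tmp" / project_hash / "chats"
    chats_dir.mkdir(parents=True, exist_ok=True)
    out = chats_dir / f"aimem-{int(time.time())}.json"
    out.write_text(json.dumps(session["messages"], indent=2), encoding="utf-8")
    return out
```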
Configuration
Config: ~/.aimem/config.json
```bash
# Show config
aimem config

# Enable LLM compression (opt-in)
aimem config set compression.enabled true
aimem config set compression.api_key YOUR_GROQ_KEY

# Switch compression provider
aimem config set compression.provider groq
aimem config set compression.provider gemini

# Change output format
aimem config set output.format markdown
aimem config set output.clipboard_auto true

# Enable Redis cache (optional)
aimem config set storage.redis.enabled true
aimem config set storage.redis.host localhost
aimem config set storage.redis.ttl 3600
```
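Putting those dotted keys together, `~/.aimem/config.json` plausibly looks like the following. The shape is inferred from the `aimem config set` commands above; the defaults shown are guesses.

```json
{
  "compression": {
    "enabled": false,
    "provider": "groq",
    "api_key": ""
  },
  "output": {
    "format": "markdown",
    "clipboard_auto": true
  },
  "storage": {
    "redis": {
      "enabled": false,
      "host": "localhost",
      "ttl": 3600
    }
  }
}
```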
Architecture
```
aimem/
├── aimem/
│   ├── __init__.py
│   ├── cli.py              # CLI interface + all commands
│   ├── models.py           # UniversalSession, Message, CompressedSession
│   ├── storage.py          # FileStorage (default) + RedisCache (opt-in)
│   ├── compression.py      # LLM Compression Engine (Groq / Gemini)
│   ├── adapters/
│   │   ├── claude.py       # Read ~/.claude/projects/*/*.jsonl
│   │   ├── qwen.py         # Read ~/.qwen/tmp/*/logs.json (gzip)
│   │   ├── gemini.py       # Read ~/.config/gemini/ (JSON/JSONL)
│   │   ├── aider.py        # Read ~/.aider.chat.history.md
│   │   ├── continue_dev.py # Read ~/.continue/sessions.db (SQLite)
│   │   └── clipboard.py    # Read system clipboard
│   └── output/
│       └── __init__.py     # Markdown, Claude, Gemini, Qwen, Continue, Prompt
└── aimem_main.py           # Entry point (run without install)
```
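The layout implies a simple contract: every module in `adapters/` turns one agent's native storage into the neutral session model from `models.py`, and `output/` does the reverse. A hedged sketch of that contract (names are ours, not the project's):

```python
# Illustrative adapter contract implied by the architecture above.
# The Protocol name and method signatures are assumptions.
from typing import Protocol

class SourceAdapter(Protocol):
    name: str

    def list_sessions(self) -> list[str]:
        """Return IDs of sessions found in the agent's native storage."""
        ...

    def read(self, session_id: str) -> dict:
        """Parse one native session into the Universal Session Format."""
        ...
```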
Universal Session Format
Every agent's session is converted to this neutral JSON format:
```json
{
  "id": "claude-62d520bb",
  "source": "claude",
  "messages": [
    {"id": "...", "role": "user", "content": "...", "timestamp": "..."},
    {"id": "...", "role": "assistant", "content": "...", "timestamp": "..."}
  ],
  "context_items": [],
  "compressed": {
    "current_goal": "Build context transfer tool",
    "latest_code": [{"path": "cli.py", "content": "...", "language": "python"}],
    "current_errors": ["Error: undefined is not a function"],
    "key_decisions": ["Use file-based storage over Redis"],
    "todo_list": ["Write Continue.dev adapter", "Add compression"]
  },
  "metadata": {
    "source_agent": "claude",
    "original_session_id": "62d520bb",
    "project_path": "D:\\Project\\AiMem",
    "model": "claude-sonnet-4-6"
  },
  "created_at": "2026-04-18T08:11:00Z"
}
```
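For readers of the code, a minimal dataclass mirror of this format might look like the sketch below. The field names come straight from the JSON above; the real `models.py` may add validation or extra fields.

```python
# Minimal dataclass mirror of the Universal Session Format shown above.
# The real models.py (UniversalSession, Message) may differ in detail.
from dataclasses import dataclass, field

@dataclass
class Message:
    id: str
    role: str        # "user" or "assistant"
    content: str
    timestamp: str

@dataclass
class UniversalSession:
    id: str
    source: str
    messages: list[Message] = field(default_factory=list)
    context_items: list = field(default_factory=list)
    compressed: dict | None = None
    metadata: dict = field(default_factory=dict)
    created_at: str = ""
```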
LLM Compression (Opt-in)
When `--compress` is used (or `compression.enabled` is true in config):
```
Input:  ~114k tokens (Claude session) + Groq API call
   ↓
Output: ~2k tokens (compressed summary)
   ↓
Save to ~/.aimem/sessions/ as JSON
```
Providers:
- Groq: `llama-3.1-8b-instant`, fast and cheap (~$0.001/session)
- Gemini: `gemini-2.0-flash-exp`, Google's fast model

Requires `compression.api_key` to be set.
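As a sketch of what the compression step could look like with Groq's OpenAI-style Python SDK: the prompt asks for exactly the keys of the `compressed` block in the Universal Session Format. The actual prompts and parsing in `compression.py` may differ.

```python
# Hedged sketch of opt-in compression via Groq (pip install groq).
# The system prompt mirrors the "compressed" keys of the Universal
# Session Format; it is not AiMem's actual prompt.
from groq import Groq

def compress(messages: list[dict], api_key: str) -> str:
    client = Groq(api_key=api_key)
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    response = client.chat.completions.create(
        model="llama-3.1-8b-instant",
        messages=[
            {"role": "system", "content": (
                "Summarize this coding session as JSON with keys: "
                "current_goal, latest_code, current_errors, "
                "key_decisions, todo_list.")},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content  # JSON summary text
```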
Workflow Example
```
# You're coding in Claude, session hits limit
$ aimem save --from claude
[i] Found 78 Claude sessions. Select one:
  [1] 2026-04-18T08:11 | I want to create a tool that helps transfer context...
  [2] 2026-04-17T14:46 | Research how different AI CLI agents...
Enter number (default=1): 1
[OK] Saved session: claude-62d520bb

# Switch to Gemini, paste context
$ aimem load claude-62d520bb --to gemini

## Previous Context
Continuing from previous session:

Project: `D:\Project\AiMem`
**Goal:** Build AiMem - context transfer tool
**Key Decisions:**
- File-based storage (no Redis dependency)
- Python-first for AI/ML ecosystem
- Opt-in compression (not required)
**Todo:**
- [ ] Write Continue.dev adapter
- [ ] Add session interactive selection
---
**Continue from here.**

# Copied to clipboard automatically
```
Roadmap
| Phase | Status | Description |
|---|---|---|
| Phase 1 | ✅ DONE | MVP: Claude, Qwen, Gemini, Clipboard, Aider, Continue adapters |
| Phase 2 | ⚠️ PARTIAL | LLM Compression: Groq works, Gemini blocked (API issue) |
| Phase 3 | ✅ DONE | Output adapters for all agents |
| Phase 4 | NEXT | Smart chunking + context window management |
| Phase 5 | PLANNED | VS Code Extension |
| Phase 6 | PLANNED | GUI (optional TUI mode) |
Design Decisions
Why File-Based by Default?
| Approach | Setup Time | Portability | User Friction |
|---|---|---|---|
| Redis | 5-10 min | Low (server-dependent) | High |
| File (AiMem) | 0 min (works immediately) | High (copy config file) | Low |
AiMem saves sessions as plain JSON in `~/.aimem/sessions/`: no server, no daemon, no Redis. Transfer a session from your laptop to a server with one `aimem save` plus one `aimem load`.
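The whole storage layer can be this small. A sketch under the stated design (one plain JSON file per session; function names are illustrative, not `storage.py`'s API):

```python
# Sketch of file-based session storage: one JSON file per session
# under ~/.aimem/sessions/. Function names are illustrative.
import json
from pathlib import Path

SESSIONS_DIR = Path.home() / ".aimem" / "sessions"

def save_session(session: dict) -> Path:
    SESSIONS_DIR.mkdir(parents=True, exist_ok=True)
    path = SESSIONS_DIR / f"{session['id']}.json"
    path.write_text(json.dumps(session, indent=2), encoding="utf-8")
    return path

def load_session(session_id: str) -> dict:
    path = SESSIONS_DIR / f"{session_id}.json"
    return json.loads(path.read_text(encoding="utf-8"))
```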
Why Python?
- Python is pre-installed on most developer machines
- Native ecosystem for AI/ML tools (LLM APIs, SQLite parsing, etc.)
- Easy to extend with `pip install aimem`
- Native packaging via `pyproject.toml`
Why is LLM Compression Opt-in?
- Not every user has an API key
- Raw transfer works fine for most cases
- Compression adds latency (1-3s per save)
- Non-deterministic: it may lose nuance
License
MIT
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file aimem_cli-0.1.2.tar.gz.
File metadata
- Download URL: aimem_cli-0.1.2.tar.gz
- Upload date:
- Size: 47.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `f5b245a3368473461a8642add5aff23cbc90091b6d00db564c03411c1a158c03` |
| MD5 | `f382d29183c3f013d214ec1748a1cd33` |
| BLAKE2b-256 | `d25fb242a2729a141e6adbb3ba5c90e7d69ac71c83974e0e89bf6dda5ef5fec6` |
File details
Details for the file aimem_cli-0.1.2-py3-none-any.whl.
File metadata
- Download URL: aimem_cli-0.1.2-py3-none-any.whl
- Upload date:
- Size: 54.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `bc7b2187db3b700d1780fda51041c4d42cb83b2add0a9b8c118f0dc4d4520014` |
| MD5 | `73e3cd9e95f0557ae8fd30d9d810bd80` |
| BLAKE2b-256 | `d46b4d50f46ec0ba130c5d56b931ab0642ce22ffe2a485352774c3da8460268e` |