
AI Memory Switcher — Save, compress, and transfer context between AI agents


🧠 AiMem — AI Memory Switcher

Save, compress, and transfer context seamlessly between AI agents. When Claude hits rate limits or you're mid-flow, AiMem bridges the gap — without losing your train of thought.

Agent A (Claude)  ──▶  Read Session  ──▶  Universal Format  ──▶  Save  ──▶  Agent B (Gemini)
  (JSONL)              (.jsonl)           (JSON)                 (~/.aimem/)  (Markdown/Prompt)

Zero server. Zero Redis. Works offline. LLM compression is opt-in.


Install

pip install aimem-cli

Or from source:

git clone https://github.com/ThangTo/AiMem.git
cd AiMem
pip install -e .

Quick Start

# 1. Save current Claude session
aimem save --from claude

# 2. Load into Gemini (auto-copied to clipboard)
aimem load sess-abc123 --to gemini

# That's it. Paste into Gemini CLI.

Commands

aimem init                         Initialize config (~/.aimem/config.json)
aimem save --from claude           Save session (interactive if multiple)
aimem save --from clipboard        Save from system clipboard
aimem save --from qwen             Save from Qwen CLI
aimem save --from gemini           Save from Gemini CLI
aimem save --from aider            Save from Aider chat history
aimem save --from continue         Save from Continue.dev
aimem load <id> --to gemini        Load session as target format
aimem list                         List saved sessions
aimem list --agents                Check which agents are available
aimem config                       Show current config
aimem config set key=value         Update config
aimem delete <id>                  Delete a saved session

Source agents (--from)

Agent       Storage                          Inject?
claude      ~/.claude/projects/*/*.jsonl     ✅
gemini      ~/.gemini/tmp/*/chats/*.json     ✅
qwen        ~/.qwen/tmp/*/logs.json          ✅
opencode    ~/.opencode/sessions/*.json      ✅
codex       ~/.codex/sessions/*.jsonl        ✅
aider       ~/.aider.chat.history.md         ❌
continue    ~/.continue/sessions.db          ❌
clipboard   System clipboard                 ❌

Target formats (--to)

Format      Best for                         Inject?
markdown    Paste into any web UI or tool    ❌
claude      Claude Code CLI                  ✅
gemini      Gemini CLI                       ✅
qwen        Qwen CLI                         ✅
opencode    OpenCode CLI                     ✅
codex       Codex CLI                        ✅
continue    Continue.dev (VS Code)           ❌
prompt      API calls / custom injection     ❌

Auto-Inject (NEW!)

Inject directly into target agent storage — no copy-paste needed:

# Inject into Gemini (appears in session list)
aimem load claude-62d520bb --to gemini --inject
gemini --resume latest

# Inject into OpenCode
aimem load claude-62d520bb --to opencode --inject
opencode -s ses_xxx

# Inject into Claude
aimem load claude-62d520bb --to claude --inject
claude --resume

Configuration

Config: ~/.aimem/config.json

# Show config
aimem config

# Enable LLM compression (opt-in)
aimem config set compression.enabled true
aimem config set compression.api_key YOUR_GROQ_KEY

# Switch compression provider
aimem config set compression.provider groq
aimem config set compression.provider gemini

# Change output format
aimem config set output.format markdown
aimem config set output.clipboard_auto true

# Enable Redis cache (optional)
aimem config set storage.redis.enabled true
aimem config set storage.redis.host localhost
aimem config set storage.redis.ttl 3600
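The dotted `key=value` syntax above maps onto nested JSON in `config.json`. A minimal sketch of how such an update could be applied (the helper name and exact behaviour are assumptions for illustration, not AiMem's actual code):

```python
import json


def set_config(cfg: dict, dotted_key: str, value) -> dict:
    """Apply a dotted-key update such as `compression.enabled true`
    to a nested config dict, creating intermediate tables as needed.
    (Illustrative sketch only, not the real AiMem implementation.)"""
    *parents, leaf = dotted_key.split(".")
    node = cfg
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value
    return cfg


# The Redis settings above become nested JSON:
cfg = {}
set_config(cfg, "storage.redis.enabled", True)
set_config(cfg, "storage.redis.ttl", 3600)
print(json.dumps(cfg, indent=2))
```

This keeps the CLI surface flat while the on-disk config stays hierarchical.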

Architecture

aimem/
├── aimem/
│   ├── __init__.py
│   ├── cli.py              # CLI interface + all commands
│   ├── models.py           # UniversalSession, Message, CompressedSession
│   ├── storage.py          # FileStorage (default) + RedisCache (opt-in)
│   ├── compression.py      # LLM compression engine (Groq / Gemini)
│   └── adapters/
│       ├── claude.py       # Read ~/.claude/projects/*/*.jsonl
│       ├── qwen.py         # Read ~/.qwen/tmp/*/logs.json (gzip)
│       ├── gemini.py       # Read ~/.config/gemini/ (JSON/JSONL)
│       ├── aider.py        # Read ~/.aider.chat.history.md
│       ├── continue_dev.py # Read ~/.continue/sessions.db (SQLite)
│       ├── clipboard.py    # Read system clipboard
│       └── output/
│           └── __init__.py # Markdown, Claude, Gemini, Qwen, Continue, Prompt
└── aimem_main.py           # Entry point (run without install)
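The adapter layer suggests a common read interface that every source implements. A minimal sketch of that shape (class and field names here are assumptions for illustration, not AiMem's actual API):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class Message:
    role: str       # "user" or "assistant"
    content: str


@dataclass
class UniversalSession:
    id: str
    source: str
    messages: list = field(default_factory=list)


class SourceAdapter(ABC):
    """Each adapter reads one agent's on-disk session format
    and emits the neutral UniversalSession shape."""

    @abstractmethod
    def read_session(self, raw: str) -> UniversalSession:
        ...


class ClipboardAdapter(SourceAdapter):
    """Simplest possible adapter: wrap raw clipboard text
    as a single user message."""

    def read_session(self, raw: str) -> UniversalSession:
        return UniversalSession(
            id="clipboard-0001",
            source="clipboard",
            messages=[Message(role="user", content=raw)],
        )
```

Adding support for a new agent then reduces to writing one more `read_session` against that agent's storage format.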

Universal Session Format

Every agent's session is converted to this neutral JSON format:

{
  "id": "claude-62d520bb",
  "source": "claude",
  "messages": [
    {"id": "...", "role": "user", "content": "...", "timestamp": "..."},
    {"id": "...", "role": "assistant", "content": "...", "timestamp": "..."}
  ],
  "context_items": [],
  "compressed": {
    "current_goal": "Build context transfer tool",
    "latest_code": [{"path": "cli.py", "content": "...", "language": "python"}],
    "current_errors": ["Error: undefined is not a function"],
    "key_decisions": ["Use file-based storage over Redis"],
    "todo_list": ["Write Continue.dev adapter", "Add compression"]
  },
  "metadata": {
    "source_agent": "claude",
    "original_session_id": "62d520bb",
    "project_path": "D:\\Project\\AiMem",
    "model": "claude-sonnet-4-6"
  },
  "created_at": "2026-04-18T08:11:00Z"
}
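Converting an agent-specific log into this neutral shape is essentially a field mapping. A rough sketch for a JSONL source (record field names are assumed from the example above, not the exact on-disk Claude schema):

```python
import json


def jsonl_to_universal(jsonl_text: str, session_id: str) -> dict:
    """Map JSONL chat records (one JSON object per line) into the
    neutral session format shown above. Illustrative only; the real
    adapter handles each agent's actual schema."""
    messages = []
    for i, line in enumerate(jsonl_text.splitlines()):
        if not line.strip():
            continue  # skip blank lines between records
        record = json.loads(line)
        messages.append({
            "id": f"msg-{i}",
            "role": record.get("role", "user"),
            "content": record.get("content", ""),
            "timestamp": record.get("timestamp", ""),
        })
    return {"id": session_id, "source": "claude", "messages": messages}
```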

LLM Compression (Opt-in)

When --compress is used (or compression.enabled is true in config):

Input: ~114k tokens (Claude session) + Groq API call
                          ↓
Output: ~2k tokens (compressed summary)
                          ↓
Save to ~/.aimem/sessions/ as JSON

Providers:

  • Groq `llama-3.1-8b-instant` — fast and cheap (~$0.001/session)
  • Gemini `gemini-2.0-flash-exp` — Google's fast model

Requires compression.api_key to be set.
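The compressor's job is to turn a full transcript into the small structured summary shown in the universal format above. A sketch of the kind of prompt it might build (the exact prompt AiMem sends to Groq/Gemini is not documented here; this only illustrates the input/output shape):

```python
def build_compression_prompt(messages: list) -> str:
    """Flatten a message list into a transcript and ask the LLM for
    a structured summary matching the `compressed` keys of the
    universal session format. Hypothetical prompt, for illustration."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    return (
        "Summarize this coding session as JSON with keys "
        "current_goal, latest_code, current_errors, key_decisions, "
        "todo_list:\n\n" + transcript
    )
```

The structured keys are what make the summary reusable across agents: a target adapter can render them as markdown, a system prompt, or injected session state.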


Workflow Example

# You're coding in Claude, session hits limit
$ aimem save --from claude
[i] Found 78 Claude sessions. Select one:
  [1] 2026-04-18T08:11 | I want to create a tool that helps transfer context...
  [2] 2026-04-17T14:46 | Research how different AI CLI agents...
Enter number (default=1): 1
[OK] Saved session: claude-62d520bb

# Switch to Gemini, paste context
$ aimem load claude-62d520bb --to gemini
## Previous Context

Continuing from previous session:

Project: `D:\Project\AiMem`
**Goal:** Build AiMem - context transfer tool

**Key Decisions:**
- File-based storage (no Redis dependency)
- Python-first for AI/ML ecosystem
- Opt-in compression (not required)

**Todo:**
- [ ] Write Continue.dev adapter
- [ ] Add session interactive selection

---
**Continue from here.**

# Copied to clipboard automatically

Roadmap

Phase     Status       Description
Phase 1   ✅ DONE      MVP — Claude, Qwen, Gemini, Clipboard, Aider, Continue adapters
Phase 2   ⚠️ PARTIAL   LLM compression — Groq works, Gemini blocked (API issue)
Phase 3   ✅ DONE      Output adapters for all agents
Phase 4   🔜 NEXT      Smart chunking + context window management
Phase 5   📋 PLANNED   VS Code extension
Phase 6   📋 PLANNED   GUI (optional TUI mode)

Design Decisions

Why File-Based by Default?

Approach       Setup time                  Portability               User friction
Redis          5–10 min                    Low (server-dependent)    High
File (AiMem)   0 min (works immediately)   High (copy config file)   Low

AiMem saves sessions as plain JSON in ~/.aimem/sessions/ — no server, no daemon, no Redis. To move a session from your laptop to a server, run `aimem save` on one machine, copy the JSON file, and `aimem load` on the other.
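Because storage is just JSON files, the whole persistence layer is a round-trip through the filesystem. A minimal sketch (helper names are assumptions, not AiMem's actual `FileStorage` API; the real root is ~/.aimem/sessions/):

```python
import json
import tempfile
from pathlib import Path


def save_session(session: dict, root: Path) -> Path:
    """Persist a session as plain JSON under the storage root."""
    root.mkdir(parents=True, exist_ok=True)
    path = root / f"{session['id']}.json"
    path.write_text(json.dumps(session, indent=2), encoding="utf-8")
    return path


def load_session(session_id: str, root: Path) -> dict:
    """Read a session back; moving the .json file between machines
    is the entire transfer story."""
    return json.loads((root / f"{session_id}.json").read_text(encoding="utf-8"))


# Round-trip demo in a temporary directory
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp) / "sessions"
    save_session({"id": "claude-demo", "source": "claude", "messages": []}, root)
    restored = load_session("claude-demo", root)
```

No daemon has to be running for either step, which is what keeps setup time at zero.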

Why Python?

  • Python is pre-installed on most developer machines
  • Native ecosystem for AI/ML tools (LLM APIs, SQLite parsing, etc.)
  • Easy to install and extend via pip (`pip install aimem-cli`)
  • Native packaging via pyproject.toml

Why Is LLM Compression Opt-in?

  • Not every user has an API key
  • Raw transfer works fine for most cases
  • Compression adds latency (1-3s per save)
  • Non-deterministic โ€” may lose nuance

License

MIT
