Work Memory

Local-first, cross-project development memory layer.
Work Memory is a local-first development memory layer for VS Code and MCP clients. It stores cross-repository work history in SQLite, keeps an append-only JSONL archive for auditability, and exposes that shared memory through a local MCP server.
It is designed for one-machine-first use, but the codebase is portable and the memory store can be backed up, restored, or pointed at a synced location.
What It Does
- stores turns, notes, commands, files, entities, topics, and commit references
- imports local VS Code chat history from workspaceStorage
- serves that memory back into VS Code through MCP
- keeps retrieval deterministic with metadata-first filters before text search
- includes an RLM-inspired recursive recall mode with trajectory metadata and optional JSONL logs
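The "metadata-first" ordering above can be sketched with the standard sqlite3 module. This is a minimal illustration, not the project's actual schema: the table name, columns, and LIKE-based text match here are all placeholders for work-memory's real storage and full-text search.

```python
import sqlite3

# Illustrative schema; the real work-memory column names may differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, repo_name TEXT, "
    "branch TEXT, tag TEXT, text TEXT)"
)
conn.executemany(
    "INSERT INTO events (repo_name, branch, tag, text) VALUES (?, ?, ?, ?)",
    [
        ("repo-a", "main", "architecture", "model auth and deployment memory"),
        ("repo-a", "main", "memory", "keep raw history append-only"),
        ("repo-b", "main", "architecture", "auth flows for the importer"),
    ],
)

def recall(repo_name: str, tag: str, query: str) -> list[str]:
    # Metadata filters narrow the candidate set first; the text match
    # only applies to rows that already satisfy repo and tag constraints.
    rows = conn.execute(
        "SELECT text FROM events "
        "WHERE repo_name = ? AND tag = ? AND text LIKE ?",
        (repo_name, tag, f"%{query}%"),
    )
    return [text for (text,) in rows]

print(recall("repo-a", "architecture", "auth"))
# ['model auth and deployment memory']
```

Because the metadata predicates are exact matches, results stay deterministic: the text search never sees rows outside the filtered scope.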
Quick Start
python -m venv .venv
.venv\Scripts\activate
pip install -e .[dev]
work-memory init
work-memory-install-vscode
After that, reload VS Code and trust the work-memory MCP server when prompted.
For a published install from PyPI, use the package name work-memory-mcp:
pip install work-memory-mcp
VS Code Usage
Any other VS Code window on the same machine can query the same history, as long as that window is configured to launch this MCP server. The memory data is shared machine-wide by default under %LOCALAPPDATA%/work-memory, so the main requirement is MCP client configuration, not a separate database per window.
You do not need a global install just to use it across multiple VS Code windows on this computer. A workspace or user-level MCP config can point directly at this project's virtual environment:
{
"command": "C:/Project/work-memory/.venv/Scripts/python.exe",
"args": ["-m", "work_memory.mcp_server", "--transport", "stdio"]
}
If you want a more stable command that does not depend on this repo path, install the package into a dedicated environment and point VS Code at work-memory-mcp instead. That is a convenience and portability improvement, not a functional requirement.
To install it for all VS Code windows in your user profile, run:
work-memory-install-vscode
That script:
- creates a dedicated runtime under %LOCALAPPDATA%/work-memory/runtime
- installs or upgrades this project into that runtime
- updates your user-level VS Code mcp.json
- enables chat.mcp.autoStart
This repository also includes a workspace-scoped example at .vscode/mcp.json.
If you use VS Code Settings Sync, enable MCP server synchronization and the user-level mcp.json entry will follow you across machines. You still need the runtime installed on each machine, but the server registration itself can sync.
Architecture
The MVP keeps the boundary small:
- storage: initializes SQLite, writes immutable events, and appends raw JSONL records.
- extractor: derives tags, entities, and topics from text and metadata.
- retrieval: applies metadata filters first, then runs full-text search or topic matching.
- service: exposes a clean application interface that a future MCP wrapper can call.
- cli: provides local workflows without any cloud or LLM dependency.
Core concepts:
- project: repo-level identity (repo_name, repo_path)
- session: scoped work session (session_id, branch, source)
- event: immutable stored record (turn, note, command)
- artifact: file and commit references extracted from an event
- tags, entities, topics: normalized metadata for filtering and summarization
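One way to picture these concepts in code is as a handful of immutable records. This is a sketch of the data model only: the field names beyond those listed above are illustrative, not the project's real classes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Project:
    # Repo-level identity.
    repo_name: str
    repo_path: str

@dataclass(frozen=True)
class Session:
    # Scoped work session.
    session_id: str
    branch: str
    source: str

@dataclass(frozen=True)
class Event:
    # Immutable once stored; kind is one of "turn", "note", "command".
    kind: str
    project: Project
    session: Session
    text: str
    tags: tuple[str, ...] = ()
    entities: tuple[str, ...] = ()
    topics: tuple[str, ...] = ()

event = Event(
    kind="turn",
    project=Project("repo-a", "C:/code/repo-a"),
    session=Session("repo-a-2026-03-06", "main", "cli"),
    text="Use metadata-first retrieval.",
    tags=("memory", "architecture"),
)
```

Frozen dataclasses mirror the append-only rule: once an event exists, nothing mutates it; corrections arrive as new events.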
Setup
python -m venv .venv
.venv\Scripts\activate
pip install -e .[dev]
If you are installing from PyPI instead of from source:
pip install work-memory-mcp
Initialize storage. By default this now uses a machine-level directory under %LOCALAPPDATA%/work-memory so multiple repositories can share one memory store. Use --root only when you want an isolated project-local store.
work-memory init
Override the storage root for testing or local-only use:
work-memory --root C:/Project/work-memory init
Example Commands
Store a user turn:
work-memory store-turn \
--repo-name repo-a \
--repo-path C:/code/repo-a \
--branch main \
--session-id repo-a-2026-03-06 \
--role user \
--source cli \
--text "How should we model auth and deployment memory?" \
--tag architecture \
--file src/auth.py
Store an assistant response:
work-memory store-turn \
--repo-name repo-a \
--repo-path C:/code/repo-a \
--branch main \
--session-id repo-a-2026-03-06 \
--role assistant \
--source cli \
--text "Use metadata-first retrieval and keep raw history append-only." \
--tag memory \
--tag architecture
Store a command:
work-memory store-command \
--repo-name repo-a \
--repo-path C:/code/repo-a \
--branch main \
--session-id repo-a-2026-03-06 \
--source terminal \
--command "pytest -q" \
--cwd C:/code/repo-a \
--exit-code 0 \
--file tests/test_auth.py
Store a note:
work-memory store-note \
--repo-name repo-a \
--repo-path C:/code/repo-a \
--branch main \
--session-id repo-a-2026-03-06 \
--source manual \
--title "handoff" \
--text "Auth migration is blocked on Discord thread decisions." \
--tag auth \
--tag handoff
Project-scoped recall:
work-memory recall-project \
--repo-name repo-a \
--query auth \
--tag architecture
RLM-inspired recursive recall:
work-memory recursive-recall \
--query discord \
--max-depth 2 \
--branch-limit 3 \
--log-dir ./logs
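The recursive mode can be pictured as depth-limited expansion: each recall round may spawn follow-up queries derived from what it found, bounded by --max-depth and --branch-limit. The sketch below is hypothetical; the project's real branching heuristics and trajectory logging are not shown, and the search/expand callables stand in for its actual retrieval logic.

```python
def recursive_recall(query, search, expand, max_depth=2, branch_limit=3, depth=0):
    """Depth-limited recall: search, then recurse on derived follow-up queries.

    search(query)  -> list of matching records
    expand(record) -> list of candidate follow-up queries
    Both callables are placeholders for the real retrieval and branching logic.
    """
    results = list(search(query))
    if depth >= max_depth:
        return results
    follow_ups = []
    for record in results:
        follow_ups.extend(expand(record))
    # branch_limit caps how many follow-up queries each round may explore.
    for follow_up in follow_ups[:branch_limit]:
        results.extend(
            recursive_recall(follow_up, search, expand,
                             max_depth, branch_limit, depth + 1)
        )
    return results

# Toy demonstration: records mentioning "auth" trigger a follow-up query.
memory = {"discord": ["note about auth thread"], "auth": ["auth migration blocked"]}
results = recursive_recall(
    "discord",
    search=lambda q: memory.get(q, []),
    expand=lambda r: ["auth"] if "auth" in r else [],
    max_depth=1,
)
print(results)  # ['note about auth thread', 'auth migration blocked']
```

A real implementation would also deduplicate results and record trajectory metadata per branch; this sketch only shows the depth and branch bounds.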
Cross-project recall with explicit filters:
work-memory recall-cross-project \
--query discord \
--topic auth \
--entity Discord \
--repo-name repo-a \
--repo-name repo-b
Summarize a topic:
work-memory summarize-topic --topic memory --limit 10
Discover likely VS Code Copilot Chat sources on this machine:
work-memory discover-sources --limit 20
Import structured VS Code chat sessions as reconstructed user/assistant turns:
work-memory import-vscode-sessions --limit 200
Import discovered VS Code Copilot Chat resources into the shared store:
work-memory import-vscode-copilot --limit 200
Import an exported JSONL, JSON, or text transcript:
work-memory import-path \
--path C:/exports/copilot-chat.jsonl \
--repo-name imported-chat \
--repo-path C:/imports/copilot-chat \
--session-id import-2026-03-06 \
--source exported-chat \
--format auto
Import a VS Code chat session export or copied chatSessions/*.jsonl file with explicit structured reconstruction:
work-memory import-path \
--path C:/exports/session.jsonl \
--repo-name imported-chat \
--repo-path C:/imports/session \
--session-id import-2026-03-06 \
--source exported-chat \
--format vscode-chat-session
Seed Data
python scripts/seed_sample.py
That script creates example records across multiple repositories so you can validate project-scoped and cross-project retrieval.
Storage Layout
- %LOCALAPPDATA%/work-memory/work_memory.db: shared SQLite database by default
- %LOCALAPPDATA%/work-memory/raw_events.jsonl: append-only archive by default
- data/work_memory.db: used only when --root points at a project root
- data/raw_events.jsonl: used only when --root points at a project root
Raw history is never mutated after append. Search and recall operate from SQLite while the JSONL archive remains transparent for debugging or replay.
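Because the archive is append-only, replay is just reading it back line by line. This is a sketch of the general JSONL pattern, assuming one JSON object per line; the actual record fields are whatever work-memory writes.

```python
import json
import tempfile
from pathlib import Path

def append_event(path: Path, record: dict) -> None:
    # Append-only: open in "a" mode, one JSON object per line, never rewrite.
    with path.open("a", encoding="utf-8") as handle:
        handle.write(json.dumps(record) + "\n")

def replay(path: Path):
    # Replay yields events in the exact order they were appended.
    with path.open(encoding="utf-8") as handle:
        for line in handle:
            yield json.loads(line)

archive = Path(tempfile.mkdtemp()) / "raw_events.jsonl"
append_event(archive, {"kind": "turn", "text": "first"})
append_event(archive, {"kind": "note", "text": "second"})
print([event["text"] for event in replay(archive)])  # ['first', 'second']
```

Keeping the archive plain JSONL means any tool that can read a text file can audit or rebuild the SQLite index from it.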
Current Import Reality
The importer can now do two different things for VS Code data:
- import-vscode-sessions reconstructs structured chat sessions from workspaceStorage/*/chatSessions/*.jsonl into user and assistant turns.
- import-vscode-copilot preserves Copilot chat resource files as raw imported notes for auditability and fallback coverage.
Repo inference now prefers workspace.json, which means single-folder VS Code workspaces can usually map imported session history back to the real repo path instead of a workspace hash.
MCP Server
Run the MCP server over stdio:
work-memory-mcp --transport stdio
Run it over streamable HTTP for shared local access:
work-memory-mcp --transport streamable-http --host 127.0.0.1 --port 8000 --json-response
The MCP layer exposes tools for:
- memory_status
- search_memory
- recall_project_memory
- recall_cross_project_memory
- recursive_recall_memory
- summarize_memory_topic
- store_memory_note
- discover_memory_sources
- import_vscode_sessions
- import_vscode_copilot_resources
It also exposes a memory://status resource and a memory_query_plan prompt.
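Under the hood, a stdio MCP client invokes these tools with JSON-RPC tools/call messages like the one below. The argument names here are illustrative assumptions; the authoritative argument schema is whatever the server advertises in its tool listing.

```python
import json

# Illustrative tools/call request for search_memory; the exact argument
# names come from the server's advertised tool schema, not this sketch.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_memory",
        "arguments": {"query": "auth", "repo_name": "repo-a"},
    },
}
print(json.dumps(request))
```

You normally never write these messages by hand; VS Code's MCP client builds them from the schemas the server reports at startup.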
For a local MCP client that supports stdio, point it at:
{
"command": "C:/Project/work-memory/.venv/Scripts/python.exe",
"args": ["-m", "work_memory.mcp_server", "--transport", "stdio"]
}
GitHub And Other Machines
The codebase is portable and not tied to any one machine. The stored memory, however, is local-first: the database and JSONL archive live on each machine unless you explicitly copy or sync them.
What GitHub gives you:
- backup and version history for the code
- a simple install source for other machines
- a shareable project other people can clone and run
What GitHub does not give you automatically:
- sync of %LOCALAPPDATA%/work-memory/work_memory.db
- sync of %LOCALAPPDATA%/work-memory/raw_events.jsonl
To use this on another computer:
git clone <your-repo-url>
cd work-memory
python -m venv .venv
.venv\Scripts\activate
pip install -e .[dev]
work-memory init
If you want to preserve existing history on a second machine, copy the storage directory or set WORK_MEMORY_HOME to a synced location before starting the server.
Examples:
$env:WORK_MEMORY_HOME = "D:/synced/work-memory"
work-memory init
work-memory-mcp --transport stdio
Portable backup and restore helpers are included:
work-memory-backup --destination C:/backups
work-memory-restore --backup C:/backups/work-memory-backup-20260306-120000.zip
If you want true cross-machine continuity instead of occasional backups, set WORK_MEMORY_HOME to a synced folder on each machine before starting the MCP server.
$env:WORK_MEMORY_HOME = "D:/synced/work-memory"
work-memory init
work-memory-install-vscode
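The storage-root resolution described above amounts to: prefer WORK_MEMORY_HOME, otherwise fall back to the machine-level default. This is a sketch of the documented behavior; the project's actual resolution code may differ in details.

```python
from pathlib import Path

def resolve_storage_root(env: dict[str, str]) -> Path:
    # An explicit WORK_MEMORY_HOME (for example a synced folder) wins;
    # otherwise use the machine-level default under %LOCALAPPDATA%/work-memory.
    home = env.get("WORK_MEMORY_HOME")
    if home:
        return Path(home)
    local_app_data = env.get(
        "LOCALAPPDATA", str(Path.home() / "AppData" / "Local")
    )
    return Path(local_app_data) / "work-memory"

print(resolve_storage_root({"WORK_MEMORY_HOME": "D:/synced/work-memory"}))
```

Setting the variable before the server starts matters because the root is resolved once at startup, which is why the examples above export it before launching the MCP server.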
PyPI Publishing
This repository is configured for GitHub Actions Trusted Publishing to PyPI.
Workflow file:
.github/workflows/publish-pypi.yml
Trusted Publisher values for PyPI:
- owner: joe02740
- repository: work-memory
- workflow name: publish-pypi.yml
- environment name: leave blank
Release flow:
- Add the Trusted Publisher in the PyPI project settings for work-memory-mcp.
- Push this repository state to GitHub.
- Create a GitHub release such as v0.1.0.
- GitHub Actions builds dist/* and publishes the package to PyPI.
Recommended Distribution Path
The practical path is now:
- Push the code to GitHub.
- Keep the local memory store out of git.
- Publish work-memory-mcp to PyPI from a GitHub release.
- Install from PyPI or from source on each machine.
- If you later want cross-machine history sync, add explicit export/import or a synced storage location.
That keeps the project local-first while still making it reusable by you and other people.