
engaku

AI persistent memory layer for VS Code Copilot — keeps project context, rules, and active tasks in front of the agent at every turn through VS Code Agent Hooks.

What it does

engaku gives VS Code Copilot durable project memory stored in .ai/ Markdown files. Agent Hooks automatically inject current context into every conversation, surface active-task steps on each prompt, and remind the agent when a task plan is complete and ready for review.

Installation

pip install engaku

Or install directly from source:

pip install git+https://github.com/JorgenLiu/engaku.git

Quick Start

# Bootstrap .ai/ and .github/ structure in your repo
engaku init

After running init, VS Code Agent Hooks are active. The @coder, @planner, @reviewer, and @scanner agents are available via .github/agents/. No further manual steps are needed — hooks fire automatically on SessionStart, SubagentStart, UserPromptSubmit, Stop, and PreCompact.

What engaku init creates

.ai/
  overview.md       — project description, constraints, tech stack
  engaku.json       — model, MCP tool, and hook Python runtime config
  tasks/            — planner-managed task plans
  decisions/        — architecture decision records
.github/
  copilot-instructions.md   — global agent rules
  agents/           — coder, planner, reviewer, scanner agent definitions
  instructions/     — lessons and agent-boundaries.instructions.md stubs
  skills/           — bundled skills (systematic-debugging, verification-before-completion, etc.)
.vscode/
  settings.json     — enables VS Code custom agent hooks
  mcp.json          — MCP server configuration (chrome-devtools, context7, dbhub)
  dbhub.toml        — DBHub MCP guardrail/template config

engaku init --no-mcp skips both .vscode/mcp.json and .vscode/dbhub.toml, along with the MCP-related skills.

When MCP support is enabled, engaku init grants chrome-devtools/* to the planner agent by default (alongside context7/* and dbhub/*), so planner can run browser-backed research and verification before producing plans. engaku update does not modify an existing .ai/engaku.json — once written, your MCP tool allocations stay user-owned.
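For illustration, a per-agent tool allocation in .ai/engaku.json might look like the following sketch. The mcp_tools key name and the per-agent list layout are assumptions made for this example, not a documented schema — check the file engaku init actually generates for the exact shape:

```json
{
  "python": null,
  "mcp_tools": {
    "planner": ["chrome-devtools/*", "context7/*", "dbhub/*"],
    "coder": ["context7/*"]
  }
}
```

Because engaku update never rewrites an existing .ai/engaku.json, edits like the above remain stable across updates.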

Subcommands

Command        Purpose
init           Bootstrap .ai/ and .github/ structure and install VS Code Agent Hooks
inject         Inject .ai/overview.md + active-task context (SessionStart / PreCompact hook)
prompt-check   Detect rules/constraints in the user prompt and inject active-task steps (UserPromptSubmit hook)
task-review    Detect completed task plans and emit a handoff reminder (Stop hook)
apply          Apply .ai/engaku.json model, MCP tool, and hook Python runtime config to .github/agents/ frontmatter
update         Sync generated agents and skills from bundled templates, merge MCP server additions, and apply .ai/engaku.json config

How it works

After engaku init, five Agent Hooks fire automatically:

  • SessionStart → engaku inject: injects overview.md and the active task's remaining unchecked steps at the start of every session.
  • PreCompact → engaku inject: injects the full task body (Background, Design, File Map, and all checkbox lines) before conversation compaction, so full task context survives the compaction step.
  • SubagentStart → engaku inject: gives reviewer subagent sessions the same project and active-task context before verification begins.
  • UserPromptSubmit → engaku prompt-check: scans each user prompt for new rules or constraints and injects all remaining unchecked task steps as a system message, so the agent always knows what to do next.
  • Stop → engaku task-review: after each agent turn, checks whether every step in an in-progress task plan is ticked and, if so, emits a handoff reminder.
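The step-injection behavior above can be pictured with a small Python sketch. This is an illustrative reimplementation of the checkbox scan, not engaku's actual code; the assumption is that task plans use GitHub-style `- [ ]` checkboxes:

```python
import re
from typing import List

def unchecked_steps(task_markdown: str) -> List[str]:
    """Return the remaining unchecked checkbox lines of a task plan —
    roughly what a hook would inject as 'what to do next'."""
    return [
        line.strip()
        for line in task_markdown.splitlines()
        if re.match(r"\s*[-*] \[ \] ", line)
    ]

plan = """\
# Task: add login flow
- [x] scaffold the form
- [ ] wire the backend endpoint
- [ ] add integration tests
"""
print(unchecked_steps(plan))
# → ['- [ ] wire the backend endpoint', '- [ ] add integration tests']
```

Checked lines (`- [x]`) are skipped, so the injected context shrinks as the task progresses and empties when the plan is complete — the condition the Stop hook looks for.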

Requirements

  • Python 3.8 or newer (stdlib only, no third-party dependencies)
  • VS Code with GitHub Copilot

Python 3.8 baseline: v1.1.x continues to support Python 3.8. The future Python 3.11 migration remains deferred.

Configuration

Hook Python interpreter

By default, generated Agent Hooks call engaku <subcommand> directly, relying on engaku being on the system PATH. If engaku is only available inside a virtual environment, set the python key in .ai/engaku.json and run engaku apply (or engaku update) to rewrite all hook commands:

{
  "python": ".venv/bin/python"
}

With this set, engaku apply rewrites every Engaku-managed hook command to .venv/bin/python -m engaku <subcommand>. Relative and absolute interpreter paths are both accepted. Set to null (the default) to restore the plain engaku <subcommand> form.

If the default engaku command is already broken, run the interpreter directly to apply the change:

.venv/bin/python -m engaku apply

Global kernel and lossless compactness

Engaku policy lives in .github/copilot-instructions.md as an Engaku Global Kernel: agent ownership boundaries, Caveman-inspired lossless compactness rules, and generated artifact style in one unconditional file. .github/instructions/ remains path-specific; hooks inject dynamic state only.

Lossless compactness: preserve complete technical substance (code, paths, commands, exact error text, decisions, verification results) while removing ceremony — no "Now let me…" filler, no repeated summaries, no arbitrary answer caps.

Teams that want Caveman's exact compression modes can install it separately: npx skills add JuliusBrussee/caveman -a github-copilot. Engaku uses its own Caveman-inspired rules and does not copy upstream skill text.

MCP Servers

engaku init creates .vscode/mcp.json with three preconfigured MCP servers that give VS Code Copilot structured tool access to browser automation, live library documentation, and databases. Use engaku init --no-mcp to skip this entirely.

engaku update adds any missing server entries to an existing .vscode/mcp.json without overwriting your customizations.
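The merge semantics can be sketched in a few lines — a hypothetical stand-in for the real implementation, shown only to illustrate "add missing entries, never overwrite the user's":

```python
import json

def merge_missing_servers(user_cfg: dict, bundled_cfg: dict) -> dict:
    """Add bundled MCP server entries the user config lacks;
    entries the user already has (possibly customized) always win."""
    servers = dict(bundled_cfg.get("servers", {}))
    servers.update(user_cfg.get("servers", {}))  # user entries override bundled defaults
    merged = dict(user_cfg)
    merged["servers"] = servers
    return merged

user = {"servers": {"context7": {"type": "http", "url": "https://example.internal/mcp"}}}
bundled = {"servers": {
    "context7": {"type": "http", "url": "https://mcp.context7.com/mcp"},
    "dbhub": {"type": "stdio", "command": "npx"},
}}
print(json.dumps(merge_missing_servers(user, bundled), indent=2))
```

After the merge, the customized context7 URL is untouched and the missing dbhub entry has been filled in from the bundled defaults.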

chrome-devtools-mcp

github.com/ChromeDevTools/chrome-devtools-mcp — Browser automation and DevTools via Puppeteer. Provides screenshot capture, page navigation, element interaction, JavaScript evaluation, Lighthouse performance audits, and network request inspection.

Prerequisites: Node.js + Chrome

{
  "chrome-devtools": {
    "command": "npx",
    "args": ["-y", "chrome-devtools-mcp@latest", "--headless"]
  }
}

context7

github.com/upstash/context7 — Live, version-specific library documentation. Two tools: resolve-library-id (search by name) and query-docs (fetch current docs). HTTP remote mode — no local process needed.

Prerequisites: None (network access only). Set CONTEXT7_API_KEY env var for higher rate limits.

{
  "context7": {
    "type": "http",
    "url": "https://mcp.context7.com/mcp"
  }
}

dbhub

github.com/bytebase/dbhub — Multi-database access supporting PostgreSQL, MySQL, MariaDB, SQL Server, and SQLite. Two tools: search_objects (schema exploration) and execute_sql (query execution).

Prerequisites: Node.js. Requires a DSN connection string (VS Code prompts on first use; the prompt is password-protected so the value is not stored in plain text). engaku init generates .vscode/dbhub.toml as an editable template with readonly = true and max_rows = 1000 guardrails; secrets stay in the password-protected MCP input and never touch the TOML file.

{
  "dbhub": {
    "type": "stdio",
    "command": "npx",
    "args": ["-y", "@bytebase/dbhub@latest", "--transport", "stdio", "--config", "${workspaceFolder}/.vscode/dbhub.toml"],
    "env": {
      "DBHUB_DSN": "${input:db-dsn}"
    }
  }
}

The generated .vscode/dbhub.toml:

[[sources]]
id   = "default"
dsn  = "${DBHUB_DSN}"
lazy = true

[[tools]]
name     = "execute_sql"
source   = "default"
readonly = true
max_rows = 1000

Edit .vscode/dbhub.toml to add more sources, enable writes, or change row limits. For inline --dsn without a TOML file (manual override), replace --config … and the env block with "--dsn", "${input:db-dsn}" directly in .vscode/mcp.json.
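For example, a second, write-enabled source might look like the fragment below. The analytics id and the ANALYTICS_DSN variable name are illustrative; as with the default source, keep secrets in environment variables or the password-protected MCP input, never in the file:

```toml
[[sources]]
id   = "analytics"
dsn  = "${ANALYTICS_DSN}"
lazy = true

[[tools]]
name     = "execute_sql"
source   = "analytics"
readonly = false     # writes enabled for this source only
max_rows = 5000
```

The default source keeps its readonly = true guardrail; only queries routed to the new source may write.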

Optional MCP Servers

These servers are not generated by engaku init. Add them manually to .vscode/mcp.json when needed.

GitHub MCP

Interact with GitHub repositories, issues, and pull requests via the official GitHub MCP server.

{
  "github": {
    "type": "http",
    "url": "https://api.githubcopilot.com/mcp/"
  }
}

Authenticate with OAuth or a Personal Access Token. Use the toolSets option to restrict permissions (e.g. read-only mode). See GitHub MCP docs for available toolsets and scopes.

Firecrawl MCP

Structured web scraping and search via Firecrawl. Useful for extracting content from web pages that Context7 does not index.

{
  "inputs": [
    {
      "type": "promptString",
      "id": "firecrawl-key",
      "description": "Firecrawl API key",
      "password": true
    }
  ],
  "servers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "${input:firecrawl-key}"
      }
    }
  }
}

Requires a Firecrawl API key. Not a default dependency — add only when structured web research is needed.

Bundled Skills

skill-authoring

Helper workflow for turning a repeated multi-step method into a reusable Copilot skill. Different from VS Code's /create-skill command: this skill enforces an explicit primitive-selection gate (instruction file vs prompt file vs skill vs custom agent), draws a hard prompt-file-vs-skill boundary, and locks in an ownership rule — skills authored with this workflow stay user-owned and are not registered in Engaku's bundled template inventory unless an Engaku task explicitly ships them.

Use it when you notice the same phases, safeguards, and output format being re-explained across sessions and a one-shot prompt would not capture the adaptive logic between phases.

Credits

karpathy-guidelines skill

Adapted from forrestchang/andrej-karpathy-skills (MIT, Copyright © Forrest Chang), itself derived from Andrej Karpathy's observations.

MCP Servers

  • chrome-devtools-mcp — browser automation and DevTools (Chrome DevTools team)
  • context7 — live library documentation (Upstash)
  • dbhub — multi-database access (Bytebase)
