
MCP server bridging specs (Linear / JIRA / GitHub Issues / Notion / Markdown / Figma) to tests, with bidirectional traceability and a spec-quality coach (AI 規格大師)

Project description

mk-spec-master logo

MK Spec Master

AI 規格大師 — specs in, scenarios out. Bidirectional traceability so you always know what's tested.

English · 繁體中文

PyPI · License: MIT · Status: Alpha

Spec-driven testing over MCP. Turn Linear / JIRA / GitHub Issues / Notion / Figma / Markdown specs into runnable scenarios, hand off to any test runner via mk-qa-master, and keep a live spec ↔ test coverage matrix.

🟢 Alpha — v0.3 complete. 15 tools + 6 adapters. Full design in docs/prd.md. Next stop: v1.0 (docs hardening, integration recipes, production readiness).


What this is

An MCP server that turns specs — Linear tickets, JIRA stories, GitHub Issues, Notion pages, Figma annotations, plain Markdown — into structured test scenarios, hands them to any test runner (via mk-qa-master or directly), and maintains a live spec ↔ test coverage matrix.

Sibling to mk-qa-master in the mk-* family of opinionated AI-QA MCPs.

What this is NOT

| It's not | Use this instead |
|----------|------------------|
| A spec editor | Linear / JIRA / Notion / Markdown — keep writing specs where you already do |
| A test runner | mk-qa-master (pytest / Jest / Cypress / Go test / Maestro) |
| An issue tracker UI | Linear / JIRA / Notion's native interface |
| A spec → code generator | GitHub Spec Kit, AWS Kiro |
| An LLM | Leverages your AI client (Claude / Cursor / Codex / Gemini) for the reasoning |

mk-spec-master sits between your spec source and your test runner — purely about the spec ↔ test link, the coverage matrix that lives on top, and the quality coach that grades both.


Tool surface (15 tools)

Grouped by role. Each group is a layer in the spec→test→coverage→coach loop.

Meta — orientation (1)

| Tool | Purpose |
|------|---------|
| `get_spec_source_info` | Active adapter + all available. Call first so the AI knows whether to expect Linear / JIRA / Notion / Figma / Markdown semantics |

Discovery — find and load specs (3)

| Tool | Purpose |
|------|---------|
| `list_specs` | Enumerate specs from the active source (filter by status / label / limit) |
| `fetch_spec` | Pull a single spec's full content by id |
| `parse_spec` | Heuristic AC extraction (en + zh-TW + zh-CN headings supported); accepts `spec_id` or `raw_text`. Returns `_meta.ac_hash` for drift detection |
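As an illustration of what heading-based AC extraction can look like, here is a hypothetical sketch; `parse_spec`'s actual heuristic and its heading list are internal to the package:

```python
import re

# Hypothetical heading patterns; the real parser's en/zh-TW/zh-CN
# heading list is internal to mk-spec-master.
AC_HEADING = re.compile(
    r"^#+\s*(acceptance criteria|驗收標準|验收标准)\s*$", re.I | re.M
)

def extract_ac(markdown: str) -> list[str]:
    """Return bullet items under an acceptance-criteria heading."""
    match = AC_HEADING.search(markdown)
    if not match:
        return []
    section = markdown[match.end():]
    # Stop at the next heading, if any.
    nxt = re.search(r"^#+\s", section, re.M)
    if nxt:
        section = section[:nxt.start()]
    return [line.lstrip("-* ").strip()
            for line in section.splitlines()
            if line.strip().startswith(("-", "*"))]

spec = """# LIN-123
## Acceptance Criteria
- Discount applies before tax
- Invalid codes show an error
## Notes
- out of scope
"""
print(extract_ac(spec))
# ['Discount applies before tax', 'Invalid codes show an error']
```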

Generation — specs → testable artifacts (2)

| Tool | Purpose |
|------|---------|
| `extract_scenarios` | AC → scenarios with happy / edge / error classification (negation-aware) and a best-effort Given/When/Then split |
| `generate_test_plan` | One-shot fetch + parse + extract → markdown plan ready to feed to `mk-qa-master.generate_test(business_context=...)` |
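The happy / edge / error split can be pictured as keyword cues over each criterion. The sketch below is hypothetical; the real classifier and its negation handling are internal:

```python
# Hypothetical cue lists; extract_scenarios' real classifier is internal.
ERROR_CUES = ("invalid", "fail", "error", "reject", "cannot", "denied")
EDGE_CUES = ("empty", "maximum", "limit", "expired", "concurrent", "boundary")

def classify(criterion: str) -> str:
    """Bucket one acceptance criterion as happy / edge / error."""
    text = criterion.lower()
    if any(cue in text for cue in ERROR_CUES):
        return "error"
    if any(cue in text for cue in EDGE_CUES):
        return "edge"
    return "happy"

print(classify("Discount applies before tax"))            # happy
print(classify("Invalid codes are rejected"))             # error
print(classify("Expired codes fall back to full price"))  # edge
```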

Coverage & drift — the traceability layer (4)

| Tool | Purpose |
|------|---------|
| `link_test_to_spec` | Record that a test verifies a spec (writes to `SPEC_PROJECT_ROOT/.mk-spec-master/index.json`). Stores title / source / url / ac_hash for the matrix and drift report |
| `auto_link_tests` | Scan a test directory for `@spec: <ID>` tags and link them automatically. Python / JS / TS / Go supported; `dry_run` previews without writing |
| `get_coverage_matrix` | Spec × test grid — answers "which specs have no tests" in one call |
| `get_drift_report` | Re-fetch each linked spec, recompute `ac_hash`, compare. Buckets into fresh / drifted / unknown / stranded |
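The exact `ac_hash` scheme is internal; conceptually it is a fingerprint of the parsed AC list, so any textual change to the criteria changes the hash. A minimal sketch of the idea (hypothetical, including the whitespace normalization and digest length):

```python
import hashlib

def ac_hash(acceptance_criteria: list[str]) -> str:
    """Fingerprint a spec's AC list. Whitespace-normalized so that
    formatting-only edits don't flag drift (an assumption here)."""
    normalized = "\n".join(" ".join(ac.split()) for ac in acceptance_criteria)
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:12]

before = ac_hash(["Discount applies before tax", "Invalid codes show an error"])
after = ac_hash(["Discount applies after tax", "Invalid codes show an error"])
print(before != after)  # True — a changed criterion yields a new hash → "drifted"
```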

Coach — quality + prioritization (3)

| Tool | Purpose |
|------|---------|
| `analyze_spec_quality` | Heuristic findings on vague language, implementation-leak AC, unclear role refs (the differentiator vs Kiro / Spec Kit) |
| `propose_spec_improvements` | Take analyze output → PM-facing markdown with concrete rewrites |
| `get_optimization_plan` | Three-layer prioritized plan: coverage gaps (L1) + spec quality (L2) + process drift (L3). The "what should we fix next" tool |

Knowledge — domain methodology (2)

| Tool | Purpose |
|------|---------|
| `init_spec_knowledge` | Create `SPEC_PROJECT_ROOT/spec-knowledge.md` from a starter template (EARS, INVEST, AC quality rules + TODO sections for your team's rules / actors / glossary). Idempotent |
| `get_spec_context` | Read the spec-knowledge file (with a built-in fallback). Optional section filter pulls one heading at a time. Call near the start of every session |

Adapter status

| `SPEC_SOURCE` | Source | Status | Auth |
|---------------|--------|--------|------|
| `markdown_local` | Local `*.md` with YAML-ish frontmatter | ✅ since 0.1.0 | none |
| `github_issues` | GitHub Issues via `gh` CLI | ✅ since 0.1.0 | `gh auth login` or `GITHUB_TOKEN` |
| `linear` | Linear API (GraphQL) | ✅ since 0.2.2 | `LINEAR_API_KEY` + `SPEC_PROJECT_KEY=<team-key>` (optional) |
| `jira` | JIRA Cloud (REST v3, ADF → markdown) | ✅ since 0.2.3 | `JIRA_BASE_URL` + `JIRA_EMAIL` + `JIRA_API_TOKEN` + `SPEC_PROJECT_KEY=<project-key>` (optional) |
| `notion` | Notion databases (REST v1, blocks → markdown) | ✅ since 0.3.0 | `NOTION_TOKEN` + `SPEC_PROJECT_KEY=<database-id>` |
| `figma` | Figma file frames (TEXT nodes + comments → markdown) | ✅ since 0.3.1 | `FIGMA_TOKEN` + `SPEC_PROJECT_KEY=<file-key>` |
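Each adapter is selected with `SPEC_SOURCE` plus its auth variables from the table above. A Linear setup, for example, might look like this in an MCP client config (the API key and team key are placeholders):

```json
{
  "mcpServers": {
    "mk-spec-master": {
      "command": "uvx",
      "args": ["mk-spec-master"],
      "env": {
        "SPEC_SOURCE": "linear",
        "LINEAR_API_KEY": "<your-linear-api-key>",
        "SPEC_PROJECT_KEY": "<team-key>",
        "SPEC_PROJECT_ROOT": "/path/to/your/project"
      }
    }
  }
}
```

`SPEC_PROJECT_ROOT` stays set regardless of adapter, since the traceability index (`.mk-spec-master/index.json`) lives there.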

Common workflows

Four patterns cover ~90% of real use. Each is one sentence to the AI client; the tools chain automatically.

1. Spec → test → run → coverage (the main loop)

"Fetch LIN-123 from Linear, extract scenarios, generate Playwright tests with mk-qa-master, run them, and update the coverage matrix."

Chains: `fetch_spec` → `parse_spec` → `extract_scenarios` → `mk-qa-master.generate_test` (×N) → `link_test_to_spec` (×N) → `mk-qa-master.run_tests` → `get_coverage_matrix`.

2. Spec health check

"Review every in-progress spec for quality issues and give me a prioritized improvement plan."

Chains: `list_specs(status="in-progress")` → `analyze_spec_quality` → `propose_spec_improvements` → `get_optimization_plan`.

3. Rebuild traceability after a refactor

"Sync the spec ↔ test index from the test source — I just renamed a bunch of files."

Chains: `auto_link_tests` → `get_coverage_matrix`. Tests need `@spec: <ID>` tags for auto-link to work; both comment-above-function and inside-docstring placements are supported.
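A tagged test that the scan can pick up might look like this (illustrative; a minimal regex stands in for the real scanner, which also handles JS / TS / Go and comment-above-function tags):

```python
import re

# Illustrative Python test source carrying a @spec tag in its docstring.
TEST_SOURCE = '''
def test_discount_applied_before_tax():
    """@spec: LIN-123 (discount applies before tax)."""
    assert apply_discount(100, "SAVE10") == 90
'''

# Hypothetical sketch of the tag scan auto_link_tests performs.
spec_ids = re.findall(r"@spec:\s*([A-Z]+-\d+)", TEST_SOURCE)
print(spec_ids)  # ['LIN-123']
```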

4. Session warmup

"Before we work on specs today: load the spec-knowledge methodology and tell me which source is active."

Chains: `get_spec_source_info` → `get_spec_context`. Cheap, and it sets the methodology + adapter context for everything that follows.


Sample output

get_optimization_plan markdown (excerpt)

# Optimization plan

_Coverage matrix: 23 spec(s) tracked, 4 untested._
_Spec quality: 23 spec(s) analyzed, 17 finding(s)._
_Drift: 2 drifted, 0 stranded, 5 without ac_hash._

## 🔴 Layer 1 — Coverage gaps

**Specs with zero tests** (ranked first — every business risk lives here):
- `LIN-204` — Apply promo code at checkout
- `LIN-211` — Refund flow

## 🟡 Layer 2 — Spec quality

### `LIN-098` — Checkout latency  (score: 80/100, findings: 4)
- 🟡 `ac-1`: Quantify (e.g., 'response within 200 ms')  (evidence: `fast`)
- 🔴 `ac-3`: Rewrite to describe what the user observes  (evidence: `redis`)

## 🔵 Layer 3 — Process drift

**Drifted** (spec changed since link — review affected tests):
- `LIN-123` — Apply discount at checkout · 4 test(s) potentially stale

get_coverage_matrix markdown (excerpt)

# Coverage matrix

- Specs tracked: 23
- Specs shown (min_tests=0): 23
- Specs with zero tests: 4

| Spec    | Title                          | Tests | Last status |
|---------|--------------------------------|------:|-------------|
| `LIN-204` | Apply promo code at checkout |     0 | —           |
| `LIN-123` | Apply discount at checkout   |     4 | passed      |

Install

uvx mk-spec-master    # or: pip install mk-spec-master

Add to your MCP client config:

{
  "mcpServers": {
    "mk-spec-master": {
      "command": "uvx",
      "args": ["mk-spec-master"],
      "env": {
        "SPEC_SOURCE": "markdown_local",
        "SPEC_PROJECT_ROOT": "/path/to/your/project"
      }
    }
  }
}

Then in Claude / Cursor / Codex / Gemini CLI:

"Use mk-spec-master to parse SPEC-001, extract scenarios, and hand them to mk-qa-master so we can generate Playwright tests."


Why this is missing from the ecosystem

| Tool | Lock-in | What we do differently |
|------|---------|------------------------|
| AWS Kiro | AWS IDE only, proprietary | MCP-native, multi-client, open source |
| Jama Connect MCP | $50k+/year, enterprise-only | SMB / indie / AI-native segment |
| GitHub Spec Kit | Spec → code; runtime test coverage out of scope | We add runtime test coverage |
| testomat.io / JIRA MCPs | Single source (JIRA), SaaS lock-in | Multi-source, file-based index, no lock-in |

See docs/prd.md §4 for the full positioning.

Walkthrough — spec → test → coverage (long form)

Given a Linear ticket LIN-123 "Apply discount at checkout" with 4 acceptance criteria:

You: Use mk-spec-master to fetch LIN-123, extract scenarios, generate
     Playwright tests with mk-qa-master, run them, and report coverage.

The AI client chains:

mk-spec-master.fetch_spec("LIN-123")
mk-spec-master.parse_spec(spec_id="LIN-123")        → 4 AC + ac_hash
mk-spec-master.extract_scenarios(...)                → 1 happy + 3 error
mk-spec-master.generate_test_plan(spec_id="LIN-123")

for scenario in plan:
  mk-qa-master.generate_test(business_context=scenario.gherkin)
  mk-spec-master.link_test_to_spec(spec_id="LIN-123", test_node_id=..., ac_hash=...)

mk-qa-master.run_tests
mk-spec-master.get_coverage_matrix

The traceability index now records all 4 links with their AC hashes. Next sprint, when the spec changes, get_drift_report flags every test whose linked spec has moved — re-run the chain only for those.


Status

| Milestone | Target | Status |
|-----------|--------|--------|
| v0.1 (MVP — markdown_local + github_issues, 7 tools) | June 2026 | ✅ Shipped |
| v0.2 (Linear, JIRA, coverage matrix, spec-quality coach, drift report) | Aug 2026 | ✅ Complete (0.2.3) |
| v0.3 (Notion, Figma, auto-link, optimization plan) | Oct 2026 | ✅ Complete (0.3.3) |
| v1.0 (production-ready, docs, integration recipes) | Q4 2026 | — |

Family

  • mk-qa-master — AI 測試大師, the test-runner sibling. Tests run via mk-qa-master; coverage tracked here.
  • More mk-* MCPs in design (mk-perf-master, mk-a11y-master).

License

MIT © 2026 Jack Kao — see LICENSE (Chinese translation for reference: LICENSE.zh-TW.md; the English version is authoritative).

Plain-English version: personal use, commercial use, modification, redistribution — all allowed. The only requirement is that you keep the copyright and license notice in your copy. No warranty: if it breaks something in production, you can't come after the author.

If this saved you time, a coffee goes a long way. ☕



Download files


Source Distribution

mk_spec_master-0.3.4.tar.gz (1.4 MB)


Built Distribution


mk_spec_master-0.3.4-py3-none-any.whl (55.9 kB)


File details

Details for the file mk_spec_master-0.3.4.tar.gz.

File metadata

  • Download URL: mk_spec_master-0.3.4.tar.gz
  • Upload date:
  • Size: 1.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for `mk_spec_master-0.3.4.tar.gz`:

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | `0584f998524c196fab1db14183e084b90a7fa1767b136c5925ca637326bd6f66` |
| MD5 | `6601cf0f89738b348c64a2c62f4c9e2f` |
| BLAKE2b-256 | `6acfd2b71c667490412cc577b002e87bd2fdb3939b6ab76e888ca04b0e004400` |


Provenance

The following attestation bundles were made for mk_spec_master-0.3.4.tar.gz:

Publisher: publish.yml on kao273183/mk-spec-master

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file mk_spec_master-0.3.4-py3-none-any.whl.

File metadata

  • Download URL: mk_spec_master-0.3.4-py3-none-any.whl
  • Upload date:
  • Size: 55.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for `mk_spec_master-0.3.4-py3-none-any.whl`:

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | `4ab5e28dadc069d7fbd4a943ff5f79db9ec33eae512a98657e99a287a3e97a51` |
| MD5 | `2483dbd85ef9dd6111c492edd3f8d900` |
| BLAKE2b-256 | `c28550be489edc119e059a4d18c28d37325de2961b1dab7c0a0099ab026d05b2` |


Provenance

The following attestation bundles were made for mk_spec_master-0.3.4-py3-none-any.whl:

Publisher: publish.yml on kao273183/mk-spec-master

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
