
jmd-mcp-sql

MCP server that exposes a SQLite database through three JMD tools — a natural language database interface for LLM-driven workflows.

What is JMD?

JMD (JSON Markdown) is a lightweight document format that combines Markdown headings with key: value pairs. It is designed as a structured data format that LLMs can read and write naturally — without JSON brackets or SQL syntax. A heading line sets the document type and target table; the body carries the data:

# Order
id: 42
status: shipped
total: 149.99

A prefix on the heading selects the operation: # for data, #? for queries, #! for schema, #- for deletes. See the JMD specification for the full format definition.
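As a sketch of how a client might dispatch on these prefixes (illustrative only — this is not the package's actual parser), the heading can be split into an operation and a table name:

```python
# Map a JMD heading line to (operation, table) -- an illustrative sketch,
# not the package's internal parser.
def parse_heading(line: str) -> tuple[str, str]:
    operations = {"#?": "query", "#!": "schema", "#-": "delete", "#": "data"}
    for prefix in ("#?", "#!", "#-", "#"):  # check two-char prefixes first
        if line.startswith(prefix):
            return operations[prefix], line[len(prefix):].strip()
    raise ValueError(f"not a JMD heading: {line!r}")
```

Checking the two-character prefixes before the bare `#` is what keeps `#? Product` from being misread as a data document.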

Tools

Each tool maps the heading prefix to a SQL operation:

  • read: # (data) → SELECT by fields; #? (query) → SELECT with filters + aggregation; #! (schema) → PRAGMA (describe table)
  • write: # (data) → INSERT OR REPLACE; #! (schema) → CREATE / ALTER TABLE
  • delete: #! (schema) → DROP TABLE; #- (delete) → DELETE WHERE

All inputs and outputs are JMD documents. The LLM speaks JMD — no SQL required.

Installation

Install from PyPI:

pip install jmd-mcp-sql

Or with uv (no manual install needed — uvx fetches it on demand):

uvx jmd-mcp-sql

Alternatively, install directly from GitHub:

pip install git+https://github.com/ostermeyer/jmd-mcp-sql.git

Configuration

The server runs as a stdio-based MCP server. Without arguments it starts with the bundled Northwind demo database. Pass a path to use your own SQLite file:

jmd-mcp-sql /path/to/your.db

The demo database ships as northwind.sql (plain text, version-controlled). On the first run without an explicit path, the server creates northwind.db from that dump automatically.

Claude Code

Add the server via CLI:

claude mcp add --transport stdio sql -- uvx jmd-mcp-sql

With a custom database:

claude mcp add --transport stdio sql -- uvx jmd-mcp-sql /path/to/your.db

This writes a .mcp.json in the project root (shareable via version control). You can also create it manually:

{
  "mcpServers": {
    "sql": {
      "command": "uvx",
      "args": ["jmd-mcp-sql"]
    }
  }
}

Claude Desktop / Cowork

Claude Cowork runs inside Claude Desktop. MCP servers configured in the Desktop config are automatically available in Cowork sessions.

Edit claude_desktop_config.json:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "sql": {
      "command": "uvx",
      "args": ["jmd-mcp-sql"]
    }
  }
}

With a custom database:

{
  "mcpServers": {
    "sql": {
      "command": "uvx",
      "args": ["jmd-mcp-sql", "/path/to/your.db"]
    }
  }
}

Restart Claude Desktop after saving the file. The server will appear as a tool in both Chat and Cowork mode.

VS Code

Create .vscode/mcp.json in the project root:

{
  "servers": {
    "sql": {
      "type": "stdio",
      "command": "uvx",
      "args": ["jmd-mcp-sql"]
    }
  }
}

Alternatively, add it to your VS Code settings.json (user or workspace):

{
  "mcp": {
    "servers": {
      "sql": {
        "type": "stdio",
        "command": "uvx",
        "args": ["jmd-mcp-sql"]
      }
    }
  }
}

JMD Document Syntax

Every document starts with a heading line that sets the document type and table name, followed by key: value pairs (one per line):

# Product          → data document   (exact lookup / insert-or-replace)
#? Product         → query document  (filter / list / aggregate)
#! Product         → schema document (describe / create / drop table)
#- Product         → delete document (delete matching records)

key: value         → string, integer, or float — inferred automatically
key: true/false    → boolean
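The inference rules above can be sketched in a few lines (a hypothetical helper, not the package's implementation): booleans first, then integer, then float, falling back to string.

```python
# Infer a JMD scalar's Python type -- a sketch of the rules above,
# not the package's actual implementation.
def infer_value(raw: str):
    raw = raw.strip()
    if raw in ("true", "false"):
        return raw == "true"
    try:
        return int(raw)
    except ValueError:
        pass
    try:
        return float(raw)
    except ValueError:
        return raw  # not numeric or boolean: keep as string
```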

Discovering the Database

To see which tables exist, read each table's schema:

read("#! Customers")

This returns a #! document with column names, JMD types, and modifiers (readonly = primary key, optional = nullable).

Typical Workflows

List all rows (small tables only):

read("#? Orders")

Filter rows — equality:

read("#? Orders\nstatus: shipped")

Filter rows — comparison:

read("#? Orders\nFreight: > 50")

Filter rows — alternation (OR):

read("#? Orders\nShipCountry: Germany|France|UK")

Filter rows — contains (case-insensitive substring):

read("#? Customers\nCompanyName: ~Corp")

Filter rows — regex pattern:

read("#? Products\nProductName: ^Chai.*")

Filter rows — negation (composes with any operator):

read("#? Orders\nShipCountry: !Germany")
read("#? Products\nProductName: !^LEGACY.*")

Look up one record:

read("# Customers\nid: 42")

Insert or replace a record:

write("# Orders\nid: 1\nstatus: pending\ntotal: 99.90")

Create a table:

write("#! Products\nid: integer readonly\nname: string\nprice: float optional")

Delete a record:

delete("#- Orders\nid: 1")

Drop a table:

delete("#! OldTable")
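All of the query documents above are plain strings: a heading line plus one `key: value` filter per line. A small (hypothetical) helper makes that composition explicit:

```python
# Compose a #? query document from a table name and filter fields.
# A convenience sketch -- the tools simply accept the resulting string.
def query_doc(table: str, **filters: str) -> str:
    lines = [f"#? {table}"]
    lines += [f"{key}: {value}" for key, value in filters.items()]
    return "\n".join(lines)
```

For example, `query_doc("Orders", ShipCountry="Germany|France|UK")` yields the alternation query shown above.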

Pagination

Always use pagination when querying tables that may contain many rows.

Use frontmatter fields before the #? heading to control pagination:

read("page-size: 50\npage: 1\n\n#? Orders")

The response carries pagination metadata as frontmatter — before the root heading:

total: 830
page: 1
pages: 17
page-size: 50

# Orders
## data[]
- OrderID: 10248
  ...

Count only (no rows returned):

read("count: true\n\n#? Orders")

Returns:

count: 830

# Orders

Use total and pages to decide whether to fetch more pages. For tables with fewer than ~20 rows, pagination is optional.
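The frontmatter-before-heading layout can be assembled like this (a hypothetical helper; a client would loop, incrementing page until it reaches the pages value from the response frontmatter):

```python
# Build a paginated query document: frontmatter lines, a blank line,
# then the #? heading. Illustrative sketch, not package API.
def paged_query(table: str, page: int, page_size: int = 50) -> str:
    return f"page-size: {page_size}\npage: {page}\n\n#? {table}"
```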

Field Projection

Use select: frontmatter to return only specific columns. This keeps responses small and context windows focused.

read("select: OrderID, EmployeeID\npage-size: 50\n\n#? Orders")

Works with both # (data) and #? (query) documents. When combined with aggregation, select: filters the result columns after the GROUP BY.

Joins

Use join: frontmatter to query across multiple tables in one call. The value is <TableName> on <JoinColumn> (INNER JOIN, equi-join on a column that exists in both tables).

read("join: Order Details on OrderID\nsum: UnitPrice * Quantity * (1 - Discount) as revenue\ngroup: EmployeeID\nsort: revenue desc\n\n#? Orders")

Multiple joins — comma-separated in a single join: value:

join: Order Details on OrderID, Employees on EmployeeID

Expression syntax — use <expression> as <alias> in aggregate functions to compute derived values across joined columns:

sum: UnitPrice * Quantity * (1 - Discount) as revenue

The alias becomes the result column name. Without as, the default alias <func>_<field> applies (e.g. sum_Freight).

Allowed in expressions: column names, numeric literals, arithmetic operators (+, -, *, /), and standard SQL functions (SUM, AVG, ROUND, …). Subqueries and SQL keywords are not permitted.
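A restriction like this implies a token-level allow-list. The sketch below shows the general shape of such a check (my own illustration of the rule, not the server's actual validator): tokenize into identifiers, numbers, arithmetic punctuation, and whitespace, and reject anything unrecognized or any SQL keyword.

```python
import re

# Allow-list check for aggregate expressions -- a sketch of the
# restriction described above, not the server's actual validator.
_TOKEN = re.compile(r"[A-Za-z_][A-Za-z0-9_]*|\d+(?:\.\d+)?|[()+\-*/,]|\s+")
_FORBIDDEN = {"SELECT", "FROM", "WHERE", "UNION", "DROP", "INSERT", "DELETE"}

def expression_ok(expr: str) -> bool:
    pos = 0
    for match in _TOKEN.finditer(expr):
        if match.start() != pos:
            return False  # unrecognized character (e.g. ';' or quotes)
        pos = match.end()
        if match.group().upper() in _FORBIDDEN:
            return False  # SQL keyword smuggled into the expression
    return pos == len(expr)
```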

Projection rules for join queries:

  • Unambiguous columns (appear in exactly one table) resolve automatically.
  • Join key columns always resolve to the main table.
  • Columns present in multiple tables (other than join keys) require explicit qualification — specify them via select: or filter on the unambiguous side.

Aggregation

Aggregation is expressed as frontmatter before the #? heading. QBE filter fields narrow rows before aggregation (SQL WHERE). The having: key filters after aggregation (SQL HAVING).

  • group: f1, f2 → GROUP BY f1, f2 (grouping keys pass through unchanged)
  • sum: field → SUM(field), result column sum_field
  • avg: field → AVG(field), result column avg_field
  • min: field → MIN(field), result column min_field
  • max: field → MAX(field), result column max_field
  • count → COUNT(*), result column count

Multiple fields per function: sum: Freight, Total → sum_Freight and sum_Total.

  • sort: sum_revenue desc, EmployeeID asc → ORDER BY with multiple columns and mixed directions
  • having: count > 5 → HAVING COUNT(*) > 5
  • having: sum_Freight > 1000, count > 2 → HAVING … AND … (comma = AND)

having: supports >, >=, <, <=, and =. sort: can reference any result column (grouping keys or aggregate aliases). page-size: and page: apply to the aggregated result set.

Example — top 3 employees by revenue:

read("group: EmployeeID\nsum: revenue\nsort: sum_revenue desc\npage-size: 3\n\n#? OrderDetails")
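An aggregation request is just frontmatter lines followed by a blank line and the #? heading, so it can be assembled from a mapping (a hypothetical helper shown for illustration; dict order is preserved in Python 3.7+):

```python
# Assemble an aggregation request document from frontmatter keys.
# Hypothetical helper illustrating the frontmatter/heading layout.
def aggregate_doc(table: str, frontmatter: dict[str, str]) -> str:
    lines = [f"{key}: {value}" for key, value in frontmatter.items()]
    return "\n".join(lines) + f"\n\n#? {table}"
```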

Error Handling

All tools return a # Error document on failure:

# Error
status: 400
code: not_found
message: No records found in Orders

Check the code field to decide how to proceed.

Specification

The JMD format is documented at jmd-spec.

License

MIT License. See LICENSE.
