
MCP server for querying and modifying JSON, YAML, and TOML files using yq

Project description


mcp-json-yaml-toml

A token-efficient, schema-aware MCP server for safely reading and modifying JSON, YAML, and TOML files




Stop AI coding tools from breaking your data files. No more grep guesswork, hallucinated fields, or non-schema-compliant data added to files. This MCP server gives AI assistants a strict, round-trip safe interface for working with structured data.

The Problem

AI coding tools often destroy structured data files:

  • They grep through huge JSON, YAML, and TOML files (such as JSON logs or AI transcript files) and guess at keys.
  • They hallucinate fields that never existed.
  • They use sed and regex that leave files in invalid states.
  • They break YAML indentation and TOML syntax.
  • They can't validate changes before writing.

The Solution

mcp-json-yaml-toml provides AI assistants with proper tools for structured data:

  • Token-efficient: Extract exactly what you need without loading entire files.
  • Schema validation: Enforce correctness using SchemaStore.org or custom schemas.
  • Safe modifications: Enforced validation on write; preserve comments and formatting.
  • Multi-format: JSON, YAML, and TOML through a unified interface.
  • Directive-based detection: Support for # yaml-language-server, #:schema, and $schema keys in all formats.
  • Constraint-based guided generation: Native LMQL support for proactive validation of partial inputs.
  • Local-First: All processing happens locally. No data ever leaves your machine.
  • Transparent JIT Assets: The server auto-downloads yq if missing and fetches missing schemas from SchemaStore.org for local caching.

[!NOTE]

JSONC Support: Files with .jsonc extension (JSON with Comments) are fully supported for reading, querying, and schema validation. However, write operations will strip comments due to library limitations.
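For example, a settings-style `.jsonc` file like the one below can be read, queried, and validated as-is, but its comments would be dropped if the server writes it back (the keys shown are illustrative):

```jsonc
{
  // Editor settings — comments survive reads and queries, not writes
  "editor.tabSize": 2,
  "files.autoSave": "afterDelay"
}
```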


Getting Started

Prerequisites

  • Python ≥ 3.11 installed.
  • An MCP-compatible client (Claude Code, Cursor, Windsurf, Gemini 2.0, n8n, etc.).

Installation

The server uses uvx for automatic dependency management and zero-config execution.

AI Agents & CLI Tools

```bash
uvx mcp-json-yaml-toml
```

Claude Code (CLI)

```bash
claude mcp add --scope user mcp-json-yaml-toml -- uvx mcp-json-yaml-toml
```

Other MCP Clients

Add this to your client's MCP configuration:

```json
{
  "mcpServers": {
    "json-yaml-toml": {
      "command": "uvx",
      "args": ["mcp-json-yaml-toml"]
    }
  }
}
```

[!TIP] See docs/clients.md for detailed setup guides for Cursor, VS Code, and more.


Schema Discovery & Recognition

The server automatically identifies the correct JSON schema for your files using multiple strategies:

  1. Directives: Recognizes # yaml-language-server: $schema=... and #:schema ... directives.
  2. In-File Keys: Detects $schema keys in JSON and YAML (also supports quoted "$schema" in TOML).
  3. Local IDE Config: Discovers schemas from VS Code/Cursor extension settings and caches.
  4. SchemaStore.org: Performs glob-based auto-detection against thousands of known formats.
  5. Manual Association: Use the data_schema tool to bind a file to a specific schema URL or name.
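For example, a YAML file can pin its own schema with the `yaml-language-server` directive recognized by strategy 1 (the schema URL below is one real SchemaStore entry, chosen for illustration):

```yaml
# yaml-language-server: $schema=https://json.schemastore.org/github-workflow.json
name: CI
on:
  push:
    branches: [main]
```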

LMQL & Guided Generation

This server provides native support for LMQL (Language Model Query Language) to enable Guided Generation. This allows AI agents to validate partial inputs (e.g., path expressions) incrementally before execution.

  • Incremental Validation: Check partial inputs (e.g., .data.us) and get the remaining pattern needed.
  • Improved Reliability: Eliminate syntax errors by guiding the LLM toward valid tool inputs.
  • Rich Feedback: Get suggestions and detailed error messages for common mistakes.
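The idea behind incremental validation can be sketched in a few lines of Python. This is an illustration only, not the server's actual implementation; the pattern and function name are hypothetical:

```python
import re

# Hypothetical pattern for a simple dotted yq path such as ".data.users"
PATH_PATTERN = re.compile(r"\.(\w+)(\.\w+)*")

def validate_partial(path: str) -> dict:
    """Check a (possibly partial) path and describe what is still needed."""
    if PATH_PATTERN.fullmatch(path):
        return {"valid": True, "remaining": None}
    # A trailing dot (or empty input) means another key segment is expected
    if path == "" or path.endswith("."):
        return {"valid": False, "remaining": "a key segment matching \\w+"}
    return {"valid": False, "remaining": "unrecognized input"}

print(validate_partial(".data.users"))  # complete, valid path
print(validate_partial(".data."))       # partial: a key segment is still needed
```

A real constraint engine tracks the full yq grammar rather than one regex, but the contract is the same: given a partial input, report validity plus the remaining pattern.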

[!TIP] See the Deep Dive: LMQL Constraints for detailed usage examples.


Available Tools

| Tool | Description |
| --- | --- |
| `data` | Get, set, or delete values at specific paths |
| `data_query` | Advanced yq/jq expressions for transformations |
| `data_schema` | Manage schemas and validate files |
| `data_convert` | Convert between JSON, YAML, and TOML |
| `data_merge` | Deep merge structured data files |
| `constraint_validate` | Validate inputs against LMQL constraints |
| `constraint_list` | List available generation constraints |

[!NOTE] Conversion TO TOML is not supported due to yq's internal encoder limitations for complex structures.
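For reference, deep-merge semantics of the kind `data_merge` provides are typically along these lines. This is a sketch of the general technique, not the server's exact rules (how lists and conflicts are handled is an assumption here):

```python
def deep_merge(base: dict, overlay: dict) -> dict:
    """Recursively merge overlay into base; overlay wins on scalar conflicts."""
    merged = dict(base)
    for key, value in overlay.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into nested maps
        else:
            merged[key] = value  # scalars, lists, and new keys are replaced outright
    return merged

base = {"server": {"host": "localhost", "port": 8080}, "debug": False}
overlay = {"server": {"port": 9090}, "debug": True}
print(deep_merge(base, overlay))
# {'server': {'host': 'localhost', 'port': 9090}, 'debug': True}
```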


Development

Setup

```bash
git clone https://github.com/bitflight-devops/mcp-json-yaml-toml.git
cd mcp-json-yaml-toml
uv sync
```

Testing

```bash
# Run all tests (coverage included)
uv run pytest
```

Code Quality

The project uses `prek` (a Rust-based pre-commit tool) for unified linting and formatting. AI agents MUST use the scoped verification command:

```bash
# Recommended: Verify only touched files
uv run prek run --files <file edited>
```

[!IMPORTANT] Avoid `--all-files` during feature development to keep PR diffs clean and preserve git history.




Project Structure

```mermaid
graph TD
    Repo[mcp-json-yaml-toml]
    Repo --> Packages[packages/mcp_json_yaml_toml]
    Repo --> Github[.github]
    Repo --> Docs[docs]
    Repo --> Config[pyproject.toml]

    subgraph "Core Logic"
        Packages --> Server[server.py<br/>MCP Server & Tools]
        Packages --> Schemas[schemas.py<br/>Schema Validation]
        Packages --> Constraints[lmql_constraints.py<br/>LMQL Constraints]
        Packages --> YQ[yq_wrapper.py<br/>Binary Manager]
        Packages --> YAML[yaml_optimizer.py<br/>YAML Anchors]
        Packages --> TOML[toml_utils.py<br/>TOML Utils]
        Packages --> Conf[config.py<br/>Config Manager]
    end

    style Packages fill:#f9f,stroke:#333,stroke-width:2px
    style Repo fill:#eee,stroke:#333,stroke-width:4px
```

Token Efficiency Experiment

Two identical Claude Code sub-agents were given the same task: read ~/.claude.json and report every MCP server listed, including command, args, and env vars.

Setup

  • Agent A — standard prompt, used the built-in Read tool
  • Agent B — same prompt with one line appended: You must use the mcp__json-yaml-toml for all file interactions.

Both agents used the sonnet model.

Prompts

Agent A prompt:

```
Read the file ~/.claude.json and report back:
1. Every MCP server listed in the mcpServers section
2. For each server: the command, args, and any env vars configured

Just report the raw findings. Do not summarize or interpret.
```

Agent B prompt:

```
Read the file ~/.claude.json and report back:
1. Every MCP server listed in the mcpServers section
2. For each server: the command, args, and any env vars configured

You must use the mcp__json-yaml-toml for all file interactions.

Just report the raw findings. Do not summarize or interpret.
```

Results

Both agents returned identical findings (8 MCP servers with correct configs).

| Metric | Agent A (Read tool) | Agent B (mcp-json-yaml-toml) |
| --- | --- | --- |
| Total tokens | 37,119 | 28,734 |
| Tool uses | 4 | 2 |
| Duration | 29.3s | 12.7s |

Agent B used 22.6% fewer tokens and completed in 43% of the time with half the tool calls.

Why

The Read tool loads the entire file into context. ~/.claude.json is a large file — the agent had to consume all of it to find the mcpServers section. The MCP server's data_query tool extracted just the mcpServers section directly, keeping the context window small.
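The effect is easy to reproduce: serializing one section yields far fewer characters (and hence tokens) than serializing the whole file. A minimal sketch with a hypothetical config file (the key names and noise section are invented for illustration):

```python
import json

# A hypothetical config: one small section of interest buried in a lot of noise
config = {
    "mcpServers": {
        "json-yaml-toml": {"command": "uvx", "args": ["mcp-json-yaml-toml"]}
    },
    "history": ["long transcript line %d" % i for i in range(500)],
}

whole_file = json.dumps(config)                  # what a plain Read tool must load
just_servers = json.dumps(config["mcpServers"])  # what a targeted query returns

print(len(whole_file), len(just_servers))
```

The targeted extract is a small fraction of the full file, and the gap grows with file size.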


Built with FastMCP, yq, and LMQL

Project details


Download files


Source Distribution

mcp_json_yaml_toml-0.9.4.tar.gz (116.7 kB)

Uploaded Source

Built Distribution


mcp_json_yaml_toml-0.9.4-py3-none-any.whl (142.4 kB)

Uploaded Python 3

File details

Details for the file mcp_json_yaml_toml-0.9.4.tar.gz.

File metadata

  • Download URL: mcp_json_yaml_toml-0.9.4.tar.gz
  • Upload date:
  • Size: 116.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: uv/0.11.8

File hashes

Hashes for mcp_json_yaml_toml-0.9.4.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 5e2116c6738b8f7c4deb3109f017488053f436308ba25a4ea1c46a5499a77326 |
| MD5 | 0837272cbfd21ab399548fc00d35a829 |
| BLAKE2b-256 | e2972b3ac62c0e561f5740cc51e9f1b519b1aa85042452a7961641c59665fc52 |


File details

Details for the file mcp_json_yaml_toml-0.9.4-py3-none-any.whl.

File metadata

  • Download URL: mcp_json_yaml_toml-0.9.4-py3-none-any.whl
  • Upload date:
  • Size: 142.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: uv/0.11.8

File hashes

Hashes for mcp_json_yaml_toml-0.9.4-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 4e2ac6ec54d4e20a7828513af922d8241c3f6e030a3d0b1430cf62cee3e6be7e |
| MD5 | aa970829ae41270d15b85a535f290a76 |
| BLAKE2b-256 | ff2321bbe9a53e863d9612d5bdba8cf09bfadf7b39569f122e6d8730396aad92 |

