
The decision forge — evidence-graded, phase-gated, peer-reviewed decisions

Project description

research.md

PyPI CI License: MIT Python 3.11+

The decision forge. An MCP server that enforces evidence-graded, phase-gated, peer-reviewed research workflows so AI agents cannot skip rigor under time pressure.

What it enforces

Evidence gates (upstream)

  • 2+ sources for HIGH evidence -- finding_create / finding_update fail if upgrading to HIGH with < 2 sources
  • Disconfirmation search required -- finding_create / finding_update fail if upgrading to HIGH without documenting what you searched to disprove the claim
  • Content hash for MODERATE+ -- a source must include content_hash: proof that the agent fetched and read the material
  • Web research nudge -- the tool returns an advisory when findings have no sources
  • Vendor-only warning -- advisory when all sources are VENDOR tier
  • Landscape scan advisory -- nudge on the first candidate to document the full option landscape
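The hard gates above amount to a simple precondition check on evidence upgrades. A minimal sketch of that logic, assuming illustrative function and message names (this is not the package's actual API):

```python
# Sketch of the HIGH-evidence gate: upgrading a finding to HIGH requires
# 2+ sources and a documented disconfirmation search. Names are assumptions.
def check_high_evidence_gate(sources, disconfirmation):
    """Return a list of gate violations; empty means the upgrade passes."""
    errors = []
    if len(sources) < 2:
        errors.append("HIGH evidence requires 2+ sources")
    if not disconfirmation:
        errors.append("HIGH evidence requires a documented disconfirmation search")
    return errors

# One source and no disconfirmation trips both checks; two sources plus a
# documented search passes.
single = check_high_evidence_gate(["https://a.example"], "")
passing = check_high_evidence_gate(["https://a.example", "https://b.example"],
                                   "Searched for counterexamples")
```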

Process gates (downstream)

  • Criteria locked before scoring -- candidate_score fails if decision-criteria.md is not locked
  • No TBD on scored candidates -- candidate_score fails if the candidate has _TBD_ claims
  • Peer review before scoring -- candidate_score fails if evaluations/peer-review.md is missing
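Taken together, the downstream gates are preconditions on candidate_score. A hedged sketch, with illustrative names and messages (not the server's real implementation):

```python
# Sketch of candidate_score preconditions: criteria locked, peer review
# logged, and no unresolved _TBD_ claims. All names are assumptions.
def can_score(criteria_locked, peer_review_exists, tbd_count):
    """Return a list of blockers; scoring proceeds only when it is empty."""
    blockers = []
    if not criteria_locked:
        blockers.append("decision-criteria.md is not locked")
    if not peer_review_exists:
        blockers.append("evaluations/peer-review.md is missing")
    if tbd_count > 0:
        blockers.append(f"{tbd_count} _TBD_ claim(s) unresolved")
    return blockers
```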

Install

pip install research-md

Or install from source:

pip install -e ".[dev]"

MCP configuration

Add to your Claude Code config:

claude mcp add research-md --scope user -- research-md

Or add to .mcp.json:

{
  "mcpServers": {
    "research-md": {
      "command": "research-md"
    }
  }
}

Agent workflow

A typical research session follows this path:

project_set          Register project, get research_id
    |
finding_create       Record claims with evidence grades (UNVERIFIED -> LOW -> MODERATE -> HIGH)
    |                Tool nudges: "Use WebSearch to find sources"
finding_update       Add sources, disconfirmation search, upgrade evidence grade
    |                Gate: HIGH requires 2+ sources + disconfirmation
candidate_create     Define options to evaluate
    |                Advisory: "Document the full landscape before narrowing"
criteria_lock        Freeze decision criteria weights
    |
peer_review_log      Log reviewer assessment
    |
candidate_score      Score candidates (gated on criteria + peer review + no TBD)
    |
project_decide       Record the decision with rationale
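The path above can be sketched as a session driven through a generic `call(tool, **args)` helper. The tool names match the diagram; the helper, argument names, and return shapes are illustrative assumptions, not the real MCP API:

```python
# Hypothetical session script. `call` stands in for whatever MCP client
# wrapper the agent uses; argument and field names are assumptions.
def run_session(call):
    project = call("project_set", path="/abs/path/to/my-research")
    rid = project["research_id"]

    # Record a claim, then upgrade it past the HIGH-evidence gate.
    finding = call("finding_create", research_id=rid,
                   claim="Vendor X supports SSO", evidence="UNVERIFIED")
    call("finding_update", research_id=rid, finding_id=finding["id"],
         evidence="HIGH",
         sources=["https://a.example", "https://b.example"],
         disconfirmation="Searched 'Vendor X SSO limitations'")

    # Define an option, lock criteria, log review, then score and decide.
    call("candidate_create", research_id=rid, name="vendor-x")
    call("criteria_lock", research_id=rid)
    call("peer_review_log", research_id=rid, reviewer="alice")
    call("candidate_score", research_id=rid, candidate="vendor-x")
    return call("project_decide", research_id=rid, rationale="...")
```

Note the ordering: criteria are locked and peer review is logged before any scoring call, because the downstream gates reject scoring otherwise.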

Evidence grade ladder

  • UNVERIFIED -- claim recorded, not yet investigated. Requirements: none; the tool nudges toward web research
  • LOW -- single source or anecdotal. Requirements: at least a coherent argument
  • MODERATE -- credible source, verified consultation. Requirements: 1+ source with content_hash: proof
  • HIGH -- confirmed, validated by evidence. Requirements: 2+ independent sources plus a disconfirmation search
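MODERATE and above require a content_hash: entry proving the material was actually fetched. This page does not document the exact hash format, so the sketch below is purely an assumption: a SHA-256 digest of the fetched text is one plausible scheme.

```python
import hashlib

# Assumed content_hash scheme: SHA-256 of the fetched page body.
# The real format used by research-md may differ.
def content_hash(fetched_text: str) -> str:
    digest = hashlib.sha256(fetched_text.encode("utf-8")).hexdigest()
    return "content_hash:" + digest

tag = content_hash("the page body the agent actually read ...")
```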

Source quality tiers

Each source is tagged with a tier for awareness (no hard gate):

  • PRIMARY -- Census data, RFC specs, user study results
  • EXPERT -- PMC papers, RAND reports, Thoughtworks Radar
  • SECONDARY -- blog roundups, tutorials, comparison guides
  • VENDOR -- a company blog comparing itself to competitors

Trilogy conventions

research.md follows shared conventions with ike.md and visionlog. See CONVENTIONS.md for the full standard.

  • research.md -- decide with evidence (this tool)
  • visionlog -- record the decision as a contract
  • ike.md -- execute tasks within those contracts

Config lives at .research/research.json (committed to git).

Targeting pattern: project_set + research_id

Every tool call requires a research_id -- the GUID from .research/research.json. The server keeps the path-to-ID mapping in memory only, so it does not persist across MCP server restarts.

  1. Call project_set with the project's absolute path
  2. It returns the project's research_id (a UUID)
  3. Pass that research_id on every subsequent tool call

If you call a tool without a valid research_id, the server tells you exactly how to fix it.
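A minimal sketch of that in-memory mapping, purely illustrative (the server's internals may differ):

```python
import uuid

# Toy model of the session registry: path -> research_id, held in memory
# and therefore lost on restart, like the real mapping described above.
class Registry:
    def __init__(self):
        self._by_path = {}

    def project_set(self, path: str) -> str:
        """Register a path and return its research_id (stable per session)."""
        return self._by_path.setdefault(path, str(uuid.uuid4()))

    def resolve(self, research_id: str) -> str:
        """Look up the project path for a research_id, as each tool must."""
        for path, rid in self._by_path.items():
            if rid == research_id:
                return path
        raise KeyError("Unknown research_id -- call project_set first")
```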

Project structure

Single project

my-research/
  .research/
    research.json              <- config with project GUID (commit this)
    findings/                  <- NNNN-slug.md
    candidates/                <- slug.md
    evaluations/
      decision-criteria.md     <- criteria table (lock before scoring)
      peer-review.md           <- reviewer log (required before scoring)
      scoring-matrix.md        <- generated from locked criteria + candidates

Multi-project root

research-root/
  .research/
    research.json              <- root config (lists subprojects)
  vendor-selection/
    .research/
      research.json            <- subproject GUID
      findings/
      candidates/
      evaluations/

Initialize with project_init { path, root: true }, then project_init { path, subproject: "name" }. When you project_set a root, all subprojects are registered automatically.

Tools (20)

Session

  • project_set -- Register a project path and return its GUID. Also registers subprojects if the path is a root.
  • project_get -- List all registered projects and their GUIDs for this session.

Project

  • project_init -- Initialize the project structure (single, root, or subproject).
  • status -- Project health: evidence gate status, criteria lock, peer review, TBD count, finding/candidate totals.

Findings

  • finding_create -- Create a finding with an evidence grade, sources array, and disconfirmation. Nudges toward web research.
  • finding_list -- List all findings with status and evidence grade.
  • finding_update -- Update status, evidence grade, sources, disconfirmation, or claim. Gates HIGH evidence.

Candidates

  • candidate_create -- Create a candidate for evaluation. Landscape advisory on the first candidate.
  • candidate_list -- List all candidates with verdict status.
  • candidate_update -- Update the verdict (provisional/recommended/eliminated) or description.
  • candidate_add_claim -- Add a binary, testable claim to the validation checklist.
  • candidate_resolve_claim -- Mark a claim Y or N (clears _TBD_).

Scoring

  • criteria_lock -- Lock decision criteria weights. Required before scoring.
  • candidate_score -- Score a candidate against locked criteria. Gated on criteria lock, peer review, and no TBD.
  • scoring_matrix_generate -- Generate a comparison table from locked criteria and scored candidates.
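The scoring tools imply a weighted comparison against the locked criteria. A hedged sketch of one plausible scoring rule (the weight normalization and field names are assumptions, not the tool's documented behavior):

```python
# Assumed scoring rule: weighted average of per-criterion scores using the
# locked criteria weights. Criterion names and weights are illustrative.
def weighted_score(weights, scores):
    """Weighted sum of scores, normalized by total weight."""
    total = sum(weights.values())
    return sum(weights[c] * scores[c] for c in weights) / total

locked = {"cost": 3, "security": 5, "dx": 2}
result = weighted_score(locked, {"cost": 4, "security": 3, "dx": 5})  # -> 3.7
```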

Peer Review

  • peer_review_log -- Log the reviewer's name and findings. Required before scoring.

Decision

  • project_decide -- Record the final decision with rationale.
  • project_supersede -- Mark a decided project as superseded by new research.
  • research_brief -- Generate a layered research brief from a completed project.
  • research_report -- Generate a full, untruncated research report.

Development

pip install -e ".[dev]"
pytest
ruff check .

License

MIT -- see LICENSE.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

research_md-0.3.0.tar.gz (55.3 kB)


Built Distribution


research_md-0.3.0-py3-none-any.whl (24.9 kB)


File details

Details for the file research_md-0.3.0.tar.gz.

File metadata

  • Download URL: research_md-0.3.0.tar.gz
  • Size: 55.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for research_md-0.3.0.tar.gz:

  • SHA256: a2b0b1b107c1e31b0f5ad7be4d9e2745d5216c366b0710c24f4ae507556c423a
  • MD5: 59bcc82204f892c476044b55b14b5882
  • BLAKE2b-256: 458a52fb735dbfdfdb184b296c514172d8f1cafc4172e3b011c29610da77f243


Provenance

The following attestation bundles were made for research_md-0.3.0.tar.gz:

Publisher: publish.yml on eidos-agi/research.md

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file research_md-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: research_md-0.3.0-py3-none-any.whl
  • Size: 24.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for research_md-0.3.0-py3-none-any.whl:

  • SHA256: 0ca548bc10690929c62de5306a71e77790530a89229b72fbdd27414309b9b052
  • MD5: 2ef996074ca1977b393bba54c71a0f09
  • BLAKE2b-256: 9e0066b1912333b05ad1d03bcab53f08043fa9da6b9875d5040563aecfca4ed8


Provenance

The following attestation bundles were made for research_md-0.3.0-py3-none-any.whl:

Publisher: publish.yml on eidos-agi/research.md

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
