
Autonomous Lab


MCP server that turns any senior-junior workflow into an autonomous loop. AI handles the execution. You make the decisions.


Vision

The bottleneck in knowledge work has never been execution. It is judgment -- knowing which questions matter, which results are meaningful, which directions to pursue. The people best equipped to make those calls spend most of their time on tasks that don't require their specific expertise.

Autonomous Lab shifts the hierarchy up by one level. AI agents assume the working roles -- principal investigator and trainee, tech lead and developer, attending and resident -- running the full design-execute-review loop. The human moves into the editorial position: the one who curates, judges, and steers. Your taste and judgment, rather than your labor, become the primary input.

This is not a copilot. It is a reorganization of the work unit itself.

Why this exists

Autonomous Lab is an MCP server. It runs inside the coding agent you already pay for -- Cursor, Claude Code, Windsurf, Codex CLI, or any MCP-compatible client. That means:

  • No API key required. You don't need an OpenAI/Anthropic/Google key. The intelligence comes from whichever model your coding tool already uses.
  • No extra cost. Your existing Cursor Pro, Claude Max, Windsurf, or Codex subscription is all you need. You are reusing an investment you have already made.
  • No new app to learn. It plugs into your current workflow as a set of MCP tools.

Install

The easiest way: copy this page link into Claude Code, Cursor, or any coding agent and ask it to install Autonomous Lab for you. It will handle everything.

Or do it manually:

Add this to your MCP client config (e.g. Cursor's ~/.cursor/mcp.json):

{
  "mcpServers": {
    "autonomous-lab": {
      "command": "uvx",
      "args": ["autonomous-lab"],
      "timeout": 600,
      "env": {
        "MCP_WEB_PORT": "8766"
      }
    }
  }
}

Or, if you installed the package with uv pip install autonomous-lab:

{
  "mcpServers": {
    "autonomous-lab": {
      "command": "autonomous-lab",
      "timeout": 600,
      "env": {
        "MCP_WEB_PORT": "8766"
      }
    }
  }
}

Then tell your agent: "Initialize an autonomous lab project on [your topic]."

What it does

Two AI personas (senior + junior) iterate on your project in a loop. They design, execute, write, and revise. You sit above them as the decision maker: editor, code reviewer, creative director, or whatever the domain calls for.

The loop:

autolab_next → (AI acts as role) → autolab_record → lab_meeting → autolab_next → ...

When work is ready, you review it. Accept, request revisions, or reject. The loop continues until you're satisfied.
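The loop above can be sketched as plain Python. The helper functions here are illustrative stand-ins, not the real MCP tool implementations: autolab_next decides which persona acts, autolab_record persists the turn, and the editor callback models the lab_meeting checkpoint.

```python
# Illustrative sketch of the senior/junior loop. The helpers below are
# assumptions that mimic the documented tool names, not the actual API.

def autolab_next(state):
    # Alternate personas: senior designs on even turns, junior executes on odd.
    role = "senior" if state["turn"] % 2 == 0 else "junior"
    return {"role": role, "turn": state["turn"]}

def autolab_record(state, result):
    # Persist the completed turn and advance the iteration counter.
    state["log"].append(result)
    state["turn"] += 1

def run_loop(editor_decides, max_turns=10):
    """Drive the loop until the editor accepts or max_turns is reached."""
    state = {"turn": 0, "log": []}
    while state["turn"] < max_turns:
        prompt = autolab_next(state)                       # autolab_next
        result = {"role": prompt["role"],
                  "output": f"work for turn {prompt['turn']}"}
        autolab_record(state, result)                      # autolab_record
        if editor_decides(result) == "accept":             # lab_meeting checkpoint
            break
    return state["log"]

# Editor accepts as soon as the junior persona has produced a turn.
history = run_loop(lambda r: "accept" if r["role"] == "junior" else "revise")
```

With that editor policy, the senior takes turn 0, the junior takes turn 1, and the loop stops on acceptance.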

Anatomy of the monitoring interface and editorial workflow

Anatomy of the monitoring interface and editorial workflow. Top: the research loop (characters, meeting log, inventory, marketplace). Bottom: the editorial office (reviewer selection, reports, decision).

Key capabilities

  • Zero additional cost: runs on your existing coding agent subscription. No separate API keys, no usage-based billing, no new accounts.
  • Skill containers: configure characters with any combination of SKILL.md files you already have. A PI with scanpy + scientific-writing + statistical-analysis skills behaves differently from a Tech Lead with react + typescript + code-review skills.
  • 24-hour sessions: the loop keeps running without timeouts or context loss, and sessions persist across disconnects via autolab_resume.
  • Fully configurable: YAML character profiles control personality, expertise, goals, and available tools. Swap them in seconds.
  • Domain-agnostic: research, software, consulting, legal, medical, creative, or anything with a senior-junior structure.
  • Expert consultation: invite domain specialists mid-session for one-off advice without breaking the loop.
  • Verified citations: built-in CrossRef integration for real, validated references (no hallucinated papers).
  • Game-style monitoring UI: browser dashboard shows live progress, iteration history, and editorial controls.
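The citation feature is backed by CrossRef, which exposes a public REST API at api.crossref.org. How autolab_cite calls it internally is not documented here, but a minimal standalone lookup against that API might look like this sketch:

```python
import json
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works"

def crossref_query_url(query, rows=3):
    """Build a CrossRef works-search URL for a free-text query."""
    params = urllib.parse.urlencode({"query": query, "rows": rows})
    return f"{CROSSREF_API}?{params}"

def extract_citations(response):
    """Pull (title, DOI) pairs out of a CrossRef works response dict."""
    items = response.get("message", {}).get("items", [])
    return [(item["title"][0], item["DOI"])
            for item in items
            if item.get("title") and item.get("DOI")]

# Uncomment for a live query (requires network access):
# with urllib.request.urlopen(crossref_query_url("single-cell genomics")) as resp:
#     print(extract_citations(json.load(resp)))
```

Because every returned record carries a real DOI from CrossRef's index, references validated this way cannot be hallucinated titles.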

MCP tools

Tool                      What it does
autolab_init              Initialize a new project
autolab_resume            Resume an interrupted session
autolab_next              Get the next role prompt (PI or Trainee)
autolab_record            Record a completed turn
autolab_status            Check project state
autolab_cite              Search, validate, and format citations
autolab_consult           Invite a domain expert
autolab_editorial         Wait for the editor's decision
autolab_editor_act        Execute an editorial decision (AI fallback)
autolab_create_character  Build a character profile
lab_meeting               Pause for user feedback between turns
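Your MCP client invokes these tools over JSON-RPC using the standard MCP tools/call method. The "topic" argument below is illustrative (the actual parameter names come from the tool's schema, discoverable via tools/list):

```python
import json

# Shape of an MCP tools/call request, per the MCP JSON-RPC protocol.
# The "topic" argument is an assumption for illustration; check the
# schema returned by tools/list for autolab_init's real parameters.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "autolab_init",
        "arguments": {"topic": "cell-type-specific regulatory programs"},
    },
}
print(json.dumps(request, indent=2))
```

In practice you never write this by hand: telling your agent to "initialize an autonomous lab project" makes the client issue the call for you.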

Character example

name: Dr. Maria Chen
role: pi
title: Computational Biology PI
expertise: single-cell genomics, machine learning
goal: discover cell-type-specific regulatory programs
skills:
  - scanpy
  - scvi-tools
  - scientific-writing
  - statistical-analysis
personality:
  - "Visionary: spots novel research directions"
  - "Rigorous: demands statistical reproducibility"

Remote / SSH environments

The monitoring web UI binds to 127.0.0.1 by default (local only). On a remote server, SSH session, or container, the UI will attempt to auto-detect and bind to 0.0.0.0 instead. If auto-detection doesn't match your setup, use one of the methods below.

Method 1: Environment variable (recommended)

Set MCP_WEB_HOST to 0.0.0.0 in your MCP config:

{
  "mcpServers": {
    "autonomous-lab": {
      "command": "uvx",
      "args": ["autonomous-lab"],
      "timeout": 600,
      "env": {
        "MCP_WEB_HOST": "0.0.0.0",
        "MCP_WEB_PORT": "8766"
      }
    }
  }
}

Then open http://<remote-host-ip>:8766/lab in your local browser.

Method 2: SSH port forwarding

Keep the default config (127.0.0.1) and forward the port:

ssh -L 8766:localhost:8766 user@remote-host

Then open http://localhost:8766/lab locally.

Variable      Purpose       Default
MCP_WEB_HOST  Bind address  Auto-detected: 0.0.0.0 in SSH/container environments, otherwise 127.0.0.1
MCP_WEB_PORT  Web UI port   8765
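The exact auto-detection logic is not specified here, but a common heuristic, sketched below under that assumption, checks for SSH environment variables and container marker files, with an explicit MCP_WEB_HOST override always taking precedence:

```python
import os
from pathlib import Path

def pick_bind_host() -> str:
    """Guess a bind address: 0.0.0.0 when remote, else loopback.

    This mirrors the documented behavior as a sketch; the detection
    logic actually used by Autonomous Lab may differ.
    """
    # Inside an SSH session these variables are typically set by sshd.
    in_ssh = any(os.environ.get(v)
                 for v in ("SSH_CONNECTION", "SSH_CLIENT", "SSH_TTY"))
    # Docker and Podman leave marker files at the container root.
    in_container = (Path("/.dockerenv").exists()
                    or Path("/run/.containerenv").exists())
    # An explicit override always wins over detection.
    return (os.environ.get("MCP_WEB_HOST")
            or ("0.0.0.0" if in_ssh or in_container else "127.0.0.1"))
```

If a heuristic like this misfires on your setup, Method 1 (setting MCP_WEB_HOST explicitly) short-circuits the detection entirely.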

Requirements

  • Python >= 3.11
  • An MCP-compatible client (Cursor, Claude Code, Codex CLI, Windsurf, etc.)

Acknowledgments

Autonomous Lab builds on these open-source projects:

  • The Virtual Lab by James Zou Lab, Stanford (MIT) -- the concept of LLM agents as PI and scientists iterating through structured research meetings (Swanson et al., Nature 2025)
  • mcp-feedback-enhanced by Minidoracat (MIT) -- Web UI, feedback loop, session management, and i18n infrastructure
  • interactive-feedback-mcp by Fábio Ferreira (MIT) -- the original MCP feedback server
  • biomni by Jure Leskovec Lab, Stanford (Apache 2.0) -- optional biomedical toolkit integration

License

Apache 2.0. See LICENSE and NOTICE.
