Autonomous research agent — persistent memory, experiment tracking, hypothesis-driven proposals, GPU run lifecycle

perpetual-agent

Persistent research backend for AI coding agents. Hypotheses, experiment tracking, GPU run lifecycle, git-backed memory, and an autonomous research loop — accessible from any harness that speaks MCP.

Works with Claude Code, Cursor, Codex, OpenCode, and any other MCP-compatible client. One backend, many harnesses, the same persistent state.

What it gives you

  • Experiment graph — SQLite-tracked hypotheses, proposed/approved/running/done experiments, with a typed config and notes per run.
  • Run supervisor — launch GPU jobs, watchdog them, capture stdout/stderr, detect crashes, debit a GPU-hours budget.
  • Git-backed memory — markdown notes the agent reads and writes; every change is a git commit so you have a full audit trail.
  • Hypothesis-driven proposals — agent proposes experiments scored by information gain; updates confidence as evidence accrues.
  • Autonomous loop — auto mode (or the headless daemon) polls for completed runs, analyzes results, updates hypotheses, and proposes the next experiments.
  • Methodology config — a YAML file that controls auto-approve, batch size, support/refute thresholds, max concurrent runs.
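
The experiment graph above moves each experiment through the statuses proposed → approved → running → done. A minimal sketch of that lifecycle as a state machine (status names come from the list above; the transition rules are an assumption for illustration, not the package's actual implementation):

```python
from enum import Enum

class Status(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    RUNNING = "running"
    DONE = "done"

# Hypothetical transition table: each status maps to the statuses it may move to.
TRANSITIONS = {
    Status.PROPOSED: {Status.APPROVED},
    Status.APPROVED: {Status.RUNNING},
    Status.RUNNING: {Status.DONE},
    Status.DONE: set(),
}

def advance(current: Status, target: Status) -> Status:
    """Move an experiment to `target`, rejecting illegal jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot go {current.value} -> {target.value}")
    return target
```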

Install

One install gets you the global CLI:

pip install perpetual-agent

Then in any project, one command sets up everything (state directory + harness integration + MCP server registration):

perpetual init --harness opencode      # default; gets the richer first-party plugin
perpetual init --harness claude-code   # Claude Code
perpetual init --harness cursor        # Cursor
perpetual init --harness codex         # Codex
perpetual init --harness mcp           # generic MCP — for any other client
perpetual init --harness all           # scaffold every supported harness at once

perpetual init is project-scoped (like a Python venv): it creates a .perpetual/ state directory in the current project, plus the harness-native config files. The MCP server (perpetual mcp) is launched on-demand by your harness and reads state from the project's .perpetual/.
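
For illustration, after `perpetual init --harness claude-code` a project might look roughly like this (only `.perpetual/`, `methodology.yaml`, and the agent definition path are documented here; other file names are assumptions):

```
my-research/
├── .perpetual/              # project state read by `perpetual mcp`
│   └── methodology.yaml     # research methodology config
└── .claude/
    └── agents/perpetual.md  # agent definition in Claude Code's format
```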

Quick start

mkdir my-research && cd my-research
perpetual init -p "my-research" --harness opencode
perpetual chat

Or for Claude Code:

mkdir my-research && cd my-research
perpetual init -p "my-research" --harness claude-code
claude    # the perpetual agent + MCP server are auto-loaded

How it plugs in

perpetual init --harness <name> writes two things:

  1. Agent definition in the harness's native format (e.g. .claude/agents/perpetual.md, .cursor/rules/perpetual.md, .opencode/agents/perpetual.md).
  2. MCP server registration pointing at perpetual mcp — a stdio MCP server that exposes the perpetual tools and live research state to any MCP-compatible client.

The MCP server exposes 6 tools and a live-state resource:

Tool                   Purpose
perpetual_status       research state (experiments by status, hypotheses, runs, budget)
perpetual_propose      propose an experiment for a hypothesis
perpetual_hypotheses   list / add / update hypotheses with confidence tracking
perpetual_scan         sync run statuses → experiment graph
perpetual_memory       read / write / list persistent research notes
perpetual_budget       GPU-hours used, per-experiment breakdown

Plus live-state://research — the same markdown blob you'd see if you ran perpetual status && perpetual hypotheses list && perpetual budget. Harnesses that auto-include resources in context get the live state per turn for free.
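
For harnesses that take a JSON MCP config, the registration that `perpetual init` writes is conceptually equivalent to something like this (file name and exact schema vary by client; sketch only):

```json
{
  "mcpServers": {
    "perpetual": {
      "command": "perpetual",
      "args": ["mcp"]
    }
  }
}
```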

CLI

perpetual init               initialize project (state + harness scaffolding)
perpetual mcp                run the MCP server (stdio)
perpetual status             show experiments, hypotheses, runs, budget
perpetual hypotheses         list / add / update hypotheses
perpetual propose            propose an experiment for a hypothesis
perpetual approve <exp>      approve a proposed experiment
perpetual run <exp> <cmd>    launch an approved experiment
perpetual scan               sync run statuses → experiment graph
perpetual kill <exp>         kill a running experiment
perpetual budget             GPU-hours used, per-experiment breakdown
perpetual report             generate a markdown research report
perpetual memory             show / write / list research memory
perpetual methodology        show / init research methodology config
perpetual daemon             autonomous event loop (poll → analyze → iterate)
perpetual gpu-status         nvidia-smi summary
perpetual chat               launch the TUI (opencode passthrough)
perpetual ask "<prompt>"     send a one-shot prompt to the agent
perpetual auth               manage LLM provider credentials
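
Tying the commands together, one research cycle might look like this (the hypothesis text, experiment ID, and training command are illustrative; check each command's help for the exact arguments):

```shell
perpetual hypotheses add "LR warmup reduces early divergence"   # argument form illustrative
perpetual propose                     # agent proposes experiments for it
perpetual approve exp-001             # experiment ID illustrative
perpetual run exp-001 "python train.py --warmup 500"
perpetual scan                        # sync run status into the graph
perpetual status                      # experiments, hypotheses, runs, budget
perpetual report                      # write up the results so far
```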

Daemon mode (headless, no TUI)

perpetual daemon --interval 30 --max-cycles 50

The daemon polls for completed runs, dispatches analysis prompts to your harness, debits the GPU budget, and exits when the queue empties.
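
The daemon's cycle can be sketched roughly as follows (`poll`, `analyze`, and `propose_next` are hypothetical placeholders for the real harness-dispatch logic, not the package's API):

```python
import time

def daemon_loop(poll, analyze, propose_next, interval=30, max_cycles=50):
    """Poll for completed runs, analyze each, and stop when the queue empties.

    `poll` returns the runs that finished since the last cycle; `analyze`
    dispatches an analysis prompt for one run; `propose_next` queues
    follow-up experiments. All three are assumptions for illustration.
    """
    for _ in range(max_cycles):
        completed = poll()
        if not completed:
            break                    # queue empty -> exit, as described above
        for run in completed:
            analyze(run)
        propose_next()
        time.sleep(interval)
```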

OpenCode plugin (richer first-party integration)

OpenCode supports plugins natively, so we ship a TypeScript plugin (opencode-perpetual on npm) that goes beyond MCP — it injects live research state into the system prompt every turn, registers slash commands (/auto, /stop, /report, /catchup), and gives the perpetual agent a custom color in the TUI.

perpetual init --harness opencode scaffolds the plugin source into .opencode/perpetual-plugin/ and runs bun install automatically (if bun is on PATH). Or, for an existing opencode setup:

opencode plugin install opencode-perpetual

Methodology

Default config in .perpetual/methodology.yaml:

exploration_strategy: hypothesis_driven
proposal_batch_size: 3
confidence_support_threshold: 0.8
confidence_refute_threshold: 0.2
auto_approve: false
max_concurrent_runs: 4
auto_analyze: true
auto_propose: true
memory_write_on_completion: true

The agent reads this every turn and follows the policy.
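
As an illustration of how the support/refute thresholds might partition hypothesis confidence (only the threshold values come from the config above; the bucketing logic is an assumption):

```python
SUPPORT_THRESHOLD = 0.8   # confidence_support_threshold
REFUTE_THRESHOLD = 0.2    # confidence_refute_threshold

def classify(confidence: float) -> str:
    """Bucket a hypothesis by its current confidence score."""
    if confidence >= SUPPORT_THRESHOLD:
        return "supported"
    if confidence <= REFUTE_THRESHOLD:
        return "refuted"
    return "open"
```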

License

MIT

Download files

Source distribution
  perpetual_agent-0.1.0.tar.gz (37.6 kB)

Built distribution
  perpetual_agent-0.1.0-py3-none-any.whl (48.8 kB)

File details: perpetual_agent-0.1.0.tar.gz

File metadata

  • Size: 37.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Algorithm     Hash digest
SHA256        a9552446b7b6f5a6e1e64707d477fa7095255b1589f9364e934024b01850378f
MD5           67c4892f3c18cbffd4344448e2760524
BLAKE2b-256   0236939ef59e5d270713f07bbf1c470a644895d37e3de37ed3ba3048c1442a69

Provenance

Publisher: release.yml on nik-hz/perpetual

File details: perpetual_agent-0.1.0-py3-none-any.whl

File metadata

  • Size: 48.8 kB
  • Tags: Python 3

File hashes

Algorithm     Hash digest
SHA256        4f362e425a41f226f6305d75727c14cc57d3e940b507571977380caedef6e15b
MD5           9410c884242ecfdc11aa941e04327bae
BLAKE2b-256   6f3ce499d415d2f08757f308766e744fd1801cd56b261144efa64859e6e71214

Provenance

Publisher: release.yml on nik-hz/perpetual
