MiroFish

A Simple & Universal Swarm Intelligence Engine. Predict Anything.

A social simulation scenario engine. Feed it documents describing any scenario, and MiroFish simulates AI agents reacting on social media to explore how events might unfold. Designed for agent-driven workflows — outputs include a machine-readable verdict.json alongside the full report.

Fork of 666ghj/MiroFish — fully translated to English, CLI-only, Claude/Codex CLI support added.

What it does

  1. Feed reality seeds — PDFs, markdown, or text files (news articles, policy drafts, financial reports, anything)
  2. Describe what to predict — natural language requirement
  3. MiroFish builds a world — extracts entities and relationships into a knowledge graph, generates AI agent personas with distinct personalities
  4. Agents simulate social media — dual-platform simulation (Twitter + Reddit) where agents post, reply, like, argue, and follow each other
  5. Get a prediction report — AI analyzes all simulation data and produces a report + machine-readable verdict with confidence scores and signals
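The machine-readable verdict from step 5 can be consumed directly by downstream tooling. A minimal sketch, assuming the `report/verdict.json` location from the run-artifacts layout in this README; the field names in the demo (`confidence`, `signals`) are illustrative assumptions, not a documented schema:

```python
import json
import tempfile
from pathlib import Path

def load_verdict(run_dir: str) -> dict:
    """Load the machine-readable verdict from a finished run.

    The report/verdict.json path follows the artifact tree documented
    in this README; the fields used below are assumptions.
    """
    return json.loads(Path(run_dir, "report", "verdict.json").read_text())

# Demo against a synthetic run directory (contents are made up):
with tempfile.TemporaryDirectory() as d:
    report_dir = Path(d, "report")
    report_dir.mkdir()
    (report_dir / "verdict.json").write_text(
        json.dumps({"confidence": 0.72, "signals": ["early backlash"]})
    )
    verdict = load_verdict(d)
```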

Quick start

Prerequisites

  • Python 3.11-3.12
  • uv (Python package manager)

Setup

cp .env.example .env
# Default: claude-cli (uses your Claude Code subscription)
uv sync

Run a simulation

mirofish run \
  --files docs/policy.pdf notes/context.md \
  --requirement "Predict public reaction over 30 days" \
  --json

# List prior runs (slim summary: run_id, status, created_at, artifact_count)
mirofish runs list --json

# Check run status (full manifest)
mirofish runs status <run_id> --json

# Export artifacts
mirofish runs export <run_id> --json
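From a script or agent harness, the `mirofish run` invocation above can be assembled programmatically. A sketch of an argv builder; the flags mirror the CLI options documented below, but the helper itself is illustrative and not part of the package:

```python
def mirofish_run_argv(files, requirement, platform="parallel", max_rounds=10):
    """Build the argv for `mirofish run` in --json mode.

    Flag names come from this README's CLI options section; this
    wrapper is a hypothetical convenience, not a mirofish API.
    """
    return [
        "mirofish", "run",
        "--files", *files,
        "--requirement", requirement,
        "--platform", platform,
        "--max-rounds", str(max_rounds),
        "--json",
    ]

argv = mirofish_run_argv(
    ["docs/policy.pdf", "notes/context.md"],
    "Predict public reaction over 30 days",
)
```

Pass the result to `subprocess.run` and parse stdout as JSON.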

CLI options

mirofish run
  --files FILE [FILE ...]     Source files (pdf/md/txt) used to ground the
                              ontology and profiles
  --requirement TEXT          Plain-English simulation requirement
                              (e.g. "How would voters react to X?")
  --platform parallel|twitter|reddit   Simulation platform (default: parallel)
  --max-rounds N              Max simulation rounds (default: 10)
  --output-dir PATH           Run output directory
  --json                      Machine-readable JSON output (stdout)
  • Without --json: rich visual pipeline display on stderr (respects NO_COLOR and non-tty stdout)
  • With --json: machine-readable JSON on stdout, plain progress on stderr
  • --help / --version work without a valid .env; other commands run Config.validate() first
  • Exit code 0 = success, 1 = error (including config errors)
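The stdout/stderr split and exit-code contract above suggest a simple invocation pattern. A hedged sketch (this wrapper is not part of mirofish; it just follows the contract: payload on stdout, progress on stderr, nonzero exit means error):

```python
import subprocess
import sys

def invoke(argv):
    """Run a CLI command per the contract above: machine-readable
    payload on stdout, progress on stderr, exit code 0 for success.
    Illustrative helper, not a mirofish API.
    """
    proc = subprocess.run(argv, capture_output=True, text=True)
    if proc.returncode != 0:
        sys.stderr.write(proc.stderr)  # surface progress/error output
        raise RuntimeError(f"command exited with code {proc.returncode}")
    return proc.stdout
```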

Run artifacts

Each run produces an immutable directory:

uploads/runs/<run_id>/
  manifest.json
  input/
    requirement.txt
    source_files/
    ontology.json
    simulation_config.json
  graph/
    graph.json
    graph_summary.json
  simulation/
    timeline.json
    top_agents.json
    actions.jsonl
    config.json
  report/
    verdict.json
    summary.json
    report.md
  visuals/
    swarm-overview.svg
    cluster-map.svg
    timeline.svg
    platform-split.svg
  logs/
    run.log
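Because `simulation/actions.jsonl` holds one action per line, it can be streamed without loading the whole run into memory. A sketch; the file location comes from the tree above, while the per-action fields in the demo are made up for illustration:

```python
import json
import tempfile
from pathlib import Path

def iter_actions(run_dir):
    """Yield one agent action per line from simulation/actions.jsonl.

    JSONL location follows the artifact tree above; the fields in the
    demo below (agent, type) are illustrative assumptions.
    """
    with Path(run_dir, "simulation", "actions.jsonl").open() as fh:
        for line in fh:
            if line.strip():
                yield json.loads(line)

# Demo against a synthetic run directory:
with tempfile.TemporaryDirectory() as d:
    sim = Path(d, "simulation")
    sim.mkdir()
    (sim / "actions.jsonl").write_text(
        '{"agent": "a1", "type": "post"}\n{"agent": "a2", "type": "reply"}\n'
    )
    actions = list(iter_actions(d))
```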

LLM providers

Set LLM_PROVIDER in .env. Only claude-cli and codex-cli are accepted; any other value (e.g. openai) is rejected at startup with a config error and exit code 1.

Provider     Config                              Cost
claude-cli   LLM_PROVIDER=claude-cli (default)   Uses your Claude Code subscription
codex-cli    LLM_PROVIDER=codex-cli              Uses your Codex CLI subscription
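The startup check described above amounts to a membership test. A sketch of that behavior; the function name is illustrative (in mirofish the rejection surfaces as a config error with exit code 1, via `Config.validate()`):

```python
VALID_PROVIDERS = {"claude-cli", "codex-cli"}

def check_provider(value):
    """Mirror the startup check described above: only the two CLI
    providers are accepted. Illustrative helper, not the actual
    Config API; mirofish maps this failure to exit code 1.
    """
    if value not in VALID_PROVIDERS:
        raise ValueError(
            f"unsupported LLM_PROVIDER {value!r}; "
            f"expected one of {sorted(VALID_PROVIDERS)}"
        )
    return value
```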

Architecture

app/
    cli.py             CLI entry point (primary interface)
    cli_display.py     Rich visual pipeline display
    config.py          Environment + validation
    run_artifacts.py   Immutable run storage
    visual_snapshots.py SVG snapshot generation
    core/              Workbench session, session registry, resource loader, tasks
    resources/         Adapters for projects, documents, graph, simulations, reports
    tools/             Composable pipeline (ingest, build, prepare, run, report)
    services/
      graph_storage.py     JSON graph backend
      graph_db.py          Graph query facade
      entity_extractor.py  LLM-based extraction
      graph_builder.py     Ontology -> graph pipeline
      simulation_runner.py OASIS simulation (subprocess)
      report_agent.py      Single-pass report generation
      graph_tools.py       Search, interview, analysis
    utils/
      llm_client.py        CLI-only LLM client (claude-cli, codex-cli)
scripts/             OASIS simulation runner scripts
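The `tools/` package is described as a composable pipeline (ingest, build, prepare, run, report). The shape of that composition can be sketched with placeholder stages; the stage names come from the tree above, but the state dict and stage bodies are hypothetical:

```python
from functools import reduce

# Placeholder stages named after the tools/ modules above; bodies and
# the threaded state dict are made up for illustration.
def ingest(state):  return {**state, "documents": list(state["files"])}
def build(state):   return {**state, "graph": "graph/graph.json"}
def prepare(state): return {**state, "config": "input/simulation_config.json"}
def run(state):     return {**state, "timeline": "simulation/timeline.json"}
def report(state):  return {**state, "verdict": "report/verdict.json"}

PIPELINE = (ingest, build, prepare, run, report)

def execute(files):
    """Thread a state dict through each stage in order."""
    return reduce(lambda state, stage: stage(state), PIPELINE, {"files": files})
```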

Acknowledgments

  • MiroFish by 666ghj — original project
  • OASIS by CAMEL-AI — multi-agent social simulation framework

License

AGPL-3.0
