
A lightweight personal AI assistant framework

Project description

Mira


An open-source, ultra-lightweight AI assistant tailored specifically for Medical AI Research.

Powered by an underlying micro-agent framework, Mira is designed to execute complex medical imaging pipelines, from raw DICOM data processing to deep learning tasks, traditional radiomics, and survival analysis.

🔬 Built-in Medical Skills

Mira comes pre-loaded with specialized medical skills:

  1. medical-image-dl-pipeline: End-to-end deep learning pipeline (classification, segmentation, detection) built on MONAI and PyTorch. Features robust 5-Fold Cross-Validation and early stopping.
  2. radiomics: High-dimensional radiomic feature extraction using PyRadiomics, combined with LASSO/mRMR feature selection.
  3. survival-analysis: Time-to-event statistical modeling, Kaplan-Meier curves, and Cox Proportional Hazards models via lifelines.

Mira can also be leveraged for comprehensive literature reviews and academic manuscript writing.
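To illustrate the statistic behind the survival-analysis skill above (which uses lifelines in practice), here is a minimal, stdlib-only sketch of the Kaplan-Meier product-limit estimator. This is an illustration of the method, not Mira's implementation:

```python
# Illustrative Kaplan-Meier product-limit estimator (stdlib only).
# The built-in survival-analysis skill delegates to lifelines; this
# sketch only shows the statistic being reported.

def kaplan_meier(durations, events):
    """Return [(time, survival_probability)] at each observed event time.

    durations: follow-up time per subject
    events: 1 if the event occurred, 0 if the subject was censored
    """
    n = len(durations)
    order = sorted(range(n), key=lambda i: durations[i])  # sort by follow-up time
    at_risk = n
    survival = 1.0
    curve = []
    i = 0
    while i < n:
        t = durations[order[i]]
        deaths = 0
        removed = 0
        # Group ties (events and censorings) at the same time point.
        while i < n and durations[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= removed
    return curve

print(kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 1, 0]))
# → [(1, 0.8), (3, 0.5333...), (4, 0.2666...)]
```

With lifelines, the same curve comes from KaplanMeierFitter; the skill additionally handles Cox proportional-hazards modeling.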

🛡️ Core Agent Features

Mira goes beyond standard AI wrappers by implementing a robust, production-ready agent architecture:

  • Intelligent Model Routing: Dynamically routes sub-tasks, agent reasoning, and tool calls to the most appropriate AI models based on task complexity and context, ensuring optimal performance and cost-efficiency.
  • Strict Workspace Sandboxing (Read/Write Separation): The agent operates within a confined workspace directory. Built-in filesystem and shell execution guards actively block path traversals (e.g., cd .., ../) and unauthorized writes to external paths, protecting the host system. Crucially, Mira employs a Read/Write separation model: the agent can securely read system-level built-in skills, while edits to framework source code remain blocked.
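A guard of this kind can be sketched with pathlib alone. The following is a hypothetical illustration of the containment check described above, not Mira's actual guard code, and the built-in skills path is invented for the example:

```python
# Hypothetical workspace guard with read/write separation (requires
# Python 3.9+ for Path.is_relative_to). Illustration only; not the
# framework's actual implementation.
from pathlib import Path

WORKSPACE = Path("~/.mira").expanduser().resolve()
# Hypothetical location of read-only, system-level built-in skills.
READ_ONLY_ROOTS = [Path("/opt/mira/skills").resolve()]

def check_access(path: str, write: bool) -> bool:
    """Allow writes only inside the workspace; allow reads from the
    workspace plus the read-only skill roots."""
    resolved = Path(path).expanduser().resolve()  # collapses '..' traversals
    if resolved.is_relative_to(WORKSPACE):
        return True
    if not write:
        return any(resolved.is_relative_to(root) for root in READ_ONLY_ROOTS)
    return False
```

Resolving the path before comparison is what defeats `../` traversal: a write request for `~/.mira/../etc/passwd` resolves to a location outside the workspace and is rejected, while a read of a built-in skill file is still permitted.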

🚀 Quick Start

1. Install

git clone https://github.com/MIRA-Intelligence/mira.git
cd mira
pip install -e .

2. Configure

Run mira onboard to initialize config.json and your workspace (defaults to ~/.mira).

mira onboard

Then, configure your model settings and API keys in ~/.mira/config.json:

{
  "agents": {
    "defaults": {
      "workspace": "~/.mira/",
      "model": "",
      "provider": "custom",
      "maxTokens": 8192,
      "temperature": 0.6,
      "maxToolIterations": 40,
      "memoryWindow": 100,
      "reasoningEffort": null
    }
  },
  "providers": {
    "custom": {
      "apiKey": "",
      "apiBase": null,
      "extraHeaders": null
    },
    "azureOpenai": {
      "apiKey": "",
      "apiBase": null,
      "extraHeaders": null
    },
    "anthropic": {
      "apiKey": "",
      "apiBase": null,
      "extraHeaders": null
    }
  }
}
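As a quick sanity check, the structure above can be navigated like this. The sketch parses an inline copy of the example layout; in practice you would read the same structure from ~/.mira/config.json:

```python
# Minimal sketch: pull the active provider's settings out of a config
# shaped like the example above (here parsed from an inline string).
import json

sample = """
{
  "agents": {"defaults": {"provider": "custom", "model": "", "maxTokens": 8192}},
  "providers": {"custom": {"apiKey": "", "apiBase": null, "extraHeaders": null}}
}
"""
config = json.loads(sample)
defaults = config["agents"]["defaults"]
provider = config["providers"][defaults["provider"]]
print(defaults["provider"], defaults["maxTokens"], bool(provider["apiKey"]))
# In practice: config = json.loads(Path("~/.mira/config.json").expanduser().read_text())
```

An empty apiKey or model field means that setting still needs to be filled in before the agent can route requests to a provider.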

💻 CLI Commands Reference

Mira provides a comprehensive CLI for managing your sessions and configurations:

  • mira onboard Initialize your configuration file and local workspace directory (~/.mira by default). This is the first command you should run after installation.

  • mira agent Start an interactive AI chat session against the general-purpose agent loop (no auto-mode, no agent profiles, no task-plan contracts — closest to the upstream nanobot baseline). You can optionally pass a prompt directly via the -m flag:

    mira agent -m "Summarise the README and list the top 3 todos."
    
  • mira research Start an interactive session against the research-flavoured agent loop powering the desktop UI. Adds auto-mode while-loops, agent profiles (which AGENTS_*.md to bootstrap), automation stop policies (token / experiment budgets), and task-plan guardrails. Use this for the kind of multi-experiment workflows the desktop app drives:

    mira research \
      --message "I have 77 MRI Dixon cases. Please set up a 3D classification pipeline." \
      --mode auto \
      --profile research \
      --max-tokens 200000 \
      --max-experiments 8 \
      --project-dir ~/projects/dixon-mri
    

    Available flags:

    • --mode / -m: manual or auto. auto only triggers the auto-continue while-loop when running through the web channel (i.e., via mira gateway and the desktop UI); CLI sessions still honour the flag for cached state but won't drive multi-round orchestration.
    • --profile / -p: default | engineer | research (chooses AGENTS.md / AGENTS_EG.md / AGENTS_RS.md).
    • --max-tokens / --max-experiments — automation stop thresholds.
    • --project-dir — forwarded as metadata.project_dir so guardrails and task_plan.json lookups resolve correctly.

    Both mira agent and mira research are thin wrappers around the same chat REPL; the only difference is which loop class (BaseAgentLoop vs ResearchAgentLoop) drives _process_message. mira gateway keeps using ResearchAgentLoop to match the desktop UI.

  • mira status Check the current status of your Mira configuration, agent defaults, and workspace environment.

  • OAuth providers (e.g., openai-codex, github-copilot) are now configured directly inside mira onboard.

  • mira gateway Launch the background gateway service. This enables external API endpoints and multi-channel traffic.

Local Engine Service CLI

For desktop/local deployment workflows, use mira-engine:

mira-engine install-service
mira-engine start
mira-engine status
mira-engine logs
mira-engine doctor
mira-engine doctor --export
mira-engine upgrade --package mira
mira-engine stop
mira-engine uninstall-service

On macOS, install-service registers a user LaunchAgent at:

~/Library/LaunchAgents/com.projectmira.engine.plist

On Linux, install-service registers a user systemd unit:

~/.config/systemd/user/mira-engine.service

On Windows, install-service registers service name:

MiraEngine

Local engine logs and diagnostics:

  • Logs: ~/.mira/logs/agent-service.log (+ rotated files)
  • Diagnostics bundles: ~/.mira/runtime/diagnostics/

🔗 Release Compatibility Mapping

Mira tracks UI/Agent release compatibility in compatibility.json.

  • release_train: release window in YYYY.MM format
  • ui: supported UI minor range (e.g. 0.1.x)
  • agent: supported agent minor range (e.g. 0.1.x)
  • api_contract: API contract version (e.g. v1)
  • min_agent_for_ui: minimum compatible agent patch version

Validate updates locally before opening a PR:

python scripts/validate_compatibility.py --file compatibility.json
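The kind of check such a script might perform can be sketched as follows. Field formats are taken from the mapping described above; the actual scripts/validate_compatibility.py may validate more than this:

```python
# Hypothetical validation of one compatibility.json entry. Field names
# and formats follow the mapping above; the real script may differ.
import re

def validate(entry: dict) -> list:
    """Return a list of problems found in one compatibility entry."""
    problems = []
    if not re.fullmatch(r"\d{4}\.\d{2}", entry.get("release_train", "")):
        problems.append("release_train must be YYYY.MM")
    for key in ("ui", "agent"):
        if not re.fullmatch(r"\d+\.\d+\.x", entry.get(key, "")):
            problems.append(f"{key} must be a minor range like 0.1.x")
    if not re.fullmatch(r"v\d+", entry.get("api_contract", "")):
        problems.append("api_contract must look like v1")
    if not re.fullmatch(r"\d+\.\d+\.\d+", entry.get("min_agent_for_ui", "")):
        problems.append("min_agent_for_ui must be a full patch version")
    return problems

ok = {"release_train": "2025.01", "ui": "0.1.x", "agent": "0.1.x",
      "api_contract": "v1", "min_agent_for_ui": "0.1.3"}
print(validate(ok))  # → []
```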

📦 Agent Release Pipeline

Tagging v* triggers .github/workflows/agent-release.yml to:

  • build/test the project on Linux/macOS/Windows
  • publish mira package artifacts (wheel/sdist)
  • build standalone mira-engine executables with checksums

Use .github/workflows/release-train.yml (workflow_dispatch) to validate an agent_tag + ui_tag pair and run smoke checks before announcing a combined release.

🏗️ Optional Self-hosted Path

Docker-related files are in deploy/:

  • deploy/docker-compose.yml
  • deploy/Dockerfile
  • deploy/entrypoint.sh
  • deploy/.env.example

Compose services include:

  • local build/run services: mira-gateway, mira-api, mira-cli
  • self-hosted release services (profile self-hosted): mira-engine, mira-ui

Operator guide:

  • docs/self-hosted-docker.md

💬 Multi-Channel Deployment (Coming Soon)

Support for deploying Mira to platforms such as Telegram, Discord, Feishu, and Slack, so it can assist your research team in real time, is in active development.

🤝 Contributing / CLA

All external contributions require acceptance of the Contributor License Agreement. See CLA.md for details. By submitting a PR, you confirm acceptance of this CLA.

🙏 Acknowledgments

The foundational CLI framework of Mira is built heavily upon nanobot. We sincerely thank the HKUDS team for their excellent open-source contribution to the community.


Developed for researchers, by ECNU SKMR Lab.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mira_engine-0.3.0rc1.tar.gz (1.2 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

mira_engine-0.3.0rc1-py3-none-any.whl (1.5 MB)

Uploaded Python 3

File details

Details for the file mira_engine-0.3.0rc1.tar.gz.

File metadata

  • Download URL: mira_engine-0.3.0rc1.tar.gz
  • Upload date:
  • Size: 1.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for mira_engine-0.3.0rc1.tar.gz
  • SHA256: 94b6a1e4e31817bdb4a6e45b56481d2a862d0fb9e50ae0e60a1a6e538237f3d7
  • MD5: 4b5d1ed7443b2904ad33efb2fd85e23e
  • BLAKE2b-256: 7e74faded7c49097a8448bc8b8ecb1e17640a70fbdfed6911631dfe05db9550d

See more details on using hashes here.
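If you download the artifact manually, the published SHA256 digest above can be checked with Python's standard library before installing:

```python
# Verify a downloaded artifact against a published SHA256 digest.
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in 1 MiB chunks and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "94b6a1e4e31817bdb4a6e45b56481d2a862d0fb9e50ae0e60a1a6e538237f3d7"
# After downloading the sdist into the current directory:
# assert sha256_of("mira_engine-0.3.0rc1.tar.gz") == expected
```

pip can enforce the same check automatically via hash-checking mode (pip install --require-hashes with a pinned requirements file).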

Provenance

The following attestation bundles were made for mira_engine-0.3.0rc1.tar.gz:

Publisher: agent-release.yml on MIRA-Intelligence/mira

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file mira_engine-0.3.0rc1-py3-none-any.whl.

File metadata

  • Download URL: mira_engine-0.3.0rc1-py3-none-any.whl
  • Upload date:
  • Size: 1.5 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for mira_engine-0.3.0rc1-py3-none-any.whl
  • SHA256: 9614853dbdae66b4ce18532e2005fdf6ad82abda9aae501c7290e49b0928daa1
  • MD5: 46168dd7de1dc6409ebea80ef36d589b
  • BLAKE2b-256: 25bf0f680f62035be7cd079032a666169c1e41ee27d143a7f39ab9cfb7fcb622

See more details on using hashes here.

Provenance

The following attestation bundles were made for mira_engine-0.3.0rc1-py3-none-any.whl:

Publisher: agent-release.yml on MIRA-Intelligence/mira

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
