
🌌 MONAD

Personal AGI Operating Core



MONAD is not a chatbot or a simple tool-matcher. It is a self-learning, objective-driven autonomous rational agent core.

Unlike traditional agents that rely on a predefined, hardcoded set of tools, MONAD acts like a rational entity with basic "instincts". It ships with no memory and no pre-loaded knowledge of how to perform specific tasks (such as checking the weather or searching the web).

Instead, it autonomously learns how to complete your tasks by writing and executing Python code on the fly, and then saving those successful experiences as reusable skills.


🧠 Core Philosophy

  • File System as Database: The system itself has no memory of past sessions. It persists all learned information (axioms, environment knowledge, learned skills, user context, and experiences) directly to local Markdown files. No vector databases, no RAG, zero external dependencies.
  • Absolute Rationality: MONAD follows a strict reasoning loop (Analyze → Self-check → Learn → Execute → Reflect) to accomplish goals logically.
  • Self-Learning & Self-Evolving: Instead of shipping with 100 tools, MONAD ships with only 4 basic instincts (hands 🤲, voice 🗣️, eyes 👁️, dialogue 💬). It learns everything else by generating code.
  • LLM as a Command Executor: The LLM's own training data is disregarded. All factual information must be retrieved from the real world via code execution or web perception.
  • Stateless Message Management: Every user request starts with a fresh, clean message context. MONAD doesn't rely on LLM Chat History; instead, it persists vital information via reflection loops. This ensures reasoning purity and prevents hallucination buildup from long conversations.
  • Search First, Ask Later: When stuck during execution (errors, missing packages, unfamiliar tools), MONAD's first instinct is to search the web via web_fetch, never to guess. But if the user's intent is unclear, MONAD asks the user first. In short: unclear query → ask user; execution problem → search first.
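The "Search First, Ask Later" rule amounts to a simple decision policy. A minimal sketch, assuming hypothetical names (`Obstacle`, `resolve`) that are not MONAD's actual internals:

```python
# Hypothetical sketch of the "Search First, Ask Later" policy.
# `Obstacle` and `resolve` are illustrative names, not MONAD internals.
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str    # e.g. "unclear_intent", "error", "missing_package"
    detail: str

def resolve(obstacle: Obstacle) -> str:
    # Unclear user intent -> ask the user first.
    if obstacle.kind == "unclear_intent":
        return f"ask_user: {obstacle.detail}"
    # Any execution problem -> search the web before guessing.
    return f"web_fetch: search for '{obstacle.detail}'"

print(resolve(Obstacle("missing_package", "ModuleNotFoundError: requests")))
```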

⚡ Basic Capabilities ("Instincts")

MONAD comes with only four built-in capabilities:

| Capability | Metaphor | Description |
| --- | --- | --- |
| 🐍 `python_exec` | Hands 🤲 | Evaluate arbitrary Python code. Process data, call APIs, read/write files, install libraries; learn to do anything. |
| 💻 `shell` | Voice 🗣️ | Execute shell commands on the host operating system. |
| 👁️ `web_fetch` | Eyes 👁️ | Perceive the internet directly. Fetch web pages in 3 modes: fast (HTTP), stealth (anti-bot), browser (JS render). Powered by Scrapling. |
| 🙋 `ask_user` | Dialogue 💬 | Ask the user for clarification when it truly cannot proceed independently. |
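To make the "hands" instinct concrete, here is a minimal sketch of what a `python_exec`-style capability could look like: run generated code in a subprocess and return its output as the observation. This is an illustration, not MONAD's actual implementation.

```python
# Illustrative sketch of a python_exec-style instinct: run generated
# code in a fresh interpreter and return stdout/stderr as the
# "observation". Not MONAD's actual implementation.
import subprocess
import sys

def python_exec(code: str, timeout: int = 30) -> str:
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout if result.returncode == 0 else f"ERROR: {result.stderr}"

print(python_exec("print(2 + 2)"))  # prints "4"
```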

📂 Knowledge Architecture

MONAD uses Categorized Memory instead of semantic retrieval (RAG).

```
knowledge/
├── axioms/          # System axioms & core behavioral principles
├── environment/     # World knowledge (e.g., search engine URLs, API endpoints)
├── user/            # Categorized user context (No RAG used here)
│   ├── facts.md     #   Objective facts & preferences (e.g., prefers Python)
│   ├── mood.md      #   Current state & mood
│   └── goals.md     #   Long-term goals & ongoing projects
├── skills/          # Auto-generated reusable Python skills
├── experiences/     # Execution logs and post-task reflections
├── protocols/       # Error handling protocols
└── tools/           # Documentation for the 4 basic capabilities
```
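Because memory is just Markdown on disk, "retrieval" is a directory walk rather than a vector search. A minimal sketch of loading one category into the prompt context (the function name is illustrative):

```python
# Sketch of "File System as Database": concatenate every Markdown file
# in one knowledge category into prompt context. `load_category` is an
# illustrative name, not part of MONAD's API.
from pathlib import Path

def load_category(root: Path, category: str) -> str:
    parts = []
    for md in sorted((root / category).glob("**/*.md")):
        # Prefix each file's content with its path so the LLM knows
        # which category and file the knowledge came from.
        parts.append(f"## {md.relative_to(root)}\n{md.read_text()}")
    return "\n\n".join(parts)

# e.g. load_category(Path.home() / ".monad" / "knowledge", "user")
```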

⚙️ How It Works

When you give MONAD an objective (e.g., "What is the weather in Hangzhou today?"):

  1. Analyze & Self-Check: Understand intent and check the local knowledge base for existing skills.
  2. Learn & Research (The "Search First" Principle): If the task is unknown or an error occurs, MONAD uses web_fetch to research documentation, API usage, or solutions. This is the "Learning" phase where it acquires the "how-to" knowledge before acting.
  3. Execute & Observe: MONAD writes and executes Python code via python_exec or runs shell commands via shell. It treats the output as "Observations" to verify success or identify new obstacles.
  4. Reflect & Persist: After a successful execution, the Reflection module summarizes the experience. The SkillBuilder then evaluates if the logic should be abstracted into a permanent, reusable skill.
  5. Answer: Provide the final answer based on real-world data verified through execution.
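The five steps above can be condensed into a short control loop. In this sketch, `llm`, `run_tool`, and `persist` are placeholders for MONAD's real components, and the action format is assumed for illustration:

```python
# Condensed sketch of the Analyze -> Learn -> Execute -> Reflect loop.
# `llm`, `run_tool`, and `persist` are placeholder callables, and the
# {"tool": ..., "content": ...} action shape is assumed, not MONAD's API.
def run_objective(objective: str, llm, run_tool, persist, max_steps: int = 8):
    observations = []
    for _ in range(max_steps):
        # Steps 1-2: analyze the objective plus observations so far and
        # decide the next action: a tool call or the final answer.
        action = llm(objective, observations)
        if action["tool"] == "final_answer":
            # Step 4: reflect and persist the experience to disk.
            persist(objective, observations)
            # Step 5: answer based on verified observations.
            return action["content"]
        # Step 3: execute and treat the output as a new observation.
        observations.append(run_tool(action["tool"], action["content"]))
    return "max steps reached"
```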

💡 Deep Dive: Why Stateless?

MONAD intentionally discards traditional "Chat History" in favor of a Stateless Design, where every task starts with a clean context and persists only vital information via the file system.

  • Mitigating Hallucination: Long-running chat histories eventually lead to context pollution and attention decay. By resetting the context per task, we force the LLM to reason in a pure, noise-free environment.
  • Physical Memory: Unlike black-box model caches, MONAD's memory consists of human-readable Markdown files. This is a deliberate step towards Personal Data Sovereignty.
  • Task Atomicity: Every objective becomes an independent, reproducible unit of execution.
  • The Future of Agents: We believe the evolution of Agents will shift from "simulating conversation" to "simulating rational execution." Maintaining a living "State Whiteboard" via reflection loops is far more aligned with the essence of AGI than endlessly stacking chat logs.
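In practice, statelessness means the message list is rebuilt from the knowledge files for every task, never carried over. A minimal sketch, assuming the knowledge tree shown earlier (the builder itself and the file selection are illustrative):

```python
# Sketch of stateless message management: each task rebuilds its prompt
# from persisted knowledge files instead of appending to chat history.
# File paths follow the knowledge tree in this README; the builder is
# illustrative, not MONAD's actual code.
from pathlib import Path

def build_context(knowledge: Path, objective: str) -> list[dict]:
    system_parts = []
    for name in ("user/facts.md", "user/mood.md", "user/goals.md"):
        path = knowledge / name
        if path.is_file():
            system_parts.append(path.read_text())
    return [
        {"role": "system", "content": "\n\n".join(system_parts)},
        {"role": "user", "content": objective},  # fresh context, no prior chat
    ]
```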

🚀 Installation

1. Install via pip (Recommended)

```
pip install monad-core
```

Alternatively, install from source:

```
git clone https://github.com/hscspring/Monad.git
cd Monad
pip install -e .
```

2. Configure your LLM

On your first run, MONAD will initialize its workspace in `~/.monad/`. Update `~/.monad/.env` with your LLM Base URL, API Key, and Model name.

Note: If you don't configure this manually, MONAD will guide you through an interactive setup with connectivity validation on your first launch.
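For orientation, a `.env` file along these lines is what to expect. The variable names below are illustrative only; check the file MONAD generates on first run for the actual keys.

```ini
# ~/.monad/.env — variable names are illustrative, not authoritative;
# use the keys MONAD writes during its interactive setup.
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-...
LLM_MODEL=gpt-4o
```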


💻 Usage

Once installed, you can start the MONAD agent from any directory in your terminal.

Start Web UI (Default)

Launch the modern browser-based interface:

```
monad
```

Interactive Terminal Mode (Classic)

Start the continuous ReAct agent loop in the CLI:

```
monad --cli
```

Feishu (Lark) Bot Mode

  1. Follow the first two steps in the Feishu Bot Guide to create a bot and obtain your APP_ID and APP_SECRET.
  2. Connect MONAD to your Feishu bot via WebSocket:

```
APP_ID=xxx APP_SECRET=yyy monad --feishu
```

Note: Requires `pip install "monad-core[feishu]"` for the `lark-oapi` dependency.

Self-Test

Verify all modules load correctly and the LLM connection is functioning:

```
monad --test
```

Unit Tests

Run the test suite for all tools:

```
python -m pytest tests/ -v
```

Built with pure rational reasoning 💡
