
Core execution engine for building AI agent applications — MicroAgent, AgentShell protocol, Cerebellum, and skill system


agentmatrix-core

Core execution engine for building AI agent applications.

Let LLMs think. Don't make them write JSON.

AgentMatrix separates reasoning from formatting. The large model thinks in natural language. A smaller model (Cerebellum) translates intent into executable parameters. Two models, each doing what they're best at.

Install

pip install agentmatrix-core

Requires Python 3.12+.

Architecture

┌─────────────────────────────────────────────┐
│  App Layer     Your Application             │
├─────────────────────────────────────────────┤
│  Shell Layer   AgentShell Protocol           │
│                (interface you implement)     │
├─────────────────────────────────────────────┤
│  Core Layer    MicroAgent Engine             │
│                (this package)                │
└─────────────────────────────────────────────┘
  • Core Layer — MicroAgent is the execution engine. Pure reasoning loop: think, detect actions, execute, repeat. No I/O, no UI.
  • Shell Layer — AgentShell is the protocol you implement to connect Core to the outside world (LLM clients, prompt templates, session storage, etc.).
  • App Layer — Your application that wires everything together.

This separation means the same core agent behavior runs anywhere — desktop, terminal, or cloud.

Quick Start

1. Implement AgentShell

AgentShell is the interface between the Core engine and your application:

from agentmatrix.core.agent_shell import AgentShell
from agentmatrix.core.micro_agent import MicroAgent

class MyShell(AgentShell):
    # Implement the required methods:
    # - get_llm_client()    → your LLM backend
    # - get_system_prompt() → prompt template
    # - get_session_store() → session persistence
    # - on_action_result()  → handle action outputs
    # - on_agent_message()  → handle agent responses
    ...
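As an illustration of the shape these methods take, here is a self-contained sketch that mirrors the protocol with plain Python. The `EchoLLM` stub and all method bodies are hypothetical; the real `AgentShell` signatures may differ.

```python
# Hypothetical sketch of an AgentShell implementation; method names mirror
# the comments above, but the real signatures may differ.

class EchoLLM:
    """Stand-in LLM client that just echoes the prompt (illustrative only)."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class MyShell:  # in real code: class MyShell(AgentShell)
    def __init__(self):
        self._sessions: dict[str, list[str]] = {}

    def get_llm_client(self):
        # Return whatever LLM backend your app talks to.
        return EchoLLM()

    def get_system_prompt(self) -> str:
        # Prompt template the Core engine will use for the Brain.
        return "You are a helpful agent. Think step by step."

    def get_session_store(self) -> dict:
        # Session persistence; a dict stands in for a real store here.
        return self._sessions

    def on_action_result(self, action: str, result: str) -> None:
        # Handle action outputs (log, display, forward, etc.).
        print(f"[action] {action} -> {result}")

    def on_agent_message(self, message: str) -> None:
        # Handle agent responses.
        print(f"[agent] {message}")
```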

2. Create a MicroAgent and Run

agent = MicroAgent(
    name="my-agent",
    shell=my_shell,
    skills=["file", "web-search"],
)

# Start the reasoning loop (await inside an async function;
# at top level, wrap with asyncio.run(agent.run(...)))
await agent.run("List files in the current directory")

3. See a Working Example

A complete terminal agent (~200 lines) is available in the repository:

git clone https://github.com/webdkt/agentmatrix.git
cd agentmatrix/tutorial/cli-agent

export OPENAI_API_KEY=sk-xxx
python main.py -m https://endpoint-url:deepseek-v4-pro

Key Modules

Module              Description
core.micro_agent    The execution engine — think, detect actions, execute, repeat
core.agent_shell    Shell protocol — implement this for your app
core.cerebellum     Intent-to-action parameter negotiation
core.action         Action registry and execution
core.session_store  Session persistence interface
core.signals        Event-driven communication (pause, resume, stop)

Key Features

Natural Language Reasoning

The agent's "Brain" reasons entirely in natural language. No JSON output required, no format constraints. A separate "Cerebellum" translates intent into executable parameters.

Pause, Resume, Stop

Any running agent can be paused, resumed, or stopped via signals. State is preserved at safe checkpoints.

Context Auto-Compression

When conversation history grows too large, the system automatically compresses it into "Working Notes" — a dynamic state snapshot generated by the LLM. Tasks can run for hours; the context window never overflows.
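The compression flow can be sketched as follows. This is a minimal, hypothetical illustration: in AgentMatrix the Working Notes are generated by the LLM, so `summarize` below is a stand-in, and the character budget stands in for a real token budget.

```python
# Illustrative sketch of context auto-compression. In AgentMatrix the summary
# ("Working Notes") is generated by the LLM; here a stand-in summarizer is
# used so the flow is runnable.

MAX_CHARS = 200  # stand-in for a token budget

def summarize(messages: list[str]) -> str:
    # Placeholder for an LLM call that distills history into Working Notes.
    return f"[Working Notes: {len(messages)} earlier messages compressed]"

def maybe_compress(history: list[str], keep_recent: int = 2) -> list[str]:
    """If history exceeds the budget, fold older messages into one note."""
    if sum(len(m) for m in history) <= MAX_CHARS:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent
```

Because the note replaces an unbounded prefix of the history, repeated compression keeps the context bounded no matter how long the task runs.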

Action System

Actions are detected from natural language output via <action_script> blocks. The Cerebellum negotiates parameters with the Brain, handles ambiguity, and executes.
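The exact block grammar is not documented here, but the detection step can be illustrated with a simple extractor. This is a hedged sketch: the real logic lives in `core.action` / `core.cerebellum`, and the `file.list(...)` payload is hypothetical.

```python
import re

# Illustrative only: detecting <action_script> blocks in free-form LLM output.
# The real detection logic lives in core.action / core.cerebellum and the
# exact block grammar may differ.

ACTION_RE = re.compile(r"<action_script>(.*?)</action_script>", re.DOTALL)

def extract_actions(text: str) -> list[str]:
    """Return the contents of every <action_script> block in the text."""
    return [m.strip() for m in ACTION_RE.findall(text)]
```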

Skill System

Built-in Python skill mixins:

  • base — Date/time utilities
  • file — File read/write, search
  • shell — Shell command execution

Extend with custom Python skills or Markdown-based procedural knowledge.
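A custom skill might look like the following sketch. The mixin shape mirrors the built-in skills listed above, but the class names, the `action_` prefix, and the registration mechanism are all assumptions; consult the package for the real convention.

```python
import datetime

# Hypothetical sketch of a custom skill mixin. The mixin pattern mirrors the
# built-in skills (base/file/shell); the real registration mechanism in
# agentmatrix may differ.

class ClockSkill:
    """Hypothetical skill exposing a current-time action."""
    skill_name = "clock"

    def action_now(self, fmt: str = "%Y-%m-%d %H:%M") -> str:
        """Return the current time formatted with strftime."""
        return datetime.datetime.now().strftime(fmt)

class MyAgentSkills(ClockSkill):
    """An agent's skill set composed from one or more mixins."""
    pass
```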

Dependencies

  • pyyaml>=6.0
  • python-dotenv>=1.0.0
  • requests>=2.31.0
  • aiohttp>=3.8.0
