Core execution engine for building AI agent applications — MicroAgent, AgentShell protocol, Cerebellum, and skill system

Project description

agentmatrix-core

Core execution engine for building AI agent applications.

Let LLMs think. Don't make them write JSON.

AgentMatrix separates reasoning from formatting. The large model thinks in natural language. A smaller model (Cerebellum) translates intent into executable parameters. Two models, each doing what they're best at.

Install

pip install agentmatrix-core

Requires Python 3.12+.

Architecture

┌─────────────────────────────────────────────┐
│  App Layer     Your Application             │
├─────────────────────────────────────────────┤
│  Shell Layer   AgentShell Protocol           │
│                (interface you implement)     │
├─────────────────────────────────────────────┤
│  Core Layer    MicroAgent Engine             │
│                (this package)                │
└─────────────────────────────────────────────┘
  • Core Layer — MicroAgent is the execution engine. Pure reasoning loop: think, detect actions, execute, repeat. No I/O, no UI.
  • Shell Layer — AgentShell is the protocol you implement to connect Core to the outside world (LLM clients, prompt templates, session storage, etc.).
  • App Layer — Your application that wires everything together.

This separation means the same core agent behavior runs anywhere — desktop, terminal, or cloud.

Quick Start

1. Implement AgentShell

AgentShell is the interface between the Core engine and your application:

from agentmatrix.core.agent_shell import AgentShell
from agentmatrix.core.micro_agent import MicroAgent

class MyShell(AgentShell):
    # Implement the required methods:
    # - get_llm_client()    → your LLM backend
    # - get_system_prompt() → prompt template
    # - get_session_store() → session persistence
    # - on_action_result()  → handle action outputs
    # - on_agent_message()  → handle agent responses
    ...
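
For illustration, a fleshed-out shell might look like the sketch below. The method signatures and the in-memory session store are assumptions made for the example, not the package's exact protocol; check the AgentShell source for the real interface.

class InMemorySessionStore:
    """Hypothetical store that keeps session messages in a dict."""

    def __init__(self):
        self._sessions = {}

    def load(self, session_id):
        return self._sessions.get(session_id, [])

    def save(self, session_id, messages):
        self._sessions[session_id] = messages


class MyShell(AgentShell):
    # Signatures below are illustrative assumptions, not the real protocol.

    def __init__(self, llm_client):
        self._llm = llm_client
        self._store = InMemorySessionStore()

    def get_llm_client(self):
        return self._llm  # whatever client object your LLM backend expects

    def get_system_prompt(self):
        return "You are a helpful agent. Think in plain language."

    def get_session_store(self):
        return self._store

    def on_action_result(self, action_name, result):
        print(f"[action] {action_name} -> {result}")

    def on_agent_message(self, message):
        print(f"[agent] {message}")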

2. Create a MicroAgent and Run

agent = MicroAgent(
    name="my-agent",
    shell=my_shell,
    skills=[FileSkill(), ShellSkill()],
)

# Start the reasoning loop
await agent.run("List files in the current directory")
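
Since run() is a coroutine, a script needs an event loop around it. A minimal entry point might look like this (the LLM client and the skill classes are placeholders; their import paths are not verified against the package):

import asyncio

async def main():
    shell = MyShell(llm_client=...)  # plug in your LLM backend here
    agent = MicroAgent(
        name="my-agent",
        shell=shell,
        # FileSkill and ShellSkill as above; import paths depend on the package layout
        skills=[FileSkill(), ShellSkill()],
    )
    await agent.run("List files in the current directory")

if __name__ == "__main__":
    asyncio.run(main())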

3. See a Working Example

A complete terminal agent (~200 lines) is available in the repository:

git clone https://github.com/webdkt/agentmatrix.git
cd agentmatrix/tutorial/cli-agent

export OPENAI_API_KEY=sk-xxx
python main.py -m openai:gpt-4o

Key Modules

Module              Description
core.micro_agent    The execution engine — think, detect actions, execute, repeat
core.agent_shell    Shell protocol — implement this for your app
core.cerebellum     Intent-to-action parameter negotiation
core.action         Action registry and execution
core.session_store  Session persistence interface
core.signals        Event-driven communication (pause, resume, stop)

Key Features

Natural Language Reasoning

The agent's "Brain" reasons entirely in natural language. No JSON output required, no format constraints. A separate "Cerebellum" translates intent into executable parameters.
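
Conceptually, the split can be pictured as below. This is an illustration of the idea, not the package's code, and small_llm.complete() stands in for whatever client the Cerebellum model uses.

import json

def translate_intent(small_llm, intent: str, schema: dict) -> dict:
    # Hypothetical Cerebellum step: a small model maps prose intent
    # to typed parameters, so only this step ever has to emit JSON.
    raw = small_llm.complete(
        "Fill this JSON schema from the intent. Return only JSON.\n"
        f"Schema: {json.dumps(schema)}\nIntent: {intent}"
    )
    return json.loads(raw)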

Pause, Resume, Stop

Any running agent can be paused, resumed, or stopped via signals. State is preserved at safe checkpoints.
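
A sketch of driving this from a supervising task follows; the pause() and resume() method names are hypothetical, since the real API lives in core.signals.

import asyncio

async def supervise(agent):
    task = asyncio.create_task(agent.run("Audit the project tree"))
    await asyncio.sleep(5)
    agent.pause()    # hypothetical: request a pause at the next safe checkpoint
    await asyncio.sleep(1)
    agent.resume()   # hypothetical: continue from the preserved state
    await task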

Context Auto-Compression

When conversation history grows too large, the system automatically compresses it into "Working Notes" — a dynamic state snapshot generated by the LLM. Tasks can run for hours; the context window never overflows.
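
Reduced to a sketch, the idea looks like the following. This is not the package's implementation; llm.complete() stands in for whatever client generates the notes.

def compress_history(llm, messages, max_chars=20_000):
    # Fold an oversized transcript into a single "Working Notes" message.
    if sum(len(m["content"]) for m in messages) <= max_chars:
        return messages
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    notes = llm.complete(
        "Condense this transcript into Working Notes covering goals, "
        "decisions made, open questions, and current state:\n\n" + transcript
    )
    return [{"role": "system", "content": "Working Notes:\n" + notes}]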

Action System

Actions are detected from natural language output via <action_script> blocks. The Cerebellum negotiates parameters with the Brain, handles ambiguity, and executes.
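
Detecting the blocks in model output can be as simple as a regex scan. The tag name comes from the description above; the code around it is only a sketch of the detection step.

import re

ACTION_RE = re.compile(r"<action_script>(.*?)</action_script>", re.DOTALL)

def extract_action_scripts(llm_output: str) -> list[str]:
    # Return the body of every <action_script> block in the output.
    return [m.strip() for m in ACTION_RE.findall(llm_output)]

text = "I'll check the directory first.\n<action_script>list files in .</action_script>"
print(extract_action_scripts(text))  # ['list files in .']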

Skill System

Built-in Python skill mixins:

  • base — Date/time utilities
  • file — File read/write, search
  • shell — Shell command execution

Extend with custom Python skills or Markdown-based procedural knowledge.
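
A custom Python skill might be as small as a class exposing one method. The shape below is inferred from the skills=[...] constructor argument shown in the Quick Start, not from a documented contract.

class WeatherSkill:
    # Hypothetical custom skill exposing one callable action.
    def get_weather(self, city: str) -> str:
        return f"Sunny in {city}"  # stub: call a real weather API here

agent = MicroAgent(name="weather-agent", shell=my_shell,
                   skills=[WeatherSkill()])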

Dependencies

  • pyyaml>=6.0
  • python-dotenv>=1.0.0
  • requests>=2.31.0
  • aiohttp>=3.8.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

agentmatrix_core-0.7.0.1.tar.gz (76.2 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

agentmatrix_core-0.7.0.1-py3-none-any.whl (77.9 kB)

Uploaded Python 3

File details

Details for the file agentmatrix_core-0.7.0.1.tar.gz.

File metadata

  • Download URL: agentmatrix_core-0.7.0.1.tar.gz
  • Upload date:
  • Size: 76.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for agentmatrix_core-0.7.0.1.tar.gz
Algorithm Hash digest
SHA256 feed227f81622afa7529b12a8895c0f39548ce362bf21a5871f95f65aacddb30
MD5 f7474b5a277486140afdd783dda18b0a
BLAKE2b-256 66baa226b7fd01de5bb0039372fd691f7806ae08e620275cfbdd3a8829a0ffd3


Provenance

The following attestation bundles were made for agentmatrix_core-0.7.0.1.tar.gz:

Publisher: publish.yml on webdkt/agentmatrix

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file agentmatrix_core-0.7.0.1-py3-none-any.whl.

File hashes

Hashes for agentmatrix_core-0.7.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 ffa52b2e60337e08ed23db63020e361beedde564d6b26ff672537211eaa89f45
MD5 d562cef0afa26c7b4fcd2b45476ae4d7
BLAKE2b-256 d455fcdbb3bb1ea264d5a96577bbfc95d92b4741f698b4338a9f20c44468a840


Provenance

The following attestation bundles were made for agentmatrix_core-0.7.0.1-py3-none-any.whl:

Publisher: publish.yml on webdkt/agentmatrix

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
