A lightweight personal AI assistant framework
Root Engine
Ultra-lightweight, extensible runtime for personal agents, tool use, and multi-channel automation.
What is Root Engine?
Root Engine is a compact agent runtime designed to be easy to read, easy to extend, and fast to ship. It focuses on the fundamentals: agent loop + tools + skills + memory + channels + scheduling, without burying you in frameworks.
If you want a repo you can actually understand end-to-end, modify confidently, and deploy quickly, that's the point.
Key Features
- Ultra-Lightweight Core: Small, focused agent runtime with clean boundaries between agent logic, tools, and integrations.
- Provider-Driven LLM Support: Plug in popular LLM providers (or any OpenAI-compatible endpoint) via a simple provider registry + config.
- Tool Use + Skills System: Built-in tools and a skills loader so agents can execute actions, call external capabilities, and stay modular.
- Persistent Memory: Optional long-running memory for continuity across sessions.
- Multi-Channel Gateways: Run Root Engine through chat platforms and messaging channels (where supported in this repo).
- Scheduled Tasks (Cron): Run proactive reminders, routines, and agent jobs on a schedule.
- MCP Support: Connect external tool servers using Model Context Protocol, automatically discovered on startup.
- Security Controls: Workspace restrictions and allow-lists to reduce risk when running agents in real environments.
Architecture
At a high level:
- A CLI launches an agent or a gateway
- The agent loop runs LLM ↔ tool execution
- A provider registry resolves LLM routing
- Skills extend capabilities cleanly
- Channels handle inbound/outbound messaging
- Cron/heartbeat enable proactive behavior
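The loop at the heart of this design is small enough to sketch. Below is an illustrative Python version; `call_llm`, the message shapes, and the tool registry are hypothetical stand-ins for exposition, not Root Engine's actual internals:

```python
# Minimal agent-loop sketch: each turn, the model either answers or
# requests a tool; tool results are appended and fed back until the
# model produces a final answer (or we hit a step budget).
from typing import Callable

def run_agent(
    call_llm: Callable[[list], dict],
    tools: dict[str, Callable],
    user_msg: str,
    max_steps: int = 5,
) -> str:
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = call_llm(messages)  # {"tool": name, "args": {...}} or {"content": text}
        if "tool" in reply:
            result = tools[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": str(result)})
        else:
            return reply["content"]
    return "max steps reached"
```

Everything else in the list above (providers, channels, cron) feeds into or out of this loop.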
Install

Requires Python 3.11+.

```shell
pip install root-engine
```

Then run `onboard` to set up your beta access key:

```shell
root-engine onboard
```
Quick Start

Root Engine reads configuration from `~/.root-engine/config.json`.

1) Initialize

```shell
root-engine onboard
```

2) Configure your provider + model

Edit `~/.root-engine/config.json` and set at minimum:
Provider API key (example: OpenRouter)

```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    }
  }
}
```
Default model

```json
{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  }
}
```
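Fragments like the two above live in the same config file; a runtime typically deep-merges user settings over built-in defaults. A minimal sketch of that pattern (the merge semantics here are an assumption for illustration, not Root Engine's documented behavior):

```python
import json
from pathlib import Path

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base, returning a new dict."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def load_config(path: Path = Path.home() / ".root-engine" / "config.json") -> dict:
    """Read the JSON config if present; fall back to an empty dict."""
    return json.loads(path.read_text()) if path.exists() else {}
```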
3) Chat

```shell
root-engine agent
```

Or one-shot:

```shell
root-engine agent -m "Hello!"
```
Chat Apps

Root Engine can run as a gateway for supported chat platforms (tokens/credentials required).

Enable a channel in `~/.root-engine/config.json`, then run:

```shell
root-engine gateway
```
Channel Config Examples

Telegram

```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}
```

Discord

```json
{
  "channels": {
    "discord": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}
```

Slack (Socket Mode)

```json
{
  "channels": {
    "slack": {
      "enabled": true,
      "botToken": "xoxb-...",
      "appToken": "xapp-...",
      "groupPolicy": "mention"
    }
  }
}
```
Configuration
Config file: `~/.root-engine/config.json`
Providers
Root Engine uses a provider registry to route models and normalize configuration.
Common provider entries include:

- openrouter
- anthropic
- openai
- deepseek
- groq
- gemini
- minimax
- dashscope
- moonshot
- zhipu
- vllm (local / OpenAI-compatible)
- custom (any OpenAI-compatible API base)

Exact available providers depend on what's included in this repo version.
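One common way such a registry routes requests is by the vendor prefix in the model id (e.g. `anthropic/claude-opus-4-5`). The fallback order below is purely illustrative, not Root Engine's documented resolution logic:

```python
# Illustrative provider resolution: pick a configured provider for a model id.
# The prefix convention and fallback chain are assumptions for exposition.
def resolve_provider(model: str, providers: dict) -> str:
    prefix = model.split("/", 1)[0]
    if prefix in providers:
        return prefix  # direct match on the vendor prefix
    if "openrouter" in providers:
        return "openrouter"  # OpenRouter accepts vendor-prefixed model ids
    if "custom" in providers:
        return "custom"  # any OpenAI-compatible endpoint
    raise KeyError(f"no provider configured for model {model!r}")
```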
Custom Provider (Any OpenAI-compatible API)

```json
{
  "providers": {
    "custom": {
      "apiKey": "your-api-key",
      "apiBase": "https://api.your-provider.com/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "your-model-name"
    }
  }
}
```
vLLM (local / OpenAI-compatible)

```json
{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}
```
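Both entries work because the endpoint only needs to speak the standard OpenAI chat API. For reference, here is a sketch of the request such a client sends to `{apiBase}/chat/completions`; the helper name and use of `urllib` are illustrative, not Root Engine's client code:

```python
import json
import urllib.request

def build_chat_request(api_base: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for an OpenAI-compatible /chat/completions endpoint."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{api_base.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
        method="POST",
    )
```

Sending it with `urllib.request.urlopen(req)` against a running vLLM server returns the usual `choices[0].message.content` response shape.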
MCP (Model Context Protocol)
Root Engine can connect to MCP tool servers and expose them as native tools.
Example config:

```json
{
  "tools": {
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
      }
    }
  }
}
```
Supported transport modes:

- Stdio: `command` + `args`
- HTTP: `url` (remote endpoint)
MCP tools are discovered and registered on startup.
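For stdio servers, discovery amounts to spawning the configured command and exchanging newline-delimited JSON-RPC 2.0 messages, starting with `initialize`. A sketch of the first message a client sends (the client info values are illustrative, and the protocol version should be checked against the current MCP spec):

```python
import json

def mcp_initialize_request(request_id: int = 1) -> bytes:
    """JSON-RPC 'initialize' message as sent over MCP's stdio transport
    (one JSON object per line). Field values here are illustrative."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }
    return (json.dumps(msg) + "\n").encode()
```

After the handshake, the client calls `tools/list` to enumerate the server's tools and registers them alongside the built-ins.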
Security
For safer local/prod use, restrict tool access to your workspace:
```json
{
  "tools": {
    "restrictToWorkspace": true
  }
}
```
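The essence of a workspace restriction is a containment check performed after resolving symlinks and `..` segments. A minimal sketch of the idea (not Root Engine's actual enforcement code):

```python
from pathlib import Path

def is_within_workspace(path: str, workspace: str) -> bool:
    """True only if path, resolved against the workspace, stays inside it.
    Resolving first defeats '../' traversal and symlink escapes."""
    root = Path(workspace).resolve()
    return Path(workspace, path).resolve().is_relative_to(root)
```

Checking the raw string instead of the resolved path is the classic mistake this pattern avoids.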
And restrict who can interact on channels:
```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"]
    }
  }
}
```
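A deny-by-default allow-list check is only a few lines. The policy sketched below (an absent or empty `allowFrom` blocks everyone) is an assumption; confirm how the repo version treats a missing list before relying on it:

```python
def is_allowed(sender_id: str, channel_cfg: dict) -> bool:
    """Deny by default: only ids listed in allowFrom may interact.
    Ids are compared as strings since chat platforms vary in type."""
    allowed = {str(x) for x in channel_cfg.get("allowFrom", [])}
    return str(sender_id) in allowed
```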
CLI Reference
| Command | Description |
|---|---|
| `root-engine onboard` | Initialize config & workspace |
| `root-engine agent` | Interactive agent chat |
| `root-engine agent -m "..."` | One-shot message |
| `root-engine agent --no-markdown` | Plain-text replies |
| `root-engine agent --logs` | Show runtime logs |
| `root-engine gateway` | Start multi-channel gateway |
| `root-engine status` | Show runtime/config status |
| `root-engine channels status` | Show channel status |
| `root-engine cron add ...` | Add scheduled job |
| `root-engine cron list` | List scheduled jobs |
| `root-engine cron remove <id>` | Remove scheduled job |

Interactive mode exits: `exit`, `quit`, `/exit`, `/quit`, `:q`, or Ctrl+D.
Scheduled Tasks (Cron)
```shell
# Add a job
root-engine cron add --name "daily" --message "Good morning!" --cron "0 9 * * *"
root-engine cron add --name "hourly" --message "Check status" --every 3600

# List jobs
root-engine cron list

# Remove a job
root-engine cron remove <job_id>
```
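A `--cron` expression has five fields: minute, hour, day-of-month, month, day-of-week. A toy matcher for the simple forms used above (`*` and plain numbers only; real cron syntax also supports ranges, lists, and steps, and this is not Root Engine's scheduler):

```python
from datetime import datetime

def cron_matches(expr: str, now: datetime) -> bool:
    """Check a 5-field cron expression against a timestamp.
    Supports only '*' and plain integers in each field."""
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError("expected 5 fields: min hour day month weekday")
    # Cron weekday: 0 = Sunday, so map ISO weekday (Mon=1..Sun=7) via mod 7.
    values = [now.minute, now.hour, now.day, now.month, now.isoweekday() % 7]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))
```

So `"0 9 * * *"` fires once a day at 09:00, which is exactly the "daily" job above.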
Docker
Compose

```shell
docker compose run --rm root-engine-cli onboard
vim ~/.root-engine/config.json
docker compose up -d root-engine-gateway
docker compose run --rm root-engine-cli agent -m "Hello!"
docker compose logs -f root-engine-gateway
docker compose down
```

Plain Docker

```shell
docker build -t root-engine .
docker run -v ~/.root-engine:/root/.root-engine --rm root-engine onboard
vim ~/.root-engine/config.json
docker run -v ~/.root-engine:/root/.root-engine -p 18790:18790 root-engine gateway
docker run -v ~/.root-engine:/root/.root-engine --rm root-engine agent -m "Hello!"
docker run -v ~/.root-engine:/root/.root-engine --rm root-engine status
```
Project Structure
```text
root_engine/
├── agent/            # Core agent logic
│   ├── loop.py       # Agent loop (LLM ↔ tool execution)
│   ├── context.py    # Prompt builder
│   ├── memory.py     # Persistent memory
│   ├── skills.py     # Skills loader
│   ├── subagent.py   # Background task execution
│   └── tools/        # Built-in tools
├── skills/           # Bundled skills
├── channels/         # Chat channel integrations
├── bus/              # Message routing
├── cron/             # Scheduled tasks
├── heartbeat/        # Proactive wake-up
├── providers/        # LLM providers
├── session/          # Conversation sessions
├── config/           # Configuration schema + loader
└── cli/              # CLI commands
```
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file root_engine-0.1.6.tar.gz.
File metadata
- Download URL: root_engine-0.1.6.tar.gz
- Upload date:
- Size: 102.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `888fbcad7ca214f75653644d3eff7be670c6c9b7e8b8e03b9d032d0f12f4d513` |
| MD5 | `bebae993780ffaf5d45c4d64e7aedc35` |
| BLAKE2b-256 | `01405ed6576e11c7fa8d1526c8aff973751981cfcc78c3e6cdd7c78e0bfc907c` |
File details
Details for the file root_engine-0.1.6-py3-none-any.whl.
File metadata
- Download URL: root_engine-0.1.6-py3-none-any.whl
- Upload date:
- Size: 133.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `8b97e44ca2f381327bbbc57228c48d579587a2173fd573b3a066271bfc143fde` |
| MD5 | `1fe268bca5b7ee7c6d2922aa189d7d8a` |
| BLAKE2b-256 | `feb16cca23609a67e0e604cef72c23a59ecd43e4f0bd332bd860ba1bd19027b2` |