
Lightweight agent runtime library for autonomous edge agents


🦙 OpenHoof v2.0

Local AI agent runtime library with FunctionGemma training

OpenHoof is a standalone, extensible library for running AI agents that persist across sessions, respond to events, and coordinate with each other. Built to work with LlamaFarm for local inference.

┌─────────────────────────────────────────────────────────────┐
│                     YOUR APPLICATION                        │
│              (Drone Control, Medical, etc.)                 │
├─────────────────────────────────────────────────────────────┤
│                          │                                  │
│                    Agent Runtime                            │
│                          ▼                                  │
├─────────────────────────────────────────────────────────────┤
│                      O P E N H O O F                        │
│  ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐         │
│  │  Agent  │  │  Soul   │  │ Memory  │  │ Tools   │         │
│  │  Loop   │  │ Loading │  │ Recall  │  │Registry │         │
│  └────┬────┘  └────┬────┘  └────┬────┘  └────┬────┘         │
│       │            │            │            │              │
│       └────────────┴─────┬──────┴────────────┘              │
│                          │                                  │
│                    LlamaFarm                                │
│                  (local inference)                          │
└─────────────────────────────────────────────────────────────┘

✨ What's New in v2.0

Complete rebuild as a standalone library (was a server framework in v1.x):

  • 🎯 Agent Runtime – Event loop, heartbeat, exit conditions
  • 🧠 Context Files – SOUL.md, MEMORY.md, USER.md, TOOLS.md as first-class citizens
  • 💾 DDIL – Store-and-forward for offline operation (Denied/Degraded/Intermittent/Limited networks)
  • 🔌 LlamaFarm Integration – Tools + prompts passed through in API calls
  • 📡 Training Data Capture – Every tool call logged for fine-tuning
  • 🎓 FunctionGemma Pipeline – Auto-generate training data, fine-tune tool routing model (GOLD!)
  • 📱 Runs Anywhere – Python (now) → Kotlin (Android) → Rust (cross-platform)

🚀 Quick Start

Installation

pip install -e git+https://github.com/llama-farm/openhoof.git@feat/microclaw-rebuild#egg=openhoof

Basic Usage

import time

from openhoof import Agent, Soul, Memory

# Define your tools (OpenAI-compatible format)
tools = [
    {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            },
            "required": ["location"]
        }
    }
]

# Tool executor (your implementation)
def execute_tool(tool_name: str, params: dict) -> dict:
    if tool_name == "get_weather":
        return {"temp": 72, "condition": "sunny"}
    return {"error": "Unknown tool"}

# Create agent
agent = Agent(
    soul="SOUL.md",  # Your agent's identity + mission
    memory="MEMORY.md",  # Long-term recall
    tools=tools,
    executor=execute_tool,
    llamafarm_config="llamafarm.yaml",  # LlamaFarm models
    heartbeat_interval=30.0
)

# Exit conditions
agent.on_exit("timeout", lambda: time.time() - agent.start_time > 1800)

# Heartbeat callbacks
agent.on_heartbeat(lambda: print("💓 Still alive"))

# Run agent
agent.run()

📖 Core Concepts

Context Files

OpenHoof agents are defined by markdown files:

my-agent/
├── SOUL.md          # Identity, mission, style, constraints
├── MEMORY.md        # Long-term recall, persistent context
├── USER.md          # Who the agent serves, preferences
└── TOOLS.md         # Available capabilities, tool docs
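Conceptually, loading these files amounts to concatenating whichever ones exist into a single system prompt. A minimal sketch of the idea (not OpenHoof's actual loader):

```python
from pathlib import Path

def build_system_prompt(agent_dir: str) -> str:
    """Concatenate an agent's context files into one system prompt.

    Missing files are skipped, so an agent can start with only a
    SOUL.md and accumulate MEMORY.md / USER.md / TOOLS.md over time.
    """
    parts = []
    for name in ("SOUL.md", "MEMORY.md", "USER.md", "TOOLS.md"):
        path = Path(agent_dir) / name
        if path.exists():
            parts.append(path.read_text(encoding="utf-8"))
    return "\n\n".join(parts)
```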

SOUL.md example:

# SOUL.md - Weather Agent

You are a helpful weather assistant AI.

**Name:** WeatherBot
**Emoji:** ☀️

## Mission
Provide accurate, timely weather information to users.

## Style
- Be concise and factual
- Always include units (°F, mph, etc.)
- Warn about severe weather

Tool Schema Format

OpenHoof uses OpenAI-compatible tool schemas (same format Ace uses):

{
    "name": "drone_takeoff",
    "description": "Take off and hover at specified altitude",
    "parameters": {
        "type": "object",
        "properties": {
            "alt_m": {"type": "number", "description": "Altitude in meters"},
        },
        "required": ["alt_m"]
    }
}
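Before executing a routed tool call, its arguments should be checked against the schema. A minimal validator covering the subset of JSON Schema used above (illustrative only, not an OpenHoof API):

```python
# Map the JSON Schema type names used above to Python types.
_TYPES = {"string": str, "number": (int, float), "object": dict, "boolean": bool}

def validate_args(schema: dict, args: dict) -> list:
    """Return a list of problems with `args` against a tool schema.

    An empty list means the arguments are valid.
    """
    errors = []
    params = schema.get("parameters", {})
    props = params.get("properties", {})
    for name in params.get("required", []):
        if name not in args:
            errors.append(f"missing required argument: {name}")
    for name, value in args.items():
        if name not in props:
            errors.append(f"unknown argument: {name}")
        elif not isinstance(value, _TYPES.get(props[name].get("type"), object)):
            errors.append(f"wrong type for {name}")
    return errors
```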

Heartbeat System

Agents run a heartbeat loop every N seconds:

# Check exit conditions (defaults keep conditions false until telemetry arrives)
agent.on_exit("battery_low", lambda: agent.custom.get("battery", 100) < 20)
agent.on_exit("timeout", lambda: time.time() - agent.start_time > 1800)

# Custom heartbeat actions
def heartbeat():
    battery = get_battery()
    agent.custom["battery"] = battery
    if battery < 30:
        print("⚠️ Low battery!")

agent.on_heartbeat(heartbeat)
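Under the hood, a heartbeat loop of this kind reduces to a poll-and-check cycle: fire every callback, then evaluate exit conditions. A toy sketch (not OpenHoof's actual implementation; the injectable `sleep` just makes it testable):

```python
import time

def heartbeat_loop(heartbeats, exit_conditions, interval=30.0, sleep=time.sleep):
    """Run every heartbeat callback, then evaluate exit conditions.

    The name of the first condition that returns True becomes the
    exit reason; otherwise the loop sleeps and repeats.
    """
    while True:
        for callback in heartbeats:
            callback()
        for reason, condition in exit_conditions.items():
            if condition():
                return reason
        sleep(interval)
```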

DDIL (Store-and-Forward)

When network is unavailable, agents buffer data locally:

# Store data when offline
agent.ddil.store("telemetry", {
    "lat": 41.8781,
    "lon": -87.6298,
    "battery": 85
})

# Sync when network returns
agent.ddil.sync_to_server("http://gateway.local")
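The store-and-forward pattern behind `agent.ddil` can be sketched as a disk-backed JSONL queue that drains in order once a sender is available. A hypothetical implementation, not OpenHoof's actual one:

```python
import json
from pathlib import Path

class StoreAndForwardBuffer:
    """Append records to a local JSONL file while offline,
    then drain them in order when the network returns."""

    def __init__(self, path: str):
        self.path = Path(path)

    def store(self, topic: str, payload: dict) -> None:
        record = {"topic": topic, "payload": payload}
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    def drain(self, send) -> int:
        """Call `send(record)` for each buffered record in order.

        Only truncates the buffer after every record was sent, so a
        network failure (an exception from `send`) leaves data intact.
        Returns the number of records sent.
        """
        if not self.path.exists():
            return 0
        lines = self.path.read_text(encoding="utf-8").splitlines()
        records = [json.loads(line) for line in lines if line]
        for record in records:
            send(record)
        self.path.write_text("")
        return len(records)
```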

LlamaFarm Integration

Configure models in llamafarm.yaml:

endpoint: "http://localhost:8765/v1"

models:
  router:
    model: "functiongemma:270m"  # Fast tool routing
    temperature: 0.1
  
  reasoning:
    model: "qwen2.5:8b"  # Agent reasoning
    temperature: 0.7
  
  fallback:
    model: "gpt-4o-mini"  # Cloud fallback

Use in agent:

# Reason about a situation (can trigger tool calls)
response = agent.reason("Should I continue if battery is 25%?")
print(response['content'])
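Choosing which configured model serves a request can be sketched as a small lookup over the parsed YAML, defaulting to the `fallback` entry. Illustrative only; OpenHoof's actual routing logic may differ:

```python
# llamafarm.yaml parsed into a dict (shape taken from the config above)
CONFIG = {
    "endpoint": "http://localhost:8765/v1",
    "models": {
        "router": {"model": "functiongemma:270m", "temperature": 0.1},
        "reasoning": {"model": "qwen2.5:8b", "temperature": 0.7},
        "fallback": {"model": "gpt-4o-mini"},
    },
}

def pick_model(config: dict, role: str) -> dict:
    """Return the model entry for a role ('router' for tool routing,
    'reasoning' for open-ended thought), falling back to the
    'fallback' entry when the role is absent."""
    return config["models"].get(role, config["models"]["fallback"])
```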

🎓 FunctionGemma Training Pipeline (THE GOLD!)

OpenHoof includes an automated training pipeline for fine-tuning FunctionGemma on your tool calling patterns.

How It Works

  1. Data Collection – Every tool call (input → tool selection → result) logged as training data
  2. Synthetic Generation – Auto-generate diverse examples for each tool
  3. LoRA Fine-tuning – Train FunctionGemma-270M on your tools (<300ms routing)
  4. GGUF Export – Export trained model for deployment
  5. Hot-swap – Update LlamaFarm with new model
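Step 2 can be illustrated with a toy generator that fills message templates with sample argument values for a single-parameter tool. The real pipeline presumably drives this with an LLM; everything here is a hypothetical stand-in:

```python
import random

def synth_examples(tool: dict, templates: list, values: list, n: int = 5) -> list:
    """Generate n synthetic training records for one single-parameter
    tool by filling message templates with sample argument values."""
    param = next(iter(tool["parameters"]["properties"]))
    records = []
    for _ in range(n):
        value = random.choice(values)
        records.append({
            "input": {
                "user_message": random.choice(templates).format(value=value),
                "tools": [tool["name"]],
            },
            "output": {
                "tool_calls": [{"name": tool["name"],
                                "arguments": {param: value}}],
            },
            "metadata": {"source": "synthetic"},
        })
    return records
```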

Usage

# Check training data status
python -m openhoof.training.pipeline status

# Generate synthetic training data
python -m openhoof.training.pipeline generate --count 100

# Run full training pipeline
python -m openhoof.training.pipeline run

# Export data for inspection
python -m openhoof.training.pipeline export

Training Data Format

{
  "input": {
    "user_message": "Check the weather in Chicago",
    "tools": ["get_weather", "set_reminder", "search_web"]
  },
  "output": {
    "tool_calls": [
      {"name": "get_weather", "arguments": {"location": "Chicago"}}
    ]
  },
  "metadata": {
    "source": "live_usage",
    "timestamp": "2026-02-20T15:00:00"
  }
}

This is logged automatically by TrainingDataCapture on every tool call.
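Records in this shape convert straightforwardly into the chat-style `messages` format most fine-tuning stacks expect. A hedged sketch, not the pipeline's actual exporter:

```python
import json

def to_chat_example(record: dict) -> dict:
    """Convert one captured record into a chat-style fine-tuning
    example: tool list as system context, user message, and the
    expected tool calls serialized as the assistant turn."""
    return {
        "messages": [
            {"role": "system",
             "content": "Available tools: " + ", ".join(record["input"]["tools"])},
            {"role": "user", "content": record["input"]["user_message"]},
            {"role": "assistant",
             "content": json.dumps(record["output"]["tool_calls"])},
        ]
    }
```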


๐Ÿ—๏ธ Project Structure

openhoof/
├── openhoof/                 # Python library
│   ├── agent.py              # Core Agent class
│   ├── soul.py               # SOUL.md loading
│   ├── memory.py             # MEMORY.md + semantic search
│   ├── heartbeat.py          # Heartbeat + exit conditions
│   ├── events.py             # Event queue
│   ├── ddil.py               # Store-and-forward buffer
│   ├── training.py           # Training data capture
│   ├── models.py             # LlamaFarm integration
│   ├── tools/                # Tool base classes + registry
│   │   ├── base.py           # Tool base class
│   │   ├── registry.py       # ToolRegistry
│   │   └── builtin/          # Built-in tools
│   └── tool_registry.py      # Simple registry (for basic use)
├── training/                 # FunctionGemma pipeline (THE GOLD!)
│   ├── pipeline.py           # Training pipeline orchestration
│   └── train_tool_router.py  # LoRA fine-tuning script
├── examples/                 # Example agents
├── tests/                    # Unit tests
└── llamafarm.yaml            # LlamaFarm config

🔧 Tool Schema (What Ace Uses)

OpenHoof defines a tool schema format that is 100% OpenAI-compatible. Ace uses this format (the schema format only, not the library itself):

DRONE_TOOLS = [
    {
        "name": "drone_takeoff",
        "description": "Take off and hover at specified altitude",
        "parameters": {
            "type": "object",
            "properties": {
                "alt_m": {"type": "number", "default": 15.0}
            },
            "required": []
        }
    },
    {
        "name": "drone_move",
        "description": "Move relative to current position",
        "parameters": {
            "type": "object",
            "properties": {
                "north_m": {"type": "number", "default": 0.0},
                "east_m": {"type": "number", "default": 0.0},
                "up_m": {"type": "number", "default": 0.0},
                "yaw_deg": {"type": "number", "default": 0.0}
            },
            "required": []
        }
    }
]

This format works with:

  • FunctionGemma fine-tuning
  • LlamaFarm tool calling
  • OpenAI API (if using cloud fallback)

📦 Installation & Development

# Clone the repo
git clone https://github.com/llama-farm/openhoof.git
cd openhoof

# Install in development mode
pip install -e .

# Run tests
pytest tests/

# Generate synthetic training data
python -m openhoof.training.pipeline generate --count 100

# Train FunctionGemma
python -m openhoof.training.pipeline run

🎯 Example: Drone Agent

from openhoof import Agent
from my_drone_tools import DRONE_TOOLS, DroneToolExecutor

agent = Agent(
    soul="SOUL.md",
    memory="MEMORY.md",
    tools=DRONE_TOOLS,
    executor=DroneToolExecutor(),
    heartbeat_interval=30.0
)

# Exit on battery low or geofence breach (defaults prevent firing before the first heartbeat)
agent.on_exit("battery_low", lambda: agent.custom.get("battery", 100) < 20)
agent.on_exit("geofence", lambda: not agent.custom.get("in_bounds", True))

# Sync telemetry on heartbeat
def heartbeat():
    telemetry = get_telemetry()
    agent.custom["battery"] = telemetry.battery
    agent.custom["in_bounds"] = telemetry.in_geofence
    
    # Buffer telemetry for DDIL
    agent.ddil.store("telemetry", telemetry.to_dict())

agent.on_heartbeat(heartbeat)

# Run agent
agent.run()

🔄 Migration from v1.x

v1.x was a server (FastAPI + WebSockets + UI)
v2.0 is a library (standalone agent runtime)

If you were using v1.x:

  • Server features → Moved to separate project (TBD)
  • Agent runtime → Now a library you import
  • Tool schemas → 100% compatible, no changes needed
  • Training pipeline → Still here, improved!

📜 License

Apache 2.0


๐Ÿ™ Acknowledgments

  • LlamaFarm โ€” Local LLM inference
  • Built with โค๏ธ for anyone who needs reliable local AI agents
  • Special thanks to Ace (drone agent) for validating the architecture

Ready to build agents that kick into action? 🦙

No llamas were harmed in the making of this library. Several were bedazzled.


File details

Details for the file openhoof-2.0.0.tar.gz.

File metadata

  • Download URL: openhoof-2.0.0.tar.gz
  • Upload date:
  • Size: 855.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for openhoof-2.0.0.tar.gz
Algorithm Hash digest
SHA256 603a5a66a43c4b34f53e62c94a9830075066cf070ba2104f721b4440fd44dda0
MD5 9f66c6515ae433be05cf25b81b0d66c1
BLAKE2b-256 40d0427e07d74201ad6e1329fd70e8a38ae791e7836680a08a9e3562b29ecb86

See more details on using hashes here.

