AI-native cron task runner for per-project scheduled prompts and commands.

kage 影 - Autonomous AI Project Agent

kage is an ultra-lightweight, OS-native execution layer for AI agents. By leveraging standard schedulers like cron and launchd, it runs official AI CLIs (gemini, claude, codex, opencode, copilot, etc.) in headless mode with zero background overhead. You can install it on your work PC, define tasks in Markdown inside your project repository, and leave it running overnight. By morning, your AI agent has finished the work for you, delivering documented results while you were away.

Go to sleep. Wake up to results. — kage runs your AI agents overnight, so you start every morning with answers, not questions.

Design Philosophy

kage is built to be a thin, transparent, and resource-efficient execution layer.

  • OS Native: Does not run a persistent background daemon. It leverages cron (Linux) and launchd (macOS) to wake up, execute tasks, and exit. Zero memory footprint when idle.
  • Headless CLI Mode: Integrates directly with official AI CLIs (gemini, claude, opencode, copilot, etc.) in their standard headless modes, without relying on unofficial or unstable internal APIs.
  • Stateless & Transparent: Every execution is logged, and state is managed simply via SQLite and Markdown files.
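
In practice, "OS native" means the system scheduler does the waking. On Linux this amounts to a single crontab entry along these lines (illustrative only; the exact entry, binary path, and log location that kage installs may differ):

```
# Illustrative crontab entry: wake up, run due tasks, exit
* * * * * /usr/local/bin/kage run >> ~/.kage/cron.log 2>&1
```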

Dashboard

Execution Logs · Settings & Tasks (screenshots)

Features

  • Autonomous Agent Logic: Automatically decomposes tasks into GFM checklists and tracks progress.
  • Persistent Memory: Stores task state in .kage/memory/ to maintain context across runs.
  • Lightweight Execution: Leverages OS-native schedulers. Zero background overhead.
  • Flexible Execution: Supports AI prompt execution, shell commands, and custom scripts.
  • Advanced Workflow Controls:
    • Execution Modes: continuous, once, autostop.
    • Concurrency Policy: allow, forbid (skip if running), replace (kill old).
    • Time Windows: Restrict execution using allowed_hours: "9-17" or denied_hours: "12".
  • Markdown-First: Define tasks using simple Markdown files with YAML front matter.
  • Layered Configuration: .kage/config.local.toml > .kage/config.toml > ~/.kage/config.toml > defaults.
  • Connectors: Bi-directional integration with external services like Discord and Slack.
  • Web Dashboard: Execution history, task management, and AI chat — all in one place.

Check out the Technical Architecture for more details.

Installation

curl -sSL https://raw.githubusercontent.com/igtm/kage/main/install.sh | bash

Quick Start

cd your-project
kage init         # Initialize kage in the current directory
# Edit .kage/tasks/*.md to define your tasks
kage ui           # Open the web dashboard

Use Cases

🌙 Overnight Tech Evaluation (OCR Model Benchmark)

The killer use case: go to sleep, wake up with a complete technology evaluation report.

Create a single task that, on every cron run, picks the next untested OCR model, implements it, runs it against your test PDFs, and records the accuracy. By morning, you have a ranked comparison.

.kage/tasks/ocr_benchmark.md:

---
name: OCR Model Benchmark
cron: "0 * * * *"
provider: claude
mode: autostop
denied_hours: "9-23"
---

# Task: PDF OCR Technology Evaluation

You are conducting a systematic evaluation of free/open-source OCR solutions for extracting text from Japanese financial PDF documents.

## Target Models (test one per run)
- Tesseract (jpn + jpn_vert)
- EasyOCR
- PaddleOCR
- Surya OCR
- DocTR (doctr)
- manga-ocr (for vertical text)
- Google Vision API (free tier)

## Instructions
1. Check `.kage/memory/` for which models have already been tested.
2. Pick the NEXT untested model from the list above.
3. Install it and write a test script in `benchmark/test_{model_name}.py`.
4. Run it against the PDF files in `benchmark/test_pdfs/`.
5. Measure: Character accuracy (CER), processing time, memory usage.
6. Save results to `benchmark/results/{model_name}.json`.
7. Update `benchmark/RANKING.md` with a comparison table of all tested models so far.
8. When all models are tested, set status to "Completed" in memory.

When you wake up:

benchmark/
├── RANKING.md              ← Full comparison table, ready for decision
├── results/
│   ├── tesseract.json
│   ├── easyocr.json
│   ├── paddleocr.json
│   └── ...
└── test_pdfs/
    ├── invoice_001.pdf
    └── report_002.pdf

🔍 Overnight Codebase Audit

.kage/tasks/audit.md:

---
name: Architecture Auditor
cron: "0 2 * * *"
provider: gemini
mode: continuous
denied_hours: "9-18"
---

# Task: Nightly Architecture Health Check
Analyze the codebase for:
- Dead code and unused exports
- Circular dependencies
- API endpoints without tests
- Security anti-patterns (hardcoded secrets, SQL injection risks)

Write findings to `reports/audit_{date}.md`.

🧪 Overnight PoC Builder

.kage/tasks/poc_builder.md:

---
name: PoC Builder
cron: "30 0 * * *"
provider: claude
mode: autostop
denied_hours: "8-23"
---

# Task: Build a Proof of Concept

Read the spec in `specs/next_poc.md` and implement a working prototype.
- Create the implementation in `poc/` directory
- Include a README with setup instructions and demo commands
- Write basic tests to verify core functionality
- Set status to "Completed" when the PoC is functional

⚡ Simple Examples

AI Task — hourly health check:

---
name: Project Auditor
cron: "0 * * * *"
provider: gemini
---
Analyze the current codebase for architectural drift.

Shell-Command Task — nightly log cleanup:

---
name: Log Cleanup
cron: "0 0 * * *"
command: "rm -rf ./logs/*.log"
shell: "bash"
---
Cleanup old logs every midnight.
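
The `cron:` values used throughout (e.g. `"0 * * * *"` for hourly, `"0 2 * * *"` for 02:00 daily) follow the standard five-field cron layout:

```
# ┌───────── minute        (0-59)
# │ ┌─────── hour          (0-23)
# │ │ ┌───── day of month  (1-31)
# │ │ │ ┌─── month         (1-12)
# │ │ │ │ ┌─ day of week   (0-6, Sunday = 0)
  0 2 * * *   # 02:00 every day
```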

Commands

Command Description
kage onboard Global setup (cron, directories, DB)
kage init Initialize kage in the current directory
kage run Execute current directory tasks once
kage cron install Register to system scheduler
kage cron status Check background status
kage task list List all tasks with status and schedule
kage task show <name> Show detailed task configuration
kage connector list List all configured connectors
kage connector setup <type> Show setup guide for a connector (discord, slack)
kage connector poll Manually poll all active connectors
kage doctor Diagnose configuration health
kage skill Display agent skill guidelines
kage ui Open the web dashboard

macOS launchd Specific Settings

On macOS, kage uses launchd instead of cron. You can further customize its behavior in config.toml:

  • darwin_launchd_interval_seconds: Set the launch interval in seconds (minimum 15).
  • darwin_launchd_keep_alive: Set to true to keep the process running (not recommended for simple polling).
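
For example (key names from the bullets above; the values are illustrative):

```toml
# config.toml — macOS launchd tuning (illustrative values)
darwin_launchd_interval_seconds = 60   # wake every minute (minimum 15)
darwin_launchd_keep_alive = false      # wake-run-exit; do not keep a resident process
```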

Configuration

File Scope
~/.kage/config.toml Global settings
.kage/config.toml Project-shared settings
.kage/config.local.toml Local overrides (git-ignored)
.kage/system_prompt.md Project-specific AI instructions
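
As an illustration of the layering, the same key can be set at each level and the most local file wins (`provider` appears in the task examples above; whether it is also a config-level key is an assumption here):

```toml
# ~/.kage/config.toml — global default for all projects
provider = "gemini"

# .kage/config.toml — project-shared, committed; overrides the global file
# provider = "claude"

# .kage/config.local.toml — personal, git-ignored; highest file precedence
# provider = "codex"
```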

License

MIT

