
AI-native cron task runner for per-project scheduled prompts and commands.


kage 影 - Autonomous AI Project Agent



kage is an ultra-lightweight, OS-native execution layer for AI agents. By leveraging standard schedulers like cron and launchd, it runs official AI CLIs (gemini, claude, codex, opencode, copilot, etc.) in headless mode with zero background overhead. You can install it on your work PC, define tasks in Markdown inside your project repository, and leave it running overnight. By morning, your AI agent has finished the work for you, delivering documented results while you were away.

Go to sleep. Wake up to results. — kage runs your AI agents overnight, so you start every morning with answers, not questions.

Design Philosophy

kage is built to be a thin, transparent, and resource-efficient execution layer.

  • OS Native: Does not run a persistent background daemon. It leverages cron (Linux) and launchd (macOS) to wake up, execute tasks, and exit. Zero memory footprint when idle.
  • Headless CLI Mode: Integrates directly with official AI CLIs (gemini, claude, opencode, copilot, etc.) in their standard headless mode. It does not rely on unofficial or unstable internal APIs.
  • Stateless & Transparent: Every execution is logged, and states are managed simply via SQLite and Markdown files.
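
As a concrete illustration of the wake-execute-exit pattern, the crontab entry kage manages might look roughly like the sketch below. This is hypothetical: the real schedule, binary path, and logging are set up by `kage cron install`.

```
# Hypothetical sketch of a kage-managed crontab entry.
# cron wakes kage, kage runs any due tasks, then exits --
# nothing stays resident between runs.
*/5 * * * * /usr/local/bin/kage run >> "$HOME/.kage/cron.log" 2>&1
```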

Dashboard

Screenshots: Execution Logs · Settings & Tasks

Features

  • Autonomous Agent Logic: Automatically decomposes tasks into GFM checklists and tracks progress.
  • Persistent Memory: Stores task state in .kage/memory/ to maintain context across runs.
  • Lightweight Execution: Leverages OS-native schedulers. Zero background overhead.
  • Flexible Execution: Supports AI prompt execution, shell commands, and custom scripts.
  • Advanced Workflow Controls:
    • Execution Modes: continuous, once, autostop.
    • Concurrency Policy: allow, forbid (skip if running), replace (kill old).
    • Time Windows: Restrict execution using allowed_hours: "9-17" or denied_hours: "12".
  • Markdown-First: Define tasks using simple Markdown files with YAML front matter.
  • Layered Configuration: .kage/config.local.toml > .kage/config.toml > ~/.kage/config.toml > defaults.
  • Connectors: Integrate with Discord/Slack/Telegram. Task notifications are always enabled; bi-directional chat requires poll = true (⚠️ grants channel members AI access to your PC).
  • Thinking Process Isolation: AI workers automatically wrap reasoning in <think> tags. Notifications, summaries, and cleaned outputs hide them, while kage logs keeps the raw stream available for debugging.
  • Web Dashboard: Execution history, task management, and AI chat — all in one place.
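
Tying the workflow controls together, here is a sketch of a task restricted to business hours that runs a single time. The field names follow the examples later in this README; the exact semantics of each mode are summarized in the feature list above, and the assumption here is that `once` retires the task after one successful run.

```markdown
---
name: Morning Summary
cron: "0 * * * *"
provider: gemini
mode: once
allowed_hours: "9-17"
---
Summarize yesterday's run results into reports/daily.md.
```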

Default built-in AI providers: codex, claude, gemini, opencode, copilot, aider.

Check out the Technical Architecture for more details.

Installation

curl -sSL https://raw.githubusercontent.com/igtm/kage/main/install.sh | bash

The installer automatically runs pending install-time migrations after upgrading kage.

You can also add this repository's skills with:

npx skills add https://github.com/igtm/kage

Quick Start

cd your-project
kage init         # Initialize kage in the current directory
# Edit .kage/tasks/*.md to define your tasks
kage ui           # Open the web dashboard

Shell Completion

Typer-based completion is enabled for kage.

# Recommended: explicit shell install
kage completion install bash
kage completion install zsh

Preview or generate the script manually:

# bash
kage completion show bash > ~/.kage-complete.bash
echo 'source ~/.kage-complete.bash' >> ~/.bashrc

# zsh
kage completion show zsh > ~/.kage-complete.zsh
echo 'source ~/.kage-complete.zsh' >> ~/.zshrc

You can also use Typer's built-in option for current shell detection:

kage --install-completion

Reload your shell after installation (exec $SHELL -l).

Shell completion also suggests task names and recent run IDs for positional arguments like kage logs <task>, kage task run <name>, and kage runs show <exec_id>. kage doctor also reports whether bash/zsh completion scripts are installed.

Use Cases

🌙 Overnight Tech Evaluation (OCR Model Benchmark)

The killer use case: go to sleep, wake up with a complete technology evaluation report.

Create a single task that, on every cron run, picks the next untested OCR model, implements it, runs it against your test PDFs, and records the accuracy. By morning, you have a ranked comparison.

.kage/tasks/ocr_benchmark.md:

---
name: OCR Model Benchmark
cron: "0 * * * *"
provider: claude
mode: autostop
denied_hours: "9-23"
working_dir: ../../benchmark
---

# Task: PDF OCR Technology Evaluation

You are conducting a systematic evaluation of free/open-source OCR solutions for extracting text from Japanese financial PDF documents.

## Target Models (test one per run)
- Tesseract (jpn + jpn_vert)
- EasyOCR
- PaddleOCR
- Surya OCR
- DocTR (doctr)
- manga-ocr (for vertical text)
- Google Vision API (free tier)

## Instructions
1. Check `.kage/memory/` for which models have already been tested.
2. Pick the NEXT untested model from the list above.
3. Install it and write a test script in `benchmark/test_{model_name}.py`.
4. Run it against the PDF files in `benchmark/test_pdfs/`.
5. Measure: Character accuracy (CER), processing time, memory usage.
6. Save results to `benchmark/results/{model_name}.json`.
7. Update `benchmark/RANKING.md` with a comparison table of all tested models so far.
8. When all models are tested, set status to "Completed" in memory.

working_dir is optional. Absolute paths are used as-is; relative paths are resolved from the task file's directory (.kage/tasks/), so ../../benchmark above points at <project_root>/benchmark.

When you wake up:

benchmark/
├── RANKING.md              ← Full comparison table, ready for decision
├── results/
│   ├── tesseract.json
│   ├── easyocr.json
│   ├── paddleocr.json
│   └── ...
└── test_pdfs/
    ├── invoice_001.pdf
    └── report_002.pdf

🔍 Overnight Codebase Audit

.kage/tasks/audit.md:

---
name: Architecture Auditor
cron: "0 2 * * *"
provider: gemini
mode: continuous
denied_hours: "9-18"
---

# Task: Nightly Architecture Health Check
Analyze the codebase for:
- Dead code and unused exports
- Circular dependencies
- API endpoints without tests
- Security anti-patterns (hardcoded secrets, SQL injection risks)

Write findings to `reports/audit_{date}.md`.

🧪 Overnight PoC Builder

.kage/tasks/poc_builder.md:

---
name: PoC Builder
cron: "30 0 * * *"
provider: claude
mode: autostop
denied_hours: "8-23"
---

# Task: Build a Proof of Concept

Read the spec in `specs/next_poc.md` and implement a working prototype.
- Create the implementation in `poc/` directory
- Include a README with setup instructions and demo commands
- Write basic tests to verify core functionality
- Set status to "Completed" when the PoC is functional

⚡ Simple Examples

AI Task — hourly health check:

---
name: Project Auditor
cron: "0 * * * *"
provider: gemini
---
Analyze the current codebase for architectural drift.

Shell-Command Task — nightly log cleanup:

---
name: Log Cleanup
cron: "0 0 * * *"
command: "rm -rf ./logs/*.log"
shell: "bash"
---
Cleanup old logs every midnight.
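
If you would rather keep a retention window than delete every log, a `find`-based command works with the same command/shell fields (for example `command: "find ./logs -name '*.log' -mtime +7 -delete"`). The snippet below is a local dry run of that command, not part of kage itself; it assumes GNU `touch` (for `-d '10 days ago'`) and a `find` that supports `-delete`:

```shell
# Simulate the retention command before putting it in a task.
mkdir -p logs
touch -d '10 days ago' logs/old.log   # GNU touch; fakes an old log file
touch logs/new.log
find ./logs -name '*.log' -mtime +7 -delete
ls logs                               # old.log is gone, new.log survives
```

Once verified, the find invocation can be dropped into a task's command: field in place of the rm above.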

Commands

| Command | Description |
| --- | --- |
| `kage onboard` | Global setup (cron, directories, DB) |
| `kage init` | Initialize kage in the current directory |
| `kage run` | Execute current directory tasks once |
| `kage runs` | List execution runs in grep-friendly 1-line format |
| `kage runs show <exec_id>` | Show run metadata, paths, and status details |
| `kage runs stop <exec_id>` | Stop a running execution |
| `kage logs <task>` | Open raw logs for the latest run of a task |
| `kage logs --run <exec_id>` | Open raw logs for a specific run |
| `kage cron install` | Register to the system scheduler |
| `kage cron status` | Check background status |
| `kage task list` | List all tasks with status and schedule |
| `kage task show <name>` | Show detailed task configuration |
| `kage connector list` | List all configured connectors |
| `kage connector setup <type>` | Show the setup guide for a connector (discord, slack, telegram) |
| `kage connector poll` | Manually poll connectors with poll = true |
| `kage migrate install` | Run pending install-time migrations manually |
| `kage doctor` | Diagnose configuration health |
| `kage skill` | Display agent skill guidelines |
| `kage ui` | Open the web dashboard |

macOS-Specific Settings (launchd)

On macOS, kage uses launchd instead of cron. You can further customize its behavior in config.toml:

  • darwin_launchd_interval_seconds: Set the launch interval in seconds (minimum 15).
  • darwin_launchd_keep_alive: Set to true to keep the process running (not recommended for simple polling).

kage runs is the run-history view. kage logs is the raw-output viewer backed by per-run log files (stdout.log, stderr.log, events.jsonl).

Connector polling replies are recorded in the same run history. Use kage runs --source connector_poll to isolate them, then inspect the raw AI CLI output with kage logs --run <exec_id>.

Install-time migrations are discovered automatically from src/kage/migrations/install/. New migration modules added there are picked up by both kage migrate install and install.sh.

Configuration

| File | Scope |
| --- | --- |
| `~/.kage/config.toml` | Global settings (`default_ai_engine`, `working_dir`, `ui_port`, `ui_host`, etc.) |
| `.kage/config.toml` | Project-shared settings |
| `.kage/config.local.toml` | Local overrides (git-ignored) |
| `.kage/system_prompt.md` | Project-specific AI instructions |
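
.kage/system_prompt.md is free-form Markdown. As a purely hypothetical example, a project might pin conventions that every AI run should follow:

```markdown
# Project conventions for AI runs
- Write all reports in English under reports/.
- Never modify files outside this repository.
- Prefer small, incremental changes with clear commit messages.
```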

Provider-specific model selection can be layered in the same files:

[providers.codex]
model = "gpt-5-codex"

[providers.claude]
model = "claude-sonnet-4-5"

[providers.opencode]
model = "openai/gpt-5-codex"

Built-in providers use --model by default. You can also set nested keys via CLI:

kage config providers.codex.model gpt-5-codex --global
kage config providers.codex.model gpt-5-mini --local
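
The layering rule (.kage/config.local.toml > .kage/config.toml > ~/.kage/config.toml > defaults) is a plain last-wins merge per key. As an illustration only (this is not kage's actual loader, and the default values shown are made up), the precedence can be sketched as:

```python
# Sketch of layered-config precedence: later (more local) layers
# override earlier ones key-by-key. Not kage's real loader.
from functools import reduce

def merge_layers(*layers: dict) -> dict:
    """Merge dicts left-to-right so the right-most layer wins per key."""
    return reduce(lambda acc, layer: {**acc, **layer}, layers, {})

defaults    = {"ui_port": 8765, "default_ai_engine": "codex"}  # built-in (invented values)
global_cfg  = {"default_ai_engine": "claude"}                  # ~/.kage/config.toml
project_cfg = {"ui_port": 9000}                                # .kage/config.toml
local_cfg   = {"default_ai_engine": "gemini"}                  # .kage/config.local.toml

effective = merge_layers(defaults, global_cfg, project_cfg, local_cfg)
print(effective)  # the most local layer wins for each key
```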

License

MIT
