
All-capable self-evolving AI Agent, built on Ralph Wiggum mode: never gives up

Project description

OpenAkita Logo

OpenAkita

Self-Evolving AI Agent — Learns Autonomously, Never Gives Up


Setup Center · Features · Quick Start · Architecture · Documentation

Chinese Documentation (中文文档)


What is OpenAkita?

OpenAkita is a self-evolving AI Agent framework. It autonomously learns new skills, performs daily self-checks and repairs, accumulates experience from task execution, and never gives up when facing difficulties — persisting until the task is done.

Like the Akita dog it's named after: loyal, reliable, never quits.

  • Self-Evolving — Auto-generates skills, installs dependencies, learns from mistakes
  • Never Gives Up — Ralph Wiggum Mode: persistent execution loop until task completion
  • Growing Memory — Remembers your preferences and habits, auto-consolidates daily
  • Standards-Based — MCP and Agent Skills standard compliance for broad ecosystem compatibility
  • Multi-Platform — Setup Center GUI, CLI, Telegram, Feishu, WeCom, DingTalk, QQ

Setup Center


OpenAkita provides a cross-platform Setup Center desktop app (built with Tauri + React) for intuitive installation and configuration:

  • Python Environment — Auto-detect system Python or install embedded Python
  • One-Click Install — Create venv + pip install OpenAkita (PyPI / GitHub Release / local source)
  • Version Control — Choose specific versions; defaults to Setup Center version for compatibility
  • LLM Endpoint Manager — Multi-provider, multi-endpoint, failover; fetch model lists + search selector
  • Prompt Compiler Config — Dedicated fast model endpoints for instruction preprocessing
  • IM Channel Setup — Telegram, Feishu, WeCom, DingTalk, QQ — all in one place
  • Agent & Skills Config — Behavior parameters, skill toggles, MCP tool management
  • System Tray — Background residency + auto-start on boot, one-click start/stop
  • Status Monitor — Live service status dashboard with real-time log viewing

Download: GitHub Releases

Available for Windows (.exe) / macOS (.dmg) / Linux (.deb / .AppImage)


Features

| Feature | Description |
|---|---|
| Self-Learning & Evolution | Daily self-check (04:00), memory consolidation (03:00), task retrospection, auto skill generation, auto dependency install |
| Ralph Wiggum Mode | Never-give-up execution loop: Plan → Act → Verify → repeat until done; checkpoint recovery |
| Prompt Compiler | Two-stage prompt architecture: fast model preprocesses instructions, compiles identity files, detects compound tasks |
| MCP Integration | Model Context Protocol standard, stdio transport, auto server discovery, built-in web search |
| Skill System | Agent Skills standard (SKILL.md), 8 discovery directories, GitHub install, LLM auto-generation |
| Plan Mode | Auto-detect multi-step tasks, create execution plans, real-time progress tracking, persisted as Markdown |
| Multi-LLM Endpoints | 9 providers, capability-based routing, priority failover, thinking mode, multimodal (text/image/video/voice) |
| Multi-Platform IM | CLI / Telegram / Feishu (full support); WeCom / DingTalk / QQ (implemented) |
| Desktop Automation | Windows UIAutomation with vision fallback; 9 tools: screenshot, click, type, hotkeys, window management |
| Multi-Agent | Master-Worker architecture, ZMQ message bus, smart routing, dynamic scaling, fault recovery |
| Scheduled Tasks | Cron / interval / one-time triggers; reminder and task types; persistent storage |
| Identity & Memory | Four-file identity (SOUL / AGENT / USER / MEMORY), vector search, daily auto-consolidation |
| Tool System | 11 categories, 50+ tools, 3-level progressive disclosure (catalog → detail → execute) to reduce token usage |
| Setup Center | Tauri cross-platform desktop app, guided wizard, tray residency, status monitoring |
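
The Ralph Wiggum loop in the table above can be sketched as a plan–act–verify cycle that repeats until verification passes. This is a minimal illustration, not OpenAkita's actual API; `plan`, `act`, `verify`, and `save_checkpoint` are hypothetical names:

```python
# Illustrative sketch of a never-give-up Plan -> Act -> Verify loop with
# checkpointing. All function names are hypothetical, not OpenAkita internals.

def ralph_loop(task, plan, act, verify, save_checkpoint, max_rounds=100):
    """Repeat Plan -> Act -> Verify until verify() passes or max_rounds is hit."""
    for round_no in range(1, max_rounds + 1):
        steps = plan(task)               # break the task into concrete steps
        result = act(steps)              # execute the steps
        save_checkpoint(task, round_no)  # persist progress for crash recovery
        if verify(task, result):         # done only when verification passes
            return result
    raise RuntimeError(f"gave up after {max_rounds} rounds")  # safety valve

# Toy usage: the "task" only succeeds after three attempts accumulate.
attempts = []
result = ralph_loop(
    task="demo",
    plan=lambda t: ["step"],
    act=lambda steps: attempts.append("try") or len(attempts),
    verify=lambda t, r: r >= 3,
    save_checkpoint=lambda t, n: None,
)
print(result)  # 3
```

A real implementation would persist the checkpoint to disk so an interrupted task can resume mid-loop instead of restarting from scratch.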

Self-Learning & Self-Evolution

The core differentiator: OpenAkita doesn't just execute — it learns and grows autonomously.

| Mechanism | Trigger | Behavior |
|---|---|---|
| Daily Self-Check | Every day at 04:00 | Analyze ERROR logs → LLM diagnosis → auto-fix tool errors → generate report |
| Memory Consolidation | Every day at 03:00 | Consolidate conversations → semantic dedup → extract insights → refresh MEMORY.md |
| Task Retrospection | After long tasks (>60 s) | Analyze efficiency → extract lessons → store in long-term memory |
| Skill Auto-Generation | Missing capability detected | LLM generates SKILL.md + script → auto-test → register and load |
| Auto Dependency Install | pip/npm package missing | Search GitHub → install dependency → fall back to skill generation |
| Real-Time Memory | Every conversation turn | Extract preferences/rules/facts → vector storage → auto-update MEMORY.md |
| User Profile Learning | During conversations | Identify preferences and habits → update USER.md → personalized experience |
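
The skill auto-generation row above follows a generate → test → register pipeline. The sketch below shows that shape only; the registry, `generate_skill`, and `auto_learn` names are invented for illustration and do not reflect OpenAkita's internals:

```python
# Hypothetical sketch of the skill auto-generation pipeline:
# missing capability -> generate SKILL.md + script -> self-test -> register.

registry = {}

def generate_skill(capability):
    """Stand-in for the LLM step that writes SKILL.md plus an executable script."""
    skill_md = f"# SKILL: {capability}\nDescribes how to perform '{capability}'."
    script = lambda: f"performed {capability}"
    return skill_md, script

def auto_learn(capability):
    """Learn a missing capability once; return the cached skill thereafter."""
    if capability in registry:            # already known, nothing to do
        return registry[capability]
    skill_md, script = generate_skill(capability)
    assert script()                       # auto-test before registering
    registry[capability] = (skill_md, script)
    return registry[capability]

md, run = auto_learn("resize-image")
print(run())  # performed resize-image
```

The key property is that a skill only enters the registry after its self-test passes, so a bad generation never becomes a loadable tool.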

Quick Start

Option 1: Setup Center (Recommended)

The easiest way — graphical guided setup, no command-line experience needed:

  1. Download the installer from GitHub Releases
  2. Install and launch Setup Center
  3. Follow the wizard: Python → Install OpenAkita → Configure LLM → Configure IM → Finish & Start

Option 2: PyPI Install

# Install
pip install openakita

# Install with all optional features
pip install openakita[all]

# Run setup wizard
openakita init

Optional extras: feishu, whisper, browser, windows

Option 3: Source Install

git clone https://github.com/openakita/openakita.git
cd openakita
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -e ".[all]"
openakita init

Run

# Interactive CLI
openakita

# Execute a single task
openakita run "Create a Python calculator with tests"

# Service mode (IM channels)
openakita serve

# Background daemon
openakita daemon start

# Check status
openakita status

Recommended Models

| Model | Provider | Notes |
|---|---|---|
| claude-sonnet-4-5-* | Anthropic | Default; balanced |
| claude-opus-4-5-* | Anthropic | Most capable |
| qwen3-max | Alibaba | Strong Chinese support |
| deepseek-v3 | DeepSeek | Cost-effective |
| kimi-k2.5 | Moonshot | Long context |
| minimax-m2.1 | MiniMax | Good for dialogue |

For complex tasks, enable Thinking mode by using a *-thinking model variant (e.g., claude-opus-4-5-20251101-thinking).
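
Priority failover across the endpoints above works by trying providers in order and moving on when one errors. A minimal sketch, assuming a simple `(name, call)` endpoint interface that is not OpenAkita's real one:

```python
# Minimal sketch of priority failover across multiple LLM endpoints.
# The endpoint list and call() interface are assumptions for illustration.

def call_with_failover(endpoints, prompt):
    """Try endpoints in priority order; return the first successful reply."""
    errors = []
    for name, call in endpoints:          # endpoints sorted by priority
        try:
            return name, call(prompt)
        except Exception as exc:          # record failure, try the next one
            errors.append((name, exc))
    raise RuntimeError(f"all endpoints failed: {errors}")

def flaky(prompt):
    raise ConnectionError("primary endpoint down")

endpoints = [
    ("claude-sonnet-4-5", flaky),              # priority 1: currently failing
    ("deepseek-v3", lambda p: f"echo: {p}"),   # priority 2: healthy fallback
]
name, reply = call_with_failover(endpoints, "hello")
print(name, reply)  # deepseek-v3 echo: hello
```

Capability-based routing would add a filter step before this loop, keeping only endpoints whose declared capabilities (e.g. vision, thinking mode) match the task.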

Basic Configuration

# .env (minimum configuration)

# LLM API (required — configure at least one)
ANTHROPIC_API_KEY=your-api-key

# Telegram (optional)
TELEGRAM_ENABLED=true
TELEGRAM_BOT_TOKEN=your-bot-token

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                          OpenAkita                               │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────────────── Setup Center ────────────────────────┐   │
│  │  Tauri + React Desktop App · Install · Config · Monitor   │   │
│  └──────────────────────────────────────────────────────────┘   │
│                              │                                   │
│  ┌──────────────────── Identity Layer ──────────────────────┐   │
│  │  SOUL.md · AGENT.md · USER.md · MEMORY.md                │   │
│  └──────────────────────────────────────────────────────────┘   │
│                              │                                   │
│  ┌──────────────────── Core Layer ──────────────────────────┐   │
│  │  Brain (LLM) · Identity · Memory · Ralph Loop             │   │
│  │  Prompt Compiler · Task Monitor                           │   │
│  └──────────────────────────────────────────────────────────┘   │
│                              │                                   │
│  ┌──────────────────── Tool Layer ──────────────────────────┐   │
│  │  Shell · File · Web · MCP · Skills · Scheduler            │   │
│  │  Browser · Desktop · Plan · Profile · IM Channel          │   │
│  └──────────────────────────────────────────────────────────┘   │
│                              │                                   │
│  ┌──────────────────── Evolution Engine ────────────────────┐   │
│  │  SelfCheck · Generator · Installer · LogAnalyzer          │   │
│  │  DailyConsolidator · TaskRetrospection                    │   │
│  └──────────────────────────────────────────────────────────┘   │
│                              │                                   │
│  ┌──────────────────── Channel Layer ───────────────────────┐   │
│  │  CLI · Telegram · Feishu · WeCom · DingTalk · QQ          │   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘

Core Components

| Component | Description |
|---|---|
| Brain | Unified LLM client, multi-endpoint failover, capability routing |
| Identity | Four-file identity system, compiled into token-efficient summaries |
| Memory | Vector memory (ChromaDB), semantic search, daily auto-consolidation |
| Ralph Loop | Never-give-up execution loop, StopHook interception, checkpoint recovery |
| Prompt Compiler | Two-stage prompt architecture, fast-model preprocessing |
| Task Monitor | Execution monitoring, timeout model switching, task retrospection |
| Evolution Engine | Self-check, skill generation, dependency install, log analysis |
| Skills | Agent Skills standard, dynamic loading, GitHub install, auto-generation |
| MCP | Model Context Protocol, server discovery, tool proxying |
| Scheduler | Task scheduling; cron / interval / one-time triggers |
| Channels | Unified message format, multi-platform IM adapters |
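
The tool system's 3-level progressive disclosure (catalog → detail → execute) keeps token usage low by revealing full tool schemas only when the model asks for them. A sketch under assumed names; the `file.read` tool and the three functions are illustrative, not OpenAkita's real tool registry:

```python
# Sketch of 3-level progressive disclosure: the model first sees only a
# catalog of names, fetches one tool's schema on demand, then executes it.
# The tool definition below is a hypothetical example.

TOOLS = {
    "file.read": {
        "summary": "Read a file",
        "detail": {"params": {"path": "str"}, "returns": "file contents"},
        "run": lambda path: f"<contents of {path}>",
    },
}

def catalog():
    """Level 1: tool names and one-line summaries only (cheapest)."""
    return {name: t["summary"] for name, t in TOOLS.items()}

def detail(name):
    """Level 2: full parameter schema for a single tool."""
    return TOOLS[name]["detail"]

def execute(name, **kwargs):
    """Level 3: actually invoke the tool."""
    return TOOLS[name]["run"](**kwargs)

print(catalog())                           # {'file.read': 'Read a file'}
print(execute("file.read", path="a.txt"))  # <contents of a.txt>
```

With 50+ tools, sending only the catalog in the system prompt and expanding schemas lazily is what keeps per-turn token cost roughly constant as the tool count grows.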

Documentation

| Document | Description |
|---|---|
| Quick Start | Installation and basic usage |
| Architecture | System design and components |
| Configuration | All configuration options |
| Deployment | Production deployment (systemd / Docker / nohup) |
| MCP Integration | Connecting external services |
| IM Channels | Telegram / Feishu / DingTalk setup |
| Skill System | Creating and using skills |
| Testing | Testing framework and coverage |

Community

Join our community for help, discussions, and updates:

  • WeChat: Chinese community chat (scan the group QR code to join)
  • Discord: Join Discord
  • X (Twitter): @openakita
  • Email: zacon365@gmail.com

Acknowledgments

License

MIT License — See LICENSE


OpenAkita — Self-Evolving AI Agent, Learns Autonomously, Never Gives Up

Project details


Release history

This version

1.3.3

Download files

Download the file for your platform.

Source Distribution

openakita-1.3.3.tar.gz (6.3 MB)


Built Distribution


openakita-1.3.3-py3-none-any.whl (601.5 kB)


File details

Details for the file openakita-1.3.3.tar.gz.

File metadata

  • Download URL: openakita-1.3.3.tar.gz
  • Upload date:
  • Size: 6.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for openakita-1.3.3.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | b5963a3f5135eea095ee69bee08cffa0be42442aaca4671e8871365e3f52c659 |
| MD5 | 9f8e5c453c392450974b3685d10624e8 |
| BLAKE2b-256 | 0432d45d1b67e28ff7d2005818d477fcfa0faa014dd2c00e0d10c5ad85adc821 |


File details

Details for the file openakita-1.3.3-py3-none-any.whl.

File metadata

  • Download URL: openakita-1.3.3-py3-none-any.whl
  • Upload date:
  • Size: 601.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for openakita-1.3.3-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | efebd32f7211c2cad22f62382930e1890f838658ba1fe9cba098a2e7968996f5 |
| MD5 | a9d06af8a96e64b667f5d50eef54dff3 |
| BLAKE2b-256 | 02d67dba9f3af98bd1b25b779ec0b6b257ff2c323e1bae6cab74dd69f64eeb04 |

