LightClaw
Self-hosted AI personal assistant — affordable, personalized, and ready to go.
LightClaw is an open-source, self-hosted AI personal assistant built with Python. Compared to similar projects, it focuses on three key differentiators:

- Cost-efficient — significantly lower token consumption
- Personalized — a built-in multi-layer memory system that learns your preferences over time
- Ready to use — 6 deeply optimized scenes out of the box, zero config to get started
Chat with LightClaw through Feishu, QQ, DingTalk, Discord, or the built-in Web Dashboard. All capabilities are driven by an extensible Skills system and the MCP (Model Context Protocol).
✨ Highlights
| | Feature | Description |
|---|---|---|
| 💰 | Cost-efficient | Smart multi-model routing dispatches cheap models for simple tasks, large models only when needed. Aggressive context trimming and memory compression slash token costs by an order of magnitude. |
| 🧠 | Memory System | Long-term memory extraction, user profiling, and conversation summaries — the assistant gets smarter the more you use it. |
| 🎮 | Built-in Scenes | 6 production-ready scenes: WeChat Official Account operations, stock market analysis, news tracking, low-code development, AI video generation, and smart education. |
| 🔌 | Multi-channel | Feishu · QQ · DingTalk · Discord · Web Dashboard — one assistant across all your chat platforms. |
| 🧩 | Extensible Skills | 5 built-in skills + community SkillHub marketplace + MCP protocol for unlimited tool integration. |
| 🏠 | Local-first | All data stays in ~/.lightclaw/. Supports fully offline mode with local LLMs (Ollama / LLaMA-CPP / MLX). |
🏗️ Tech Stack
| Layer | Technologies |
|---|---|
| Language | Python 3.11+ |
| Agent Framework | LangChain + LangGraph (ReAct Agent) |
| API Server | FastAPI + Uvicorn |
| Frontend | React + TypeScript + Vite + Ant Design |
| Memory | Qdrant + SQLite FTS5 + Structured Fact System |
| LLM Providers | OpenAI · DashScope (Qwen) · Ollama · LLaMA-CPP · MLX |
| Scheduling | APScheduler |
| Browser Automation | Playwright |
| MCP | Model Context Protocol (stdio + HTTP + SSE) |
| Build / Lint | setuptools · ruff · pytest · pre-commit |
🚀 Quick Start
Prerequisites
- Python 3.11 – 3.13
- uv (recommended) or pip
1. Install
```shell
# macOS / Linux — one-line installer
curl -fsSL https://finnie-1258344699.cos.ap-guangzhou.myqcloud.com/install.sh | bash
```
After installation, open a new terminal or reload your shell:
```shell
source ~/.zshrc   # Zsh
# or
source ~/.bashrc  # Bash
```
2. Initialize
```shell
lightclaw init
```
The interactive wizard walks you through: language → LLM provider → API key → model selection.
3. Run
```shell
# Start the server
lightclaw run

# Bind to a custom host/port
lightclaw run --host 0.0.0.0 --port 80

# Or register as a system service (auto-start on boot)
lightclaw start --host 0.0.0.0 --port 80
```
Open your browser and start chatting!
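If you launch the server from a script (CI, a deploy step, a systemd unit), you may want to wait until the port actually accepts connections before opening the dashboard. A minimal, hedged sketch — pass whatever host/port you gave `lightclaw run`:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP port accepts connections, or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # server is accepting connections
        except OSError:
            time.sleep(0.5)  # not up yet; retry shortly
    return False
```

For example, `wait_for_port("127.0.0.1", 80)` after `lightclaw run --port 80` blocks until the dashboard is reachable.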
🎮 Built-in Scenes
LightClaw ships with 6 deeply optimized Scenes — each is a complete agent configuration package including a dedicated persona, specialized skills, and fine-tuned behavior rules.
| Scene | Description | Example Prompt |
|---|---|---|
| ✍️ WeChat Official Account | End-to-end content pipeline: trending topic discovery → research → AI writing → one-click publish | "Write an article about today's AI news and publish it" |
| 📈 Stock Market | Market watcher with technical analysis (MA/MACD/RSI/BOLL), powered by 1090+ akshare data APIs | "What's happening in the A-share market today?" |
| 📰 News & Trends | Real-time aggregation from 70+ platforms (Weibo, Zhihu, HN, GitHub Trending, etc.) | "Summarize today's tech news" |
| 💻 Low-code Dev | Full-stack development assistant: requirements → code generation → preview → deploy | "Build me a personal blog with dark mode" |
| 🎬 Video Production | Text-to-video pipeline: script → storyboard → image generation → video synthesis | "Turn this story into a short video" |
| 🎓 Smart Education | Personalized AI tutor: study plans, lectures, exercises, mistake tracking, spaced repetition | "Create a 30-day plan to learn linear algebra" |
🧩 Built-in Skills
Global Skills (always available)
| Skill | Description |
|---|---|
| `cron` | Scheduled task management — create, list, pause, resume, delete. Supports text messages and agent-powered responses across all channels. |
| `pdf` | Full PDF processing — extract text/tables, merge, split, rotate, watermark, encrypt, OCR, form filling. |
| `file-reader` | Read local text files (.txt, .md, .json, .yaml, .csv, .log, code). Auto-summarizes large files. |
| `skill-creator` | Create, modify, evaluate, and benchmark custom skills. |
| `install-skill` | Search and install community skills from SkillHub. |
Native Tools (always on)
| Tool | Description |
|---|---|
| `execute_shell_command` | Run shell commands |
| `read_file` / `write_file` / `edit_file` | File operations |
| `browser_use` | Playwright browser automation |
| `desktop_screenshot` | Desktop / window screenshot |
| `send_file_to_user` | Send files through the active channel |
| `get_current_time` | Get current system time |
⚙️ Configuration
All config lives in ~/.lightclaw/lightclaw.json. Manage it via CLI or edit directly.
```shell
# View/switch LLM models
lightclaw models
lightclaw models switch

# Install chat channels
lightclaw channels install

# Manage skills
lightclaw skills
lightclaw skills install

# Manage cron jobs
lightclaw cron list
lightclaw cron create --help

# Manage environment variables
lightclaw env
```
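Because the config is plain JSON, it is easy to inspect from your own scripts. A hedged sketch — the key names used below (`provider`, `model`) are illustrative guesses, not the documented schema:

```python
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".lightclaw" / "lightclaw.json"

def load_config(path: Path = CONFIG_PATH) -> dict:
    """Load the LightClaw config, returning {} if it does not exist yet."""
    if not path.exists():
        return {}
    return json.loads(path.read_text(encoding="utf-8"))

config = load_config()
# "provider" and "model" are assumed key names, for illustration only.
print(config.get("provider", "<unset>"), config.get("model", "<unset>"))
```

Prefer the CLI for writes, since it validates values; direct edits take effect on the next restart.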
Supported LLM Providers
| Provider | Description | API Key Required |
|---|---|---|
| OpenAI | GPT-4o, GPT-4, GPT-3.5, etc. | ✅ |
| DashScope | Alibaba Qwen series | ✅ |
| Ollama | Local Ollama server | ❌ |
| LLaMA-CPP | Local GGUF model inference | ❌ |
| MLX | macOS Apple Silicon local inference | ❌ |
Supported Channels
| Channel | Credentials Needed |
|---|---|
| Feishu | App ID, App Secret |
| QQ | Bot AppID, Token |
| DingTalk | App Key, App Secret |
| Discord | Bot Token |
| Web Dashboard | Enabled by default |
📖 CLI Reference
| Command | Description |
|---|---|
| `lightclaw init` | Initialize workspace at ~/.lightclaw/ |
| `lightclaw config` | Interactive re-configuration |
| `lightclaw run` | Start LightClaw (API + Dashboard + channels) |
| `lightclaw start` | Register & start as a system service |
| `lightclaw stop` | Stop the system service |
| `lightclaw restart` | Restart the system service |
| `lightclaw channels install` | Install chat channels |
| `lightclaw skills install` | Install skills |
| `lightclaw models` | Manage LLM providers |
| `lightclaw cron` | Manage scheduled tasks |
| `lightclaw chats` | Manage conversation history |
| `lightclaw env` | Manage environment variables |
| `lightclaw clean` | Clean temp files and caches |
| `lightclaw uninstall` | Uninstall and clean the workspace |
🖥️ Web Dashboard
After starting LightClaw, access the dashboard at http://localhost:80.
Features:
- 💬 Chat — Real-time conversation with LightClaw
- 📋 Sessions — Browse conversation history
- ⚙️ Settings — Configure providers, models, channels
- 🧩 Skills — Enable/disable skills
- ⏰ Cron — Visual cron job management
- 🔗 MCP — Manage MCP tool services
- 📊 Env — Manage environment variables
📁 Project Structure
```
~/.lightclaw/            ← Data directory
├── lightclaw.json       # Core config
├── auth.json            # Authentication
├── jobs.json            # Cron job definitions
├── chats.json           # Conversation metadata
├── logs/                # Runtime logs
└── workspace/           ← Agent workspace
    ├── .env             # Persistent env vars
    ├── AGENTS.md        # Agent behavior rules
    ├── SOUL.md          # Core identity & principles
    ├── MEMORY.md        # Long-term memory
    ├── USER.md          # User profile & agent self-portrait
    ├── skills/          # All skills
    ├── memory/          # Message DB & summaries
    ├── sessions/        # Session state
    └── uploads/         # User uploads
```
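The layout above is plain files and directories, so it can be scaffolded with `pathlib`. A hedged sketch of what an init step might create — the real `lightclaw init` wizard does more (config, auth, templates):

```python
from pathlib import Path

def scaffold(root: Path) -> None:
    """Create the data-directory skeleton shown above (directories + empty marker files)."""
    workspace = root / "workspace"
    for d in (root / "logs", workspace / "skills", workspace / "memory",
              workspace / "sessions", workspace / "uploads"):
        d.mkdir(parents=True, exist_ok=True)
    for name in ("AGENTS.md", "SOUL.md", "MEMORY.md", "USER.md"):
        (workspace / name).touch(exist_ok=True)
```

Call it as `scaffold(Path.home() / ".lightclaw")`; `exist_ok=True` makes it safe to re-run on an existing workspace.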
🏗️ Architecture
```
┌─────────────────────────────────────────────────────┐
│                    LightClaw App                    │
│                                                     │
│  Web Dashboard (port 80) ◄── REST ──► FastAPI       │
│                                                     │
│  Channel Integrations (Feishu, QQ, DingTalk, etc.)  │
│                                                     │
│  Agent Engine: ReAct Agent → Skill Hub → LLM        │
│                                                     │
│  Config / Memory / Jobs (stored at ~/.lightclaw/)   │
└─────────────────────────────────────────────────────┘

User (via channel) → ChannelManager → Runner → ReAct Agent
                                                   ↓
                                             LLM Provider
                                                   ↓
                                            Skill Executor
                                                   ↓
                                       Response → Channel
```
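The flow above is a standard ReAct loop: the agent alternates between asking the LLM for the next action and executing a tool, feeding each observation back in, until the LLM emits a final answer. A toy, hedged sketch with a stub LLM and one tool — an illustration of the pattern, not LightClaw's real engine:

```python
from typing import Callable

def react_loop(llm: Callable[[str], str], tools: dict[str, Callable[[str], str]],
               question: str, max_steps: int = 5) -> str:
    """Minimal ReAct loop. The stub protocol: the LLM replies either
    'ACT <tool> <arg>' (run a tool) or 'FINAL <answer>' (stop)."""
    transcript = question
    for _ in range(max_steps):
        reply = llm(transcript)
        if reply.startswith("FINAL "):
            return reply[len("FINAL "):]
        _, tool, arg = reply.split(" ", 2)        # e.g. "ACT echo hello"
        observation = tools[tool](arg)            # execute the chosen tool
        transcript += f"\n{reply}\nOBSERVATION: {observation}"
    return "gave up"

# Stub LLM: call a tool once, then answer with the observation it saw.
def stub_llm(transcript: str) -> str:
    if "OBSERVATION:" in transcript:
        return "FINAL " + transcript.rsplit("OBSERVATION: ", 1)[1]
    return "ACT echo hello"

print(react_loop(stub_llm, {"echo": lambda s: s.upper()}, "say hello"))  # → HELLO
```

In the real system the transcript is the trimmed conversation context, the tool registry is the Skill Hub, and the reply format is whatever structured output LangGraph's ReAct agent uses.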
🔒 Security & Privacy
- Local-first: All data (config, memory, chats, uploads) is stored locally at ~/.lightclaw/ — never uploaded to third-party servers.
- Minimal data to LLM: Only the current conversation context and trimmed memory summaries are sent to LLM providers. Config, chats from other channels, and the full message DB are never sent.
- Fully offline mode: Use local models (Ollama / LLaMA-CPP / MLX) for zero API calls and complete data isolation.
- Access control: Web Dashboard supports password authentication. Credentials stored as hashes, never plaintext.
- Skill vetting: Community skills in SkillHub undergo security and compliance review before publication.
🛠️ Development
Frontend (Dashboard)
```shell
cd dashboard
npm install
npm run dev     # Dev server at http://localhost:80
npm run build   # Production build
npm run lint    # ESLint + TypeScript check
npm run format  # Prettier
```
Backend
```shell
# Install dev dependencies
uv sync --extra dev

# Lint & format
ruff check . --fix
ruff format .

# Run tests
pytest
pytest -xvs   # Verbose, stop on first failure

# Type checking
mypy src/lightclaw/
```
🤝 Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Ensure code passes `ruff check` and `ruff format`
- Add tests for new functionality
- Submit a Pull Request
Please read the coding standards in CLAUDE.md for detailed guidelines.
📋 Changelog
v0.0.1 (2026-03-17)
- 🎉 Initial public release — first open-source version of LightClaw
- 🎮 6 built-in scenes: WeChat Official Account, Stock Market, News & Trends, Low-code Dev, Video Production, Smart Education
- 🧠 Multi-layer memory system: long-term memory + user profiling + conversation summaries
- 💰 Smart multi-model routing for cost-efficient token consumption
- 🔌 Multi-channel support: Feishu, QQ, DingTalk, Discord, Web Dashboard
- 🧩 5 built-in skills + SkillHub marketplace + MCP protocol integration
- 🏠 Local-first architecture with full offline mode support (Ollama / LLaMA-CPP / MLX)
- 🖥️ Web Dashboard with chat, session management, settings, skill management, and cron jobs
- 📄 Added MIT License
📄 License
This project is licensed under the MIT License — see the LICENSE file for details.
Made with ❤️ by the OrcaKit team