# Sygen
AI assistant framework with multi-agent orchestration, background tasks, and persistent memory.
Telegram-first personal AI agent that runs CLI tools (Claude Code, Codex, Gemini) and manages complex workflows autonomously.
## Features

### Core
- Multi-agent system — supervisor + sub-agents, each with own bot and workspace
- Background task delegation — offload long work to autonomous agents, keep chatting, get results back
- Persistent memory — modular memory system with Always Load / On Demand separation
- Named sessions — multiple isolated conversation contexts per chat
- Inter-agent communication — sync and async messaging between agents with shared knowledge base
### Transports & Providers
- Telegram (primary) + Matrix support
- Claude Code, Codex CLI, Gemini CLI — pluggable AI backends
- Streaming output — real-time response delivery with configurable buffering
### MCP (Model Context Protocol)
- Native MCP client — connects to any MCP server, discovers tools, routes calls
- 3000+ integrations — GitHub, Google Drive, Slack, Docker, databases, and more
- Auto-lifecycle — starts servers on boot, health checks every 30s, auto-restart on crash
- Hot-reload — add/remove servers without restarting the bot
- `/mcp` command — list servers, check status, refresh tools from Telegram
### Skill Marketplace (ClawHub)
- 13,000+ community skills — search and install from OpenClaw's ClawHub registry
- Security scanning — static analysis (20 suspicious patterns) + VirusTotal API before every install
- User always decides — full security report shown, install only on explicit confirmation
- Zero dependencies — no npm/OpenClaw required, pure HTTP API integration
- `/skill` command — search, install, list, remove from Telegram
### Automation
- Cron scheduler — recurring tasks with timezone support
- Webhook server — HTTP endpoints that trigger agent actions
- Docker sandbox — optional secure execution for untrusted code
- Silent output — `[SILENT]` marker lets cron/webhook tasks suppress delivery when there is nothing to report
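The `[SILENT]` convention can be honored by a delivery check like the following (an illustrative sketch, not Sygen's actual code; the helper name is invented):

```python
# Hypothetical delivery-side check: a cron/webhook task that begins its
# output with [SILENT] is signaling "nothing to report", so we skip sending.
SILENT_MARKER = "[SILENT]"

def should_deliver(output: str) -> bool:
    """Return False when the task flagged its output as silent."""
    return not output.lstrip().startswith(SILENT_MARKER)

assert should_deliver("[SILENT] no new items today") is False
assert should_deliver("3 new issues need triage") is True
```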
### Observability
- Execution traces — every cron, task, and webhook run is logged to SQLite (`traces.db`)
- `/logs` command — view recent traces, filter by type (`/logs cron`), errors (`/logs errors`), or name
- Auto-rotation — traces older than 30 days are cleaned up automatically, no maintenance needed
### Built-in Tools (Defaults)
- Web search — Perplexity Sonar (primary) + DuckDuckGo (fallback, no API key needed)
- Perplexity deep search — sonar-pro for research-heavy queries
- Audio transcription — local whisper.cpp, no external APIs
- YouTube analysis — metadata, subtitles, frame extraction, audio transcription
- File converter — Markdown→PDF, DOCX→TXT, XLSX→CSV, HEIC→JPG
- Large file sender — local fileshare (auto-detect) with 0x0.st fallback
- Quick notes — structured idea capture template
### UX
- Mobile-friendly tables — Markdown tables are auto-converted to grouped lists for Telegram readability
- Emoji status reactions — track agent progress on your original message
- Configurable streaming — three combined modes (see table below)
- Technical footer — optional model, tokens, cost, duration display
- Inline buttons — quick-reply buttons in Telegram messages
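The table-to-list conversion mentioned above can be sketched roughly like this (a simplified illustration under assumed behavior; Sygen's real converter may handle alignment, escaping, and edge cases differently):

```python
def table_to_grouped_list(md_table: str) -> str:
    """Convert a simple Markdown table into per-row 'Header: value'
    groups, which read better on narrow Telegram screens."""
    def cells(line: str) -> list[str]:
        return [c.strip() for c in line.strip().strip("|").split("|")]

    lines = [l for l in md_table.strip().splitlines() if l.strip()]
    headers = cells(lines[0])
    groups = []
    for row in lines[2:]:  # lines[1] is the |---|---| separator row
        groups.append("\n".join(f"{h}: {v}" for h, v in zip(headers, cells(row))))
    return "\n\n".join(groups)

print(table_to_grouped_list("| Name | Role |\n|---|---|\n| Ada | Admin |"))
# Name: Ada
# Role: Admin
```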
## Streaming & Reaction Modes

| Mode | Config | Reactions | Text delivery |
|---|---|---|---|
| Quiet | `streaming.enabled: false`, `scene.reaction_style: "seen"` | 👀 → 👌 | Single message after completion |
| Full streaming | `streaming.enabled: true`, `scene.reaction_style: "detailed"` | 👀 → 🤔 → ✍️ → 💯 → 👌 | Real-time, dynamically updated |
| Buffered | `streaming.enabled: true`, `streaming.buffered: true`, `scene.reaction_style: "detailed"` | 👀 → 🤔 → ✍️ → 💯 → 👌 | Single message after completion |
Reaction emoji meaning:
- 👀 — message received, processing started
- 🤔 — model is thinking
- ✍️ — executing a tool (bash, file read, etc.)
- 💯 — context compacting (long conversation optimization)
- 👌 — response complete
Buffered mode is the recommended choice when you want to see what the agent is doing (via reactions) but prefer clean, non-flickering text delivery. Internally, the agent streams events for reaction updates, but text is collected in a buffer and sent as a single message at the end.
Set scene.reaction_style: "off" to disable all reactions.
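For reference, the buffered mode corresponds to a `config.json` fragment like this (field names are taken from the modes table above; the exact nesting is an assumption):

```json
{
  "streaming": { "enabled": true, "buffered": true },
  "scene": { "reaction_style": "detailed" }
}
```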
## Maintenance (Built-in)
- Auto file cleanup — daily removal of old media files, output, tasks, and cron results (configurable retention)
- Memory maintenance — automatic deduplication, module size enforcement, orphan session cleanup, one-shot cron removal
- Default crons — a monthly memory review (an LLM-based quality check) and a daily security audit ship as crons, since both tasks require LLM judgment
## Memory System
- Modular structure — separate files per topic (user, decisions, infrastructure, tools, crons)
- Always Load modules injected at session start (user profile, key decisions)
- On Demand modules loaded when relevant (infrastructure, tool configs)
- Auto-reflection — periodic memory review and cleanup
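The Always Load / On Demand split can be pictured with a minimal sketch (illustrative only; the registry shape and selection logic are assumptions, not Sygen's internals):

```python
# Hypothetical module registry: each memory module carries a load policy.
MODULES = {
    "user":           {"policy": "always",    "path": "memory/user.md"},
    "decisions":      {"policy": "always",    "path": "memory/decisions.md"},
    "infrastructure": {"policy": "on_demand", "path": "memory/infrastructure.md"},
    "tools":          {"policy": "on_demand", "path": "memory/tools.md"},
}

def modules_for_session_start() -> list[str]:
    """Only Always Load modules are injected when a session begins;
    On Demand modules are pulled in later, when relevant."""
    return sorted(n for n, m in MODULES.items() if m["policy"] == "always")

assert modules_for_session_start() == ["decisions", "user"]
```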
## Quick Start

```bash
pip install -e .
sygen
```

On first run, Sygen creates a workspace at `~/.sygen/` with default tools, memory templates, and config.
## Configuration

All settings live in `~/.sygen/config/config.json`. Key sections:

| Section | What it controls |
|---|---|
| `model` | AI provider and model name |
| `streaming` | Real-time output (enabled, buffered, min/max chars, idle timeout) |
| `scene` | Emoji reactions (`reaction_style`: off/seen/detailed), technical footer |
| `cleanup` | Auto file cleanup (enabled, retention days per category) |
| `memory` | Memory maintenance (enabled, module line limit, session max age, check hour) |
| `timeouts` | Response timeouts per mode |
| `media` | Image quality, audio transcription |
| `mcp` | MCP servers (enabled, server list) |
| `skill_marketplace` | ClawHub integration (enabled, VirusTotal API key) |
## Architecture

```
User (Telegram/Matrix)
        ↓
Sygen Bot (Python, aiogram/matrix-nio)
        ↓
Orchestrator → CLI Service → AI Provider (Claude/Codex/Gemini)
    ↓                ↓
Sessions      Background Tasks (autonomous agents)
    ↓                ↓
Memory        Inter-Agent Bus (sync/async messaging)
    ↓
Cron / Webhooks / Tools
```
## MCP Setup

Sygen includes a native MCP client. To connect MCP servers, add to `config.json`:

```json
{
  "mcp": {
    "enabled": true,
    "servers": [
      {
        "name": "github",
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-github"],
        "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxx" }
      },
      {
        "name": "filesystem",
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/you/projects"]
      }
    ]
  }
}
```
Per-agent MCP servers can be configured in `agents.json` under the `mcp` field.
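For example, a per-agent entry in `agents.json` might look like this (the surrounding schema is an assumption; only the `mcp` field is documented above):

```json
{
  "researcher": {
    "mcp": {
      "servers": [
        {
          "name": "filesystem",
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/you/research"]
        }
      ]
    }
  }
}
```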
Commands:

- `/mcp list` — show connected servers and their tools
- `/mcp status` — health check of each server
- `/mcp refresh` — re-discover tools from all servers
Server options:

| Field | Default | Description |
|---|---|---|
| `name` | required | Unique server identifier |
| `command` | required | Executable (`npx`, `python3`, etc.) |
| `args` | `[]` | Command arguments |
| `env` | `{}` | Environment variables |
| `transport` | `"stdio"` | `"stdio"` for local, `"sse"` for remote |
| `url` | `""` | Server URL (SSE transport only) |
| `enabled` | `true` | Enable/disable without removing |
| `auto_restart` | `true` | Restart on crash |
MCP config supports hot-reload — changes to `config.json` are picked up without restarting the bot.
## Skill Marketplace Setup

Search and install community skills from ClawHub with built-in security scanning.

```json
{
  "skill_marketplace": {
    "enabled": true,
    "virustotal_api_key": "your-vt-api-key"
  }
}
```
The VirusTotal API key is optional (free at virustotal.com); without it, only static analysis runs.
Commands:

- `/skill search <query>` — search ClawHub for skills
- `/skill install <name>` — download, scan, show report, confirm install
- `/skill list` — list installed skills
- `/skill remove <name>` — remove a skill
Install flow:

1. Skill is downloaded to a temp directory
2. Static analysis scans all scripts for suspicious patterns (eval, exec, network calls, sensitive paths)
3. VirusTotal checks file hashes against 70+ antivirus engines
4. Security report is shown with clear status indicators
5. User confirms or cancels — nothing is installed without approval
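The static-analysis step can be illustrated with a toy scanner (a simplified sketch; Sygen's 20 real patterns and its report format are not shown here):

```python
import re

# A few illustrative patterns; the real scanner checks ~20 of them.
SUSPICIOUS = {
    "dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
    "network call":           re.compile(r"\b(requests\.|urllib\.|socket\.)"),
    "sensitive path":         re.compile(r"(~/\.ssh|/etc/passwd|\.aws/credentials)"),
}

def scan_script(source: str) -> list[str]:
    """Return the names of suspicious patterns found in a skill script."""
    return [name for name, pat in SUSPICIOUS.items() if pat.search(source)]

findings = scan_script("import requests\nrequests.get('http://x')\neval(payload)")
assert findings == ["dynamic code execution", "network call"]
```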
## Provider-Neutral Design
Sygen does not hardcode any AI provider or model in defaults. All crons, tools, and templates use null for provider/model fields — the user's configured backend is used automatically. Switching from Claude to Gemini requires only a config change, no code edits.
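As an illustration, a default cron definition would ship with null provider fields (the exact file layout is an assumption; the `null` convention is as described above):

```json
{
  "name": "memory-review",
  "schedule": "0 9 1 * *",
  "provider": null,
  "model": null,
  "prompt": "Review memory modules for stale or duplicate entries."
}
```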
## Updates

```bash
pip install --upgrade sygen
```
## Contributing
See CONTRIBUTING.md. By opening a PR you agree to the CLA.
## License
BSL 1.1 — free for personal use and small teams (<5 people). Converts to MIT on 2030-03-27.
## File details

### sygen-1.0.28.tar.gz

- Size: 1.1 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | `c6deb7f21ca46ea6a9b649b02a4e46a844522544b9ed51f59bd508ae65171831` |
| MD5 | `c42874ff61a1c262d89a16a130ae8b31` |
| BLAKE2b-256 | `2a64a6491531bd2c18f551515577823e679668780b798e0504ac1634899814b5` |
### sygen-1.0.28-py3-none-any.whl

- Size: 1.3 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | `0c7c14c0cfd92a7bf6de72bb1ce5eccf4fb2b0347bfa16f8215c1f976f1b0714` |
| MD5 | `a4d10056d6c934a7fc089bfc78a0ca40` |
| BLAKE2b-256 | `89be32ef4ee48ad304c84873347132239a6d3e9d847dafd115cfc53c3be42f16` |