
Sygen

License: BSL 1.1 · Python 3.11+

Self-hosted AI assistant framework with multi-agent orchestration, background tasks, and persistent memory.

Why Sygen?

Most AI chatbot frameworks give you a single bot that answers questions. Sygen gives you an autonomous agent system that runs on your own hardware, coordinates multiple AI backends (Claude, Codex, Gemini), and manages long-running workflows without babysitting. It persists memory across sessions, schedules recurring tasks, connects to 3000+ external services via MCP, and lets you spin up sub-agents that operate independently — all from a Telegram or Matrix chat.

Features

Core

  • Multi-agent system — supervisor + sub-agents, each with own bot and workspace
  • Background task delegation — offload long work to autonomous agents, keep chatting, get results back
  • Persistent memory — modular memory system with Always Load / On Demand separation
  • Named sessions — multiple isolated conversation contexts per chat
  • Inter-agent communication — sync and async messaging between agents with shared knowledge base

Transports & Providers

  • Telegram (primary) + Matrix support
  • Claude Code, Codex CLI, Gemini CLI — pluggable AI backends
  • Streaming output — real-time response delivery with configurable buffering

MCP (Model Context Protocol)

  • Native MCP client — connects to any MCP server, discovers tools, routes calls
  • 3000+ integrations — GitHub, Google Drive, Slack, Docker, databases, and more
  • Auto-lifecycle — starts servers on boot, health checks every 30s, auto-restart on crash
  • Hot-reload — add/remove servers without restarting the bot
  • /mcp command — list servers, check status, refresh tools from Telegram

Skill Marketplace (ClawHub)

  • 13,000+ community skills — search and install from OpenClaw's ClawHub registry
  • Security scanning — static analysis (20 suspicious patterns) + VirusTotal API before every install
  • User always decides — full security report shown, install only on explicit confirmation
  • Zero dependencies — no npm/OpenClaw required, pure HTTP API integration
  • /skill command — search, install, list, remove from Telegram

Automation

  • Cron scheduler — recurring tasks with timezone support, plus script_mode for direct script execution without LLM
  • Webhook server — HTTP endpoints that trigger agent actions
  • Docker sandbox — optional secure execution for untrusted code
  • Silent output — [SILENT] marker lets cron/webhook tasks suppress delivery when there is nothing to report

Observability

  • Execution traces — every cron, task, and webhook run is logged to SQLite (traces.db)
  • /logs command — view recent traces, filter by type (/logs cron), errors (/logs errors), or name
  • Auto-rotation — traces older than 30 days cleaned up automatically, no maintenance needed

Built-in Tools (Defaults)

  • Web search — Perplexity Sonar (primary) + DuckDuckGo (fallback, no API key needed)
  • Perplexity deep search — sonar-pro for research-heavy queries
  • Audio transcription — local whisper.cpp, no external APIs
  • YouTube analysis — metadata, subtitles, frame extraction, audio transcription
  • File converter — Markdown→PDF, DOCX→TXT, XLSX→CSV, HEIC→JPG
  • Large file sender — local fileshare (auto-detect) with 0x0.st fallback
  • Quick notes — structured idea capture template

UX

  • Reply & quote context — reply to or quote any message, and the agent sees the cited text as context
  • Mobile-friendly tables — Markdown tables are auto-converted to grouped lists for Telegram readability
  • Emoji status reactions — track agent progress on your original message
  • Configurable streaming — three combined modes (see table below)
  • Technical footer — optional model, tokens, cost, duration display
  • Inline buttons — quick-reply buttons in Telegram messages

Streaming & Reaction Modes

| Mode | Config | Reactions | Text delivery |
|------|--------|-----------|---------------|
| Quiet | `streaming.enabled: false`, `scene.reaction_style: "seen"` | 👀 → 👌 | Single message after completion |
| Full streaming | `streaming.enabled: true`, `scene.reaction_style: "detailed"` | 👀 → 🤔 → ✍️ → 💯 → 👌 | Real-time, dynamically updated |
| Buffered | `streaming.enabled: true`, `streaming.buffered: true`, `scene.reaction_style: "detailed"` | 👀 → 🤔 → ✍️ → 💯 → 👌 | Single message after completion |

Reaction emoji meaning:

  • 👀 — message received, processing started
  • 🤔 — model is thinking
  • ✍️ — executing a tool (bash, file read, etc.)
  • 💯 — context compacting (long conversation optimization; use /compact to trigger manually)
  • 👌 — response complete

Buffered mode is the recommended choice when you want to see what the agent is doing (via reactions) but prefer clean, non-flickering text delivery. Internally, the agent streams events for reaction updates, but text is collected in a buffer and sent as a single message at the end.

Set scene.reaction_style: "off" to disable all reactions.
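The recommended buffered mode from the table above corresponds to this fragment in config.json:

```json
{
  "streaming": {
    "enabled": true,
    "buffered": true
  },
  "scene": {
    "reaction_style": "detailed"
  }
}
```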

Maintenance (Built-in)

  • Auto file cleanup — daily removal of old media files, output, tasks, and cron results (configurable retention)
  • Memory maintenance — automatic deduplication, module size enforcement, orphan session cleanup, one-shot cron removal
  • Default crons — a monthly memory review (LLM-based quality check) and a daily security audit are installed as cron tasks, since both require LLM reasoning

Memory System

  • Modular structure — separate files per topic (user, decisions, infrastructure, tools, crons)
  • Always Load modules injected at session start (user profile, key decisions)
  • On Demand modules loaded when relevant (infrastructure, tool configs)
  • Auto-reflection — periodic memory review and cleanup
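One way to picture the modular layout is a file-per-topic tree like the following. This is illustrative only — the exact paths and file names under ~/.sygen/ are assumptions; the module topics come from the list above:

```
~/.sygen/memory/
├── user.md            # Always Load: user profile
├── decisions.md       # Always Load: key decisions
├── infrastructure.md  # On Demand: servers, networks
├── tools.md           # On Demand: tool configs
└── crons.md           # On Demand: scheduled-task notes
```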

Quick Start

Prerequisites

  • Python 3.11+
  • A Telegram bot token (create one via @BotFather on Telegram)

1. Install Sygen

pip install sygen

2. Create the config directory and minimal config

mkdir -p ~/.sygen/config
cat > ~/.sygen/config/config.json << 'EOF'
{
  "telegram_token": "YOUR_TELEGRAM_BOT_TOKEN",
  "model": "sonnet",
  "allowed_user_ids": [YOUR_TELEGRAM_USER_ID]
}
EOF

Replace YOUR_TELEGRAM_BOT_TOKEN and YOUR_TELEGRAM_USER_ID with your actual values. Find your user ID by messaging @userinfobot on Telegram.

3. Start Sygen

sygen

On first run, Sygen creates a workspace at ~/.sygen/ with default tools, memory templates, and config.

4. Send your first message

Open your bot in Telegram and send any message. Sygen will respond using the configured AI backend.

Usage Examples

Basic bot setup (minimal config.json)

{
  "telegram_token": "123456:ABC-DEF...",
  "model": "sonnet",
  "allowed_user_ids": [123456789],
  "streaming": {
    "enabled": true,
    "buffered": true
  },
  "scene": {
    "reaction_style": "detailed"
  }
}

Enable RAG pipeline

Add a rag section to config.json to let the agent search and reference your local documents:

{
  "rag": {
    "enabled": true,
    "chunk_size": 512,
    "top_k_final": 5,
    "reranker_model": "antoinelouis/colbert-xm"
  }
}

The RAG pipeline indexes workspace files and memory modules automatically — no external APIs or vector databases required. Uses BM25 + vector hybrid search with ColBERT v2 reranking. Supported file types: .md, .txt, .yaml, .yml (configurable via workspace_glob_patterns).
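Hybrid search merges the BM25 ranking with the vector ranking before the reranking stage. Sygen's exact fusion step is not documented here; a common, simple approach is Reciprocal Rank Fusion (RRF), sketched below as a generic illustration — the function name and constant `k=60` are conventional choices, not Sygen internals:

```python
def rrf_fuse(bm25_ranking, vector_ranking, k=60):
    """Merge two rankings (lists of doc ids, best first) with
    Reciprocal Rank Fusion: score(d) = sum over rankings of 1/(k + rank)."""
    scores = {}
    for ranking in (bm25_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first; this merged list then goes to the reranker.
    return sorted(scores, key=scores.get, reverse=True)

# A doc ranked well by both retrievers ("b") beats one ranked well by only one.
fused = rrf_fuse(["a", "b", "c"], ["b", "c", "a"])
```

The fused top-k list is what a ColBERT-style reranker would then reorder by fine-grained relevance.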

Set up a cron task

From your Telegram chat with the bot:

You:   Create a cron task that checks Hacker News top stories every morning at 8am
       and sends me a summary of the top 5.

Sygen: Created cron task "hn-morning" — runs daily at 08:00 (Europe/Berlin).
       Next run: tomorrow at 08:00.

You:   /cron list

Sygen: Active cron tasks:
       • hn-morning — daily 08:00 — Check HN top stories
       • memory-review — monthly — Memory quality review

Cron tasks run as autonomous agent sessions with full tool access. Use [SILENT] in the task description to suppress output when there is nothing to report.

Script mode — for tasks that just run a script (dashboards, reports, monitoring), use script_mode to bypass the LLM agent entirely. The script's stdout is sent directly to Telegram: no tokens consumed, no LLM variability:

{
  "id": "business-dashboard",
  "script_mode": true,
  "script": "scripts/dashboard.py",
  "schedule": "0 19 * * *"
}

Multi-agent setup

Define sub-agents in ~/.sygen/agents.json. Each agent gets its own Telegram bot, workspace, and memory:

[
  {
    "name": "researcher",
    "telegram_token": "111111:AAA-BBB...",
    "model": "sonnet",
    "allowed_user_ids": [123456789]
  },
  {
    "name": "coder",
    "telegram_token": "222222:CCC-DDD...",
    "model": "o4-mini",
    "allowed_user_ids": [123456789]
  }
]

The main agent can delegate tasks to sub-agents via sync or async messaging. Each sub-agent also works as a standalone bot that you can chat with directly.

Configuration

All settings in ~/.sygen/config/config.json. Key sections:

| Section | What it controls |
|---------|------------------|
| provider / model | AI backend (claude/codex/gemini) and model name (sonnet, opus, flash) |
| streaming | Real-time output (enabled, buffered, min/max chars, idle timeout) |
| scene | Emoji reactions (reaction_style: off/seen/detailed), technical footer |
| cleanup | Auto file cleanup (enabled, retention days per category) |
| memory | Memory maintenance (enabled, module line limit, session max age, check hour) |
| timeouts | Response timeouts per mode |
| image / transcription | Image quality settings, audio transcription (whisper model, language) |
| mcp | MCP servers (enabled, server list) |
| skill_marketplace | ClawHub integration (enabled, VirusTotal API key) |

Architecture

User (Telegram/Matrix)
  ↓
Sygen Bot (Python, aiogram/matrix-nio)
  ↓
Orchestrator → CLI Service → AI Provider (Claude/Codex/Gemini)
  ↓                ↓
Sessions      Background Tasks (autonomous agents)
  ↓                ↓
Memory        Inter-Agent Bus (sync/async messaging)
  ↓
Cron / Webhooks / Tools

MCP Setup

Sygen includes a native MCP client. To connect MCP servers, add to config.json:

{
  "mcp": {
    "enabled": true,
    "servers": [
      {
        "name": "github",
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-github"],
        "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxx" }
      },
      {
        "name": "filesystem",
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/you/projects"]
      }
    ]
  }
}

Per-agent MCP servers can be configured in agents.json under the mcp field.
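Assuming the per-agent mcp field mirrors the top-level servers format (an assumption — the exact agents.json schema is not shown above), an agent with its own MCP server might look like:

```json
[
  {
    "name": "researcher",
    "telegram_token": "111111:AAA-BBB...",
    "model": "sonnet",
    "allowed_user_ids": [123456789],
    "mcp": {
      "servers": [
        {
          "name": "github",
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-github"]
        }
      ]
    }
  }
]
```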

Commands:

  • /mcp list — show connected servers and their tools
  • /mcp status — health check of each server
  • /mcp refresh — re-discover tools from all servers

Server options:

| Field | Default | Description |
|-------|---------|-------------|
| name | required | Unique server identifier |
| command | required | Executable (npx, python3, etc.) |
| args | [] | Command arguments |
| env | {} | Environment variables |
| transport | "stdio" | "stdio" for local, "sse" for remote |
| url | "" | Server URL (SSE transport only) |
| enabled | true | Enable/disable without removing |
| auto_restart | true | Restart on crash |

MCP config supports hot-reload — changes to config.json are picked up without restarting the bot.

Skill Marketplace Setup

Search and install community skills from ClawHub with built-in security scanning.

{
  "skill_marketplace": {
    "enabled": true,
    "virustotal_api_key": "your-vt-api-key"
  }
}

VirusTotal API key is optional (free at virustotal.com). Without it, only static analysis runs.

Commands:

  • /skill search <query> — search ClawHub for skills
  • /skill install <name> — download, scan, show report, confirm install
  • /skill list — list installed skills
  • /skill remove <name> — remove a skill

Install flow:

  1. Skill is downloaded to a temp directory
  2. Static analysis scans all scripts for suspicious patterns (eval, exec, network calls, sensitive paths)
  3. VirusTotal checks file hashes against 70+ antivirus engines
  4. Security report is shown with clear status indicators
  5. User confirms or cancels — nothing is installed without approval

Provider-Neutral Design

Sygen does not hardcode any AI provider or model in defaults. All crons, tools, and templates use null for provider/model fields — the user's configured backend is used automatically. Switching from Claude to Gemini requires only a config change, no code edits.
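As a hypothetical illustration of this convention, a shipped cron definition leaves provider and model as null so the user's configured backend is picked up at run time (the id and schedule values here are made up):

```json
{
  "id": "memory-review",
  "provider": null,
  "model": null,
  "schedule": "0 9 1 * *"
}
```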

Comparison

| Feature | Sygen | MemGPT / Letta | OpenClaw | Typical chatbot frameworks |
|---------|-------|----------------|----------|----------------------------|
| Self-hosted, runs on your machine | Yes | Yes | Yes | Varies |
| Multi-agent orchestration | Built-in (supervisor + sub-agents) | Single agent | Single agent | Usually not |
| Persistent memory (cross-session) | Modular file-based, always load / on demand | Tiered memory (core/archival/recall) | Via skills | Manual or none |
| RAG pipeline (local, no external APIs) | Built-in | Requires external vector DB | No | Requires setup |
| Background task delegation | Autonomous agents in separate processes | No | No | No |
| Cron / scheduled tasks | Native, timezone-aware, with silent mode | No | No | Rarely |
| Webhooks (inbound HTTP triggers) | Native | No | No | Rarely |
| MCP protocol (3000+ integrations) | Native client with auto-lifecycle | No | No | No |
| Skill marketplace (13k+ skills) | ClawHub with security scanning | No | ClawHub (origin) | No |
| Multiple AI backends | Claude Code, Codex CLI, Gemini CLI | OpenAI only | Claude Code | Usually one |
| Transport | Telegram + Matrix | Web UI | CLI only | Web / API |
| Streaming with reaction indicators | Three configurable modes | Basic streaming | No | Varies |
| Execution traces / observability | SQLite-backed /logs command | Basic logging | No | Varies |

Documentation

Full documentation is available at https://alexeymorozua.github.io/sygen/.

Covers installation, configuration reference, agent setup, tool development, memory system internals, MCP integration, and skill creation.

Updates

pip install --upgrade sygen

Upgrading from pre-1.3.14? 1.3.14 hardened per-user ACLs and changed how Telegram senders resolve to a role. If the bot starts asking for approval on every file edit after the upgrade, your Telegram ID is not yet mapped to a user. See Upgrading from pre-1.3.14 for the migration checklist.

Upgrading to 1.3.29? The admin REST / WebSocket surface changed: /api/system/status dropped its counter fields (agents, tasks_active, …) — read them from the new /api/dashboard/summary.counters instead. See docs/migration-1.3.29.md and CHANGELOG.md for the full breakdown.

Contributing

See CONTRIBUTING.md. By opening a PR you agree to the CLA.

License

BSL 1.1 — free for personal use and small teams (<5 people). Converts to MIT on 2030-03-27.
