cognis
Decoupled control plane for AI agents. Cognis is the controller and orchestration layer of the Openclaw ecosystem -- it manages agent definitions, interactive chat, delegated sub-sessions, tool execution routing, and integrates with external memory and guardrails services.
Non-blocking. The main chat is always responsive. Heavy work -- research, coding, multi-step tool calls -- is delegated to background sub-sessions. The user sees real-time progress and can continue chatting.
Decoupled. Cognis does not embed memory, guardrails, or session recording. It orchestrates them through pluggable provider interfaces. Swap any component without changing the controller.
Safe by default. Every tool call flows through guardrails evaluation. Non-bypassable tools always require safety checks. All actions are audited with full lineage.
Self-hosted. Python async controller, SQLite or PostgreSQL, no external dependencies beyond an LLM API key and the companion services. Your agents, conversations, and data stay under your control.
Part of the Openclaw ecosystem: Cognis controller, Intaris guardrails, Mnemory memory.
Features
- Interactive chat with streaming -- WebSocket-based chat with real-time token streaming, tool call indicators, and delegation status cards.
- Agent identity -- Create agents with name, personality, behavioral rules, and skills. Personality bootstrapped to Mnemory and evolves through interactions.
- Sub-session delegation -- Three modes: Agent (delegate to different agent), Worker (same agent, focused task), Fork (parallel exploration). Main chat stays responsive.
- Task queue + workflows -- Durable kanban-style tasks with priorities, dependencies, portable workflow templates, step evaluation, and human-in-the-loop gates.
- Controller-executor separation -- The controller decides; executors do. Ships with in-process, subprocess, and remote WebSocket executors speaking JSON-RPC 2.0 over WebSocket. Remote executors can provide local LLM inference alongside tool execution, and executor-hosted channel adapters are already supported for integrations that need user-local services, such as Signal via `signal-cli`.
- Memory integration -- Persistent recall and remember through Mnemory. Agent identity, user facts, episodic memory, and artifacts.
- Guardrails integration -- Every tool call evaluated by Intaris. Escalation prompts with approve/deny. Session recording and behavioral analysis.
- LLM provider abstraction -- Multi-provider support via LiteLLM. Configure providers and model routing through the UI, with model metadata, capability flags, and pricing fields.
- MCP tool support -- Connect MCP servers over supported transports such as stdio, SSE, and streamable HTTP. Tools are discovered automatically, evaluated through guardrails, and executed on the executor.
- Decision Engine -- Deterministic rules + lightweight LLM classifier decide whether a request runs inline or gets delegated to a background sub-session.
- Context management -- Parallel context assembly (Mnemory recall, Intaris events, and intention read via `asyncio.gather`). LLM-based compaction with a mechanical fallback for long conversations.
- Web UI -- SvelteKit application served by Cognis on `:8080` by default, with setup flow, diagnostics, provider presets, and account management.
- Channel adapters -- Connect agents to Signal, WhatsApp, Telegram, Discord, Slack, Matrix, IRC, Google Chat, and iMessage (via BlueBubbles) with DB-managed channel accounts and webhook/gateway integrations. Signal and BlueBubbles currently have the most complete setup documentation.
- Secure pairing flow -- External senders can be required to redeem a short-lived verification code in the Cognis UI before the agent accepts their messages.
- Polished workspace UX -- Global toasts, confirmation dialogs, keyboard shortcuts, mobile navigation, chat timestamps, and unsaved-change protection.
- Degraded-mode guidance -- Provider outage banners, setup-incomplete states, retry affordances, and contextual chat/task failure messaging.
- CLI -- Typer-based CLI for server management and administration.
- Quick local bootstrap -- `uvx cognis-controller` creates local keys and a SQLite database, then serves the web UI on `:8080`.
- JWT service auth -- Cognis issues ES256 JWTs. Mnemory and Intaris validate them. No API keys between services.
- Encrypted secrets -- AES-256-GCM encrypted secret store for API keys and credentials. Injected into executors at runtime.
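The parallel context assembly described above can be sketched with plain `asyncio`. The coroutine names below (`recall_memories`, `fetch_guardrail_events`, `read_intention`) are illustrative stand-ins for the real Mnemory/Intaris client calls, not the actual Cognis API:

```python
import asyncio

# Hypothetical stand-ins for the Mnemory/Intaris client methods.
async def recall_memories(query: str) -> list[str]:
    await asyncio.sleep(0.01)  # simulate network I/O
    return [f"memory about {query}"]

async def fetch_guardrail_events(session_id: str) -> list[str]:
    await asyncio.sleep(0.01)
    return [f"event for {session_id}"]

async def read_intention(session_id: str) -> str:
    await asyncio.sleep(0.01)
    return "research"

async def assemble_context(query: str, session_id: str) -> dict:
    # The three reads are independent, so they run concurrently:
    # total latency is the slowest call, not the sum of all three.
    memories, events, intention = await asyncio.gather(
        recall_memories(query),
        fetch_guardrail_events(session_id),
        read_intention(session_id),
    )
    return {"memories": memories, "events": events, "intention": intention}

ctx = asyncio.run(assemble_context("rust build errors", "sess-1"))
print(ctx["intention"])  # -> research
```

Because `asyncio.gather` preserves argument order in its results, the unpacking stays stable even as providers are swapped.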
Quick Start
Prerequisites
- Python 3.12+
- One LLM option: OpenAI, Anthropic, or a local Ollama instance
Cognis needs Mnemory and Intaris running. Start Cognis once first so it can generate its JWT keypair and setup URL:
uvx cognis-controller # Controller on :8080
Then start Mnemory and Intaris with Cognis's public key for JWT validation:
# Mnemory
MNEMORY_JWT_PUBLIC_KEY=~/.cognis/keys/public.pem uvx mnemory
# Intaris
INTARIS_JWT_PUBLIC_KEY=~/.cognis/keys/public.pem uvx intaris
If you started Cognis before setting provider credentials, restart it with an LLM credential available to LiteLLM:
OPENAI_API_KEY=sk-... uvx cognis-controller
On first start, Cognis creates ~/.cognis/ with auto-generated JWT keys, a secrets encryption key, and a SQLite database. When bundled UI assets are present, it serves the web UI on :8080 and prints a one-time setup URL for the first admin account:
Cognis started on http://localhost:8080
No users found. Complete setup at:
http://localhost:8080/setup?token=<random_token>
This link expires in 15 minutes.
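A one-time setup link like the one printed above can be modeled with the standard library. The 15-minute TTL matches the printed notice; the token length and in-memory storage are illustrative assumptions, not Cognis internals:

```python
import secrets
import time

SETUP_TOKEN_TTL = 15 * 60  # seconds, matching the printed expiry notice

def issue_setup_token() -> tuple[str, float]:
    """Return a URL-safe one-time token and its expiry timestamp."""
    return secrets.token_urlsafe(32), time.time() + SETUP_TOKEN_TTL

def is_valid(token: str, issued: tuple[str, float]) -> bool:
    expected, expires_at = issued
    # compare_digest gives a constant-time comparison on the token value.
    return time.time() < expires_at and secrets.compare_digest(token, expected)

issued = issue_setup_token()
print(is_valid(issued[0], issued))      # True: fresh, matching token
print(is_valid("wrong-token", issued))  # False: mismatch
```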
To complete setup:
- Open the printed setup URL
- Create the first admin account in the web form
- Log in
- Open Settings → Providers and configure a provider preset
- Open Settings → Executors and enable the tool groups you want available
- Open Agents → New and create the first agent
- Start a conversation from Chat
- Optional: configure Channels and redeem pairing codes to link remote sender identities securely
Use Settings → System or Getting started for readiness checks and diagnostics.
The bundled UI also includes embedded user-facing documentation under Docs.
For headless setup, use the CLI:
cognis-controller admin create-user admin@example.com --name "Admin"
Architecture
Cognis is a decoupled control plane. It orchestrates, but does not own, memory or guardrails:
| Data | Owner | Storage |
|---|---|---|
| Users, agents, secrets, settings | Cognis | Cognis DB (SQLite / PostgreSQL) |
| Conversation & session metadata | Cognis | Cognis DB |
| Session content (messages, tool calls) | Intaris | Intaris event store |
| Safety decisions, behavioral analysis | Intaris | Intaris DB |
| Persistent memory (facts, personality) | Mnemory | Mnemory (Qdrant) |
Every major capability is a pluggable provider behind a Python Protocol interface:
- `MemoryProvider` -- default: Mnemory
- `GuardrailsProvider` -- default: Intaris
- `ExecutorProvider` -- ships with in-process, subprocess, and remote WebSocket modes
- `LLMProvider` -- default: LiteLLM
- `SecretsProvider` -- default: encrypted DB
- `AuthProvider` -- default: ES256 JWT
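A provider slot of this kind can be sketched with `typing.Protocol`. The method names below are illustrative, not the actual Cognis interface:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class MemoryProvider(Protocol):
    """Minimal shape a pluggable memory backend must satisfy (illustrative)."""

    async def recall(self, query: str, limit: int = 5) -> list[str]: ...
    async def remember(self, fact: str) -> None: ...

class InMemoryProvider:
    """Trivial stand-in for when no Mnemory service is configured."""

    def __init__(self) -> None:
        self._facts: list[str] = []

    async def recall(self, query: str, limit: int = 5) -> list[str]:
        return [f for f in self._facts if query in f][:limit]

    async def remember(self, fact: str) -> None:
        self._facts.append(fact)

# Structural typing: InMemoryProvider never subclasses MemoryProvider,
# yet it satisfies the protocol and can be swapped in for Mnemory.
print(isinstance(InMemoryProvider(), MemoryProvider))  # True
```

Swapping a backend then means registering a different class that satisfies the same protocol; the controller code itself never changes.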
Configuration
There is no configuration file. Infrastructure config uses environment variables. Application config (LLM providers, model routing, session settings) is stored in the database and managed through the UI or API.
Environment Variables
| Variable | Default | Description |
|---|---|---|
| `COGNIS_DATA_DIR` | `~/.cognis` | Data directory (keys, DB, secrets) |
| `COGNIS_HOST` | `0.0.0.0` | Bind address |
| `COGNIS_PORT` | `8080` | Port |
| `COGNIS_MNEMORY_URL` | `http://localhost:8050` | Mnemory service URL |
| `COGNIS_INTARIS_URL` | `http://localhost:8060` | Intaris service URL |
| `DATABASE_URL` | `sqlite+aiosqlite:///~/.cognis/cognis.db` | Database URL |
| `COGNIS_LOG_LEVEL` | `info` | Log level |
Auto-generated on first start (override with env vars for production):
- `COGNIS_JWT_PRIVATE_KEY_PATH` -- ES256 private key
- `COGNIS_JWT_PUBLIC_KEY_PATH` -- ES256 public key (share with Mnemory/Intaris)
- `COGNIS_SECRETS_KEY_PATH` -- AES-256-GCM encryption key
Development
# Install with dev dependencies
uv pip install -e ".[dev]"
# Run server
uv run cognis-controller serve
# Run the SvelteKit UI in dev mode (not required for normal users)
cd ui && npm install && npm run dev
# Run tests
uv run pytest tests/unit/ -v # Unit tests (fast, no services needed)
uv run pytest tests/contract/ -v # Contract tests (need Mnemory + Intaris)
uv run pytest tests/integration/ -v # Integration tests (need full stack)
# UI checks and build
cd ui && npm run check
cd ui && npm run test
cd ui && npm run build
# Lint and type check
ruff check cognis/ tests/
ruff format cognis/ tests/
mypy cognis/
CLI
cognis-controller serve # Start the controller
cognis-controller admin create-user <email>
# Create user (direct DB access)
cognis-controller admin reset-password <email>
# Reset password
cognis-controller admin api-key create <email>
# Create API key
cognis-controller status # Health + provider status
cognis-controller config init # Print env var template
Remote Executor
Run a standalone executor process that connects to a Cognis controller over WebSocket. The executor is a remote hand: the controller assigns tools and MCP setup, and decides whether LLM inference runs locally on the controller or is proxied through the executor.
# On the remote machine (via CLI flags)
cognis-executor \
--controller-url wss://cognis.example.com/api/executor/ws \
--token <jwt-token>
# Or via environment variables (preferred — avoids token in /proc/cmdline)
export COGNIS_CONTROLLER_URL=wss://cognis.example.com/api/executor/ws
export COGNIS_EXECUTOR_TOKEN=<jwt-token>
cognis-executor
For local development from a checkout, `uv run cognis-executor` and `python -m cognis.executor` are also available.
Or run as a Python module:
python -m cognis.executor \
--controller-url wss://cognis.example.com/api/executor/ws \
--token <jwt-token>
The executor authenticates with a JWT token generated by Cognis, communicates over encrypted WebSocket with per-message compression, and sends heartbeats every 15 seconds. TLS (`wss://`) is enforced for non-localhost connections. LLM providers remain configured normally in Cognis; setting a provider location to `executor` routes the same provider call through a matching executor instead of running it on the controller.
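The wire format between controller and executor is JSON-RPC 2.0, which can be framed with nothing but the standard library. The method name and parameters below are hypothetical, chosen only to illustrate a tool-execution round trip:

```python
import itertools
import json

_ids = itertools.count(1)  # monotonically increasing request ids

def rpc_request(method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request frame for the WebSocket."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    })

def rpc_result(request: str, result: object) -> str:
    """Build the matching response frame for an incoming request."""
    req = json.loads(request)
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# Hypothetical round trip: controller asks, executor answers.
frame = rpc_request("tools.execute", {"tool": "shell", "args": ["ls"]})
reply = rpc_result(frame, {"stdout": "README.md\n", "exit_code": 0})
print(json.loads(reply)["result"]["exit_code"])  # 0
```

Matching responses to requests by `id` is what lets many tool calls be in flight on one WebSocket at once.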
Executors are user-scoped. MCP servers are also user-scoped and are assigned to executors, not shared globally across users. Agents bind to one executor (explicitly or by labels) and inherit the effective tool set from that executor.
For multi-user production deployments, disable local executor modes with the DB-backed settings `executors.allow_in_process=false` and `executors.allow_subprocess=false`, then use only WebSocket executors.
Generating a token: Create the executor in Settings → Executors, then click Generate token. The token is displayed once -- copy it or the ready-made CLI command. Alternatively, use the API: `POST /api/v1/executors/{id}/token` (admin only).
Subprocess mode: When using `python -m cognis.executor`, the token can also be piped via stdin (used internally by the subprocess executor to avoid exposing the token in process listings).
Systemd service templates for both the controller and executor are available in deploy/systemd/. See deploy/systemd/README.md for installation instructions covering system-level units (per-user executor template) and user-level units (no root required).
The same split is the deployment model for stateful channel adapters.
For example, a user can either run Signal's signal-cli REST API next to a
Cognis executor they control or let the executor run signal-cli directly via
JSON-RPC, while the cloud controller continues to orchestrate pairing, turns,
and outbound delivery without owning the Signal session state itself.
Status
Available today:
- Interactive chat, agents, tasks, workflows, schedules, channels, and the bundled web UI
- In-process, subprocess, and remote WebSocket executors
- Executor-routed inference and executor-hosted Signal direct mode
- Mnemory and Intaris integrations, MCP tools, encrypted secrets, setup diagnostics, and admin CLI flows
Still ahead:
- Docker and Kubernetes executor backends
- Federation and cryptographic agent identity
- Broader production hardening for multi-user and multi-replica deployments
See docs/specs/ for the full specification set and docs/specs/implementation/ for the implementation stage tracker.
Documentation
- Documentation Index
- Getting Started
- Architecture
- Configuring Providers
- Creating Agents
- Settings
- Using Chat
- Managing Tasks
- Schedules
- Workflows
- Channels
- Executors
- Tools and Skills
- Troubleshooting
License
Business Source License 1.1, same licensing model as Intaris.
- Free for your own internal business operations, including internal deployment
- Modifications and redistribution allowed when not used commercially
- Converts to Apache License 2.0 on 2030-03-15
See LICENSE for the full terms.