cognis

Decoupled control plane for AI agents. Cognis is the controller and orchestration layer of the Openclaw ecosystem -- it manages agent definitions, interactive chat, delegated sub-sessions, tool execution routing, and integrates with external memory and guardrails services.

Non-blocking. The main chat is always responsive. Heavy work -- research, coding, multi-step tool calls -- is delegated to background sub-sessions. The user sees real-time progress and can continue chatting.

Decoupled. Cognis does not embed memory, guardrails, or session recording. It orchestrates them through pluggable provider interfaces. Swap any component without changing the controller.

Safe by default. Every tool call flows through guardrails evaluation. Non-bypassable tools always require safety checks. All actions are audited with full lineage.

Self-hosted. Python async controller, SQLite or PostgreSQL, no external dependencies beyond an LLM API key and the companion services. Your agents, conversations, and data stay under your control.

Part of the Openclaw ecosystem: Cognis controller, Intaris guardrails, Mnemory memory.

Features

  • Interactive chat with streaming -- WebSocket-based chat with real-time token streaming, tool call indicators, and delegation status cards.
  • Agent identity -- Create agents with name, personality, behavioral rules, and skills. Personality bootstrapped to Mnemory and evolves through interactions.
  • Sub-session delegation -- Three modes: Agent (delegate to different agent), Worker (same agent, focused task), Fork (parallel exploration). Main chat stays responsive.
  • Task queue + workflows -- Durable kanban-style tasks with priorities, dependencies, portable workflow templates, per-step tool profiles, step evaluation, and human-in-the-loop gates.
  • Controller-executor separation -- The controller decides; executors do. Ships with in-process, subprocess, and remote executors; remote executors communicate over WebSocket using JSON-RPC 2.0. Remote executors can provide local LLM inference alongside tool execution, and executor-hosted channel adapters are already supported for integrations that need user-local services such as Signal via signal-cli.
  • Memory integration -- Persistent recall and remember through Mnemory. Agent identity, user facts, episodic memory, and artifacts.
  • Guardrails integration -- Every tool call evaluated by Intaris. Escalation prompts with approve/deny. Session recording and behavioral analysis.
  • LLM provider abstraction -- Multi-provider support via LiteLLM. Configure providers and model routing through the UI, with model metadata, capability flags, and pricing fields.
  • MCP tool support -- Connect MCP servers over supported transports such as stdio, SSE, and streamable HTTP. Tools are discovered automatically, evaluated through guardrails, and executed on the executor.
  • Decision Engine -- Deterministic rules + lightweight LLM classifier decide whether a request runs inline or gets delegated to a background sub-session.
  • Context management -- Parallel context assembly (Mnemory recall + Intaris events + intention read via asyncio.gather; a minimal sketch follows this list). LLM-based compaction with mechanical fallback for long conversations.
  • Web UI -- SvelteKit application served by Cognis on :8080 by default, with setup flow, diagnostics, provider presets, and account management.
  • Installable PWA -- Cognis ships as a Progressive Web App. Install on desktop, iPhone, or Android for a dedicated window, offline app shell, safe-area-aware layout, and native-feel mobile navigation with a bottom tab bar and bottom-sheet drawers.
  • Channel adapters -- Connect agents to Signal, WhatsApp, Telegram, Discord, Slack, Matrix, IRC, Google Chat, and iMessage (via BlueBubbles) with DB-managed channel accounts and webhook/gateway integrations. Signal and BlueBubbles currently have the most complete setup documentation.
  • Secure pairing flow -- External senders can be required to redeem a short-lived verification code in the Cognis UI before the agent accepts their messages.
  • Polished workspace UX -- Global toasts, confirmation dialogs, keyboard shortcuts, mobile navigation, chat timestamps, and unsaved-change protection.
  • Degraded-mode guidance -- Provider outage banners, setup-incomplete states, retry affordances, and contextual chat/task failure messaging.
  • CLI -- Typer-based CLI for server management and administration.
  • Quick local bootstrap -- uvx cognis-controller creates local keys and a SQLite database, then serves the web UI on :8080.
  • JWT service auth -- Cognis issues ES256 JWTs. Mnemory and Intaris validate them. No API keys between services.
  • Encrypted secrets -- AES-256-GCM encrypted secret store for API keys and credentials. Injected into executors at runtime.
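
The parallel context assembly mentioned in the Context management bullet is plain asyncio fan-out. A minimal sketch follows; the provider clients and method names are hypothetical stand-ins, not the actual Cognis APIs -- only the pattern (fan out to Mnemory, Intaris, and the intention read, then merge) reflects the feature described above.

import asyncio

# Minimal sketch of parallel context assembly via asyncio.gather.
# The "memory", "guardrails", and "intentions" clients and their methods are
# hypothetical; the point is that the three reads run concurrently and a slow
# or failing provider does not block the chat turn.
async def assemble_context(memory, guardrails, intentions, user_id: str, query: str) -> dict:
    recall, events, intention = await asyncio.gather(
        memory.recall(user_id=user_id, query=query),   # Mnemory recall
        guardrails.recent_events(user_id=user_id),     # Intaris session events
        intentions.read(user_id=user_id),              # current intention
        return_exceptions=True,
    )
    return {
        "memory": None if isinstance(recall, Exception) else recall,
        "events": None if isinstance(events, Exception) else events,
        "intention": None if isinstance(intention, Exception) else intention,
    }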

Quick Start

Prerequisites

  • Python 3.12+
  • At least one LLM provider: OpenAI, Anthropic, or a local Ollama instance

Cognis needs Mnemory and Intaris running. Start Cognis once first so it can generate its JWT keypair and setup URL:

uvx cognis-controller           # Controller on :8080

Then start Mnemory and Intaris with Cognis's public key for JWT validation:

# Mnemory
MNEMORY_JWT_PUBLIC_KEY=~/.cognis/keys/public.pem uvx mnemory

# Intaris
INTARIS_JWT_PUBLIC_KEY=~/.cognis/keys/public.pem uvx intaris
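
On the receiving side, validation is standard ES256 signature verification against that public key. A minimal sketch with PyJWT, assuming no audience or issuer claims are enforced (those details are not documented here):

# Sketch: verifying a Cognis-issued service JWT with the shared public key.
# Only the algorithm (ES256) and key path (~/.cognis/keys/public.pem) come
# from the steps above; required claims are an open assumption.
from pathlib import Path

import jwt  # PyJWT, with the cryptography extra installed

public_key = Path("~/.cognis/keys/public.pem").expanduser().read_text()

def verify_service_token(token: str) -> dict:
    # Raises jwt.InvalidTokenError on a bad signature or an expired token.
    return jwt.decode(token, public_key, algorithms=["ES256"])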

If you started Cognis before setting provider credentials, restart it with an LLM credential available to LiteLLM:

OPENAI_API_KEY=sk-... uvx cognis-controller

On first start, Cognis creates ~/.cognis/ with auto-generated JWT keys, a secrets encryption key, and a SQLite database. When bundled UI assets are present, it serves the web UI on :8080 and prints a one-time setup URL for the first admin account:

Cognis started on http://localhost:8080

No users found. Complete setup at:
  http://localhost:8080/setup?token=<random_token>
This link expires in 15 minutes.

To complete first-run setup:

  1. Open the printed setup URL
  2. Create the first admin account in the web form
  3. Log in
  4. Open Settings → Providers and configure a provider preset
  5. Open Settings → Executors and enable the tool groups you want available
  6. Open Agents → New and create the first agent
  7. Start a conversation from Chat
  8. Optional: configure Channels and redeem pairing codes to link remote sender identities securely

Use Settings → System or the Getting started page for readiness checks and diagnostics.

The bundled UI also includes embedded user-facing documentation under Docs.

For headless setup, use the CLI:

cognis-controller admin create-user admin@example.com --name "Admin"

Architecture

Cognis is a decoupled control plane. It orchestrates, but does not own, memory or guardrails:

Cognis ecosystem overview (diagram)

Data                                   | Owner   | Storage
Users, agents, secrets, settings       | Cognis  | Cognis DB (SQLite / PostgreSQL)
Conversation & session metadata        | Cognis  | Cognis DB
Session content (messages, tool calls) | Intaris | Intaris event store
Safety decisions, behavioral analysis  | Intaris | Intaris DB
Persistent memory (facts, personality) | Mnemory | Mnemory (Qdrant)

Every major capability is a pluggable provider behind a Python Protocol interface; a minimal sketch follows this list:

  • MemoryProvider -- default: Mnemory
  • GuardrailsProvider -- default: Intaris
  • ExecutorProvider -- ships with in-process, subprocess, and remote WebSocket modes
  • LLMProvider -- default: LiteLLM
  • SecretsProvider -- default: encrypted DB
  • AuthProvider -- default: ES256 JWT
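
A minimal sketch of the Protocol pattern these providers follow. The method names and signatures below are illustrative, not the actual Cognis provider contract; the point is that any object with matching async methods can be swapped in without inheriting from anything.

from typing import Any, Protocol

# Illustrative shape of a pluggable provider. Cognis defines its own protocols
# (MemoryProvider, GuardrailsProvider, ...); the methods here are hypothetical.
class MemoryProvider(Protocol):
    async def recall(self, user_id: str, query: str, limit: int = 10) -> list[dict[str, Any]]:
        """Return memories relevant to a query."""
        ...

    async def remember(self, user_id: str, content: str, kind: str = "fact") -> None:
        """Persist a new memory item."""
        ...

# A trivial in-memory implementation satisfies the protocol structurally, so a
# different backend can replace Mnemory without touching the controller.
class InMemoryMemory:
    def __init__(self) -> None:
        self._items: list[dict[str, Any]] = []

    async def recall(self, user_id: str, query: str, limit: int = 10) -> list[dict[str, Any]]:
        return [i for i in self._items if i["user_id"] == user_id][:limit]

    async def remember(self, user_id: str, content: str, kind: str = "fact") -> None:
        self._items.append({"user_id": user_id, "content": content, "kind": kind})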

Configuration

There is no configuration file. Infrastructure config uses environment variables. Application config (LLM providers, model routing, session settings) is stored in the database and managed through the UI or API.

Environment Variables

Variable           | Default                                 | Description
COGNIS_DATA_DIR    | ~/.cognis                               | Data directory (keys, DB, secrets)
COGNIS_HOST        | 0.0.0.0                                 | Bind address
COGNIS_PORT        | 8080                                    | Port
COGNIS_MNEMORY_URL | http://localhost:8050                   | Mnemory service URL
COGNIS_INTARIS_URL | http://localhost:8060                   | Intaris service URL
DATABASE_URL       | sqlite+aiosqlite:///~/.cognis/cognis.db | Database URL
COGNIS_LOG_LEVEL   | info                                    | Log level

Auto-generated on first start (override with env vars for production; a key-generation sketch follows this list):

  • COGNIS_JWT_PRIVATE_KEY_PATH -- ES256 private key
  • COGNIS_JWT_PUBLIC_KEY_PATH -- ES256 public key (share with Mnemory/Intaris)
  • COGNIS_SECRETS_KEY_PATH -- AES-256-GCM encryption key
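
For production you can generate and pin your own keys instead of relying on the auto-generated ones. A sketch using the cryptography library; the raw-bytes on-disk format for the secrets key is an assumption, so compare against the files Cognis generates in ~/.cognis before overriding:

# Sketch: generating an ES256 (P-256) keypair and a 256-bit secrets key for
# COGNIS_JWT_PRIVATE_KEY_PATH, COGNIS_JWT_PUBLIC_KEY_PATH, and
# COGNIS_SECRETS_KEY_PATH. The secrets-key file format is assumed to be raw bytes.
import os
from pathlib import Path

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

out = Path("keys")
out.mkdir(exist_ok=True)

private_key = ec.generate_private_key(ec.SECP256R1())  # P-256 curve, used by ES256
(out / "private.pem").write_bytes(
    private_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    )
)
(out / "public.pem").write_bytes(
    private_key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
)
(out / "secrets.key").write_bytes(os.urandom(32))  # 256-bit key for AES-256-GCM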

Development

# Install with dev dependencies
uv pip install -e ".[dev]"

# Run server
uv run cognis-controller serve

# Run the SvelteKit UI in dev mode (not required for normal users)
cd ui && npm install && npm run dev

# Run tests
uv run pytest tests/unit/ -v          # Unit tests (fast, no services needed)
uv run pytest tests/contract/ -v      # Contract tests (need Mnemory + Intaris)
uv run pytest tests/integration/ -v   # Integration tests (need full stack)

# UI checks and build
cd ui && npm run check
cd ui && npm run test
cd ui && npm run build

# Lint and type check
ruff check cognis/ tests/
ruff format cognis/ tests/
mypy cognis/

CLI

cognis-controller serve                          # Start the controller
cognis-controller admin create-user <email>      # Create user (direct DB access)
cognis-controller admin reset-password <email>   # Reset password
cognis-controller admin api-key create <email>   # Create API key
cognis-controller status                         # Health + provider status
cognis-controller config init                    # Print env var template

Remote Executor

Run a standalone executor process that connects to a Cognis controller via WebSocket. The executor acts as a remote hand: the controller assigns tools and MCP setup, and decides whether LLM inference runs locally on the controller or is proxied through the executor.

# On the remote machine (via CLI flags)
cognis-executor \
    --controller-url wss://cognis.example.com/api/executor/ws \
    --token <jwt-token>

# Or via environment variables (preferred — avoids token in /proc/cmdline)
export COGNIS_CONTROLLER_URL=wss://cognis.example.com/api/executor/ws
export COGNIS_EXECUTOR_TOKEN=<jwt-token>
cognis-executor

For local development from a checkout, uv run cognis-executor and python -m cognis.executor are also available.

Or run as a Python module:

python -m cognis.executor \
    --controller-url wss://cognis.example.com/api/executor/ws \
    --token <jwt-token>

The executor authenticates with a JWT token generated by Cognis, communicates over encrypted WebSocket with per-message compression, and sends heartbeats every 15 seconds. TLS (wss://) is enforced for non-localhost connections. LLM providers remain configured normally in Cognis; setting a provider location to executor routes the same provider call through a matching executor instead of running it on the controller.

Executors are user-scoped. MCP servers are also user-scoped and are assigned to executors, not shared globally across users. Agents bind to one executor (explicitly or by labels) and inherit the effective tool set from that executor.

For multi-user production deployments, disable local executor modes with the DB-backed settings executors.allow_in_process=false and executors.allow_subprocess=false, then use only WebSocket executors.

Generating a token: Create the executor in Settings > Executors, then click Generate token. The token is displayed once — copy it or the ready-made CLI command. Alternatively, use the API: POST /api/v1/executors/{id}/token (admin only).
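
A sketch of the API route mentioned above, assuming the admin credential is sent as a Bearer token and the response JSON carries the executor token in a "token" field (neither detail is documented here):

# Sketch: creating an executor token via POST /api/v1/executors/{id}/token.
# The endpoint comes from the docs above; the auth header and response field
# name are assumptions.
import httpx

def create_executor_token(base_url: str, admin_token: str, executor_id: str) -> str:
    resp = httpx.post(
        f"{base_url}/api/v1/executors/{executor_id}/token",
        headers={"Authorization": f"Bearer {admin_token}"},
        timeout=10.0,
    )
    resp.raise_for_status()
    return resp.json()["token"]  # field name assumed

# Example: create_executor_token("http://localhost:8080", "<admin-jwt>", "<executor-id>")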

Subprocess mode: When using python -m cognis.executor, the token can also be piped via stdin (used internally by the subprocess executor to avoid exposing the token in process listings).

Systemd service templates for both the controller and executor are available in deploy/systemd/. See deploy/systemd/README.md for installation instructions covering system-level units (per-user executor template) and user-level units (no root required).

The same split is the deployment model for stateful channel adapters. For example, a user can either run Signal's signal-cli REST API next to a Cognis executor they control or let the executor run signal-cli directly via JSON-RPC, while the cloud controller continues to orchestrate pairing, turns, and outbound delivery without owning the Signal session state itself.

Status

Available today:

  • Interactive chat, agents, tasks, workflows, schedules, channels, and the bundled web UI
  • In-process, subprocess, and remote WebSocket executors
  • Executor-routed inference and executor-hosted Signal direct mode
  • Mnemory and Intaris integrations, MCP tools, encrypted secrets, setup diagnostics, and admin CLI flows

Still ahead:

  • Docker and Kubernetes executor backends
  • Federation and cryptographic agent identity
  • Broader production hardening for multi-user and multi-replica deployments

See docs/specs/ for the full specification set and docs/specs/implementation/ for the implementation stage tracker.

Documentation

License

Business Source License 1.1, same licensing model as Intaris.

  • Free for your own internal business operations, including internal deployment
  • Modifications and redistribution allowed when not used commercially
  • Converts to Apache License 2.0 on 2030-03-15

See LICENSE for the full terms.

