Lumen

Open-source AI agent engine. Modular. No limits.

"An agent you can shape without code."

Quickstart · Docker · Dokploy · Architecture · Manifesto · Full Manifesto · Spec · Changelog · Contributing


What is Lumen?

Lumen is a downloadable AI agent framework that works from minute zero. Install it, run it, and you have a working assistant. From there, shape it however you want: pick a personality, install modules, plug in connectors, swap providers. No code required for everyday use.

Not a SaaS. Not a platform. Not a chatbot. A framework you own and run on your machine.

Think WordPress, but for AI agents.

  • Install it → working assistant
  • Pick a personality → different behavior, same core
  • Install a module → new capability or integration
  • Bring your own module → load any custom module.yaml

Quickstart

pip install enlumen
lumen run

Your browser opens at:

http://localhost:3000

First time? The setup wizard walks you through three paths:

  1. Quick start — default personality + free OpenRouter model.
  2. Choose a personality — browse the catalog and pick one that matches your use case.
  3. Bring your own module — upload a custom module.yaml to configure Lumen your way.

After that, Lumen awakens and you land directly in the chat. The sidebar gives you:

Chats / Modules / Memory / Settings

No separate admin panel. No dev jargon.

From source

git clone https://github.com/gabogabucho/lumen-agent.git
cd lumen-agent
pip install -e .[dev]
lumen run

lumen run vs lumen server

Lumen has two startup modes depending on where it runs.

lumen run

Use this for:

  • local development
  • personal use on your own computer
  • quick UI and module testing

lumen run

lumen server

Use this for:

  • a VPS
  • a remote server
  • a home server / always-on machine
  • any installation that should stay available over the network

lumen server --host 0.0.0.0 --port 3000

Behavior:

  • starts Lumen as a hosted web service
  • exposes onboarding through IP/domain + port
  • first setup is protected with a one-time setup token
  • onboarding creates the owner password/PIN
  • future access to the dashboard requires login

Rule of thumb:

lumen run    = local/personal development
lumen server = hosted remote service

Docker / Docker Compose

Lumen can run as a containerized service using Docker Compose.

This is useful for:

  • VPS deployments
  • Dokploy
  • Coolify
  • CapRover
  • Portainer
  • internal infrastructure
  • long-running agent services

Dockerfile

Create a Dockerfile in the repository root:

FROM python:3.12-slim

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

WORKDIR /app

# Minimal runtime dependencies for TLS/certs and basic health tooling.
RUN apt-get update \
  && apt-get install -y --no-install-recommends ca-certificates curl \
  && rm -rf /var/lib/apt/lists/*

COPY pyproject.toml README.md /app/
COPY lumen /app/lumen

RUN pip install --no-cache-dir .

EXPOSE 3000

# Persist Lumen instance data in /root/.lumen using a Docker volume.
CMD ["lumen", "server", "--host", "0.0.0.0", "--port", "3000"]

Basic docker-compose.yml

Use this for a simple local/VPS Docker deployment:

services:
  lumen:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - lumen_data:/root/.lumen
    restart: unless-stopped

volumes:
  lumen_data:

Run:

docker compose up -d --build

Check logs:

docker compose logs -f lumen

Health check:

curl http://localhost:3000/health

Expected response:

{"ok": true}

The response may also include the version, ready module count, model, and provider status.
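
For deployment automation, a small script can wait for the container to report healthy before continuing. A minimal sketch using only the Python standard library (the URL and timeout values are adjustable assumptions):

import json
import time
import urllib.request

def wait_for_health(url="http://localhost:3000/health", timeout=120):
    # Poll /health until it returns {"ok": true} or the deadline passes.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = json.load(resp)
                if body.get("ok"):
                    return body
        except OSError:
            pass  # container still starting; retry
        time.sleep(2)
    raise TimeoutError(f"{url} did not become healthy within {timeout}s")

print(wait_for_health())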


First Docker setup

On a fresh Docker volume, Lumen may need initial model/provider configuration before the server can fully start.

If the container keeps restarting and logs show something like:

¿Qué modelo querés usar?
1. DeepSeek
2. OpenAI GPT-4o-mini
3. Anthropic Claude
4. Ollama
5. OpenRouter
Aborted.

it means Lumen is waiting for its first interactive setup (the prompt asks which model you want to use), but Docker cannot answer interactive prompts in detached mode.

Fix: run one interactive setup against the same volume

  1. Find the container name:
docker ps -a --format "table {{.Names}}\t{{.Status}}" | grep lumen

Example:

my-project-lumen-1   Restarting (1) 30 seconds ago

  2. Save the container, image, and volume names:
C=my-project-lumen-1

IMG=$(docker inspect -f '{{.Config.Image}}' "$C")
VOL=$(docker inspect -f '{{range .Mounts}}{{if eq .Destination "/root/.lumen"}}{{.Name}}{{end}}{{end}}' "$C")

echo "IMG=$IMG"
echo "VOL=$VOL"

  3. Stop the restart loop:
docker update --restart=no "$C"
docker stop "$C"

  4. Run setup interactively using the same volume:
docker run --rm -it \
  -v "$VOL":/root/.lumen \
  "$IMG" \
  lumen server --host 0.0.0.0 --port 3000

  5. Choose your provider/model.

Example choices:

2 = OpenAI GPT-4o-mini
5 = OpenRouter

  6. When the configuration is saved and you see the server setup token, stop the temporary process with:
CTRL + C

The config is now stored in the Docker volume.

  7. Re-enable the restart policy and start again:
docker update --restart=unless-stopped "$C"
docker start "$C"

  8. Verify:
curl http://localhost:3000/health

Dokploy deployment

Use Compose, not Application, when deploying Lumen to Dokploy.

Recommended Dokploy settings:

Create Service → Compose
Repository: your-lumen-agent-repo
Branch: main
Compose Path: ./docker-compose.yml

Domain settings:

Domain: your-domain.example.com
Service Name: lumen
Container Port: 3000
Internal Path: /
Strip Path: OFF
HTTPS: ON

Dokploy-compatible docker-compose.yml

Use this when Lumen must be reachable through Dokploy/Traefik and also be able to talk to other internal projects later.

services:
  lumen:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - lumen_data:/root/.lumen
    networks:
      - neuron-internal
      - dokploy-network
    restart: unless-stopped
    labels:
      - "traefik.http.middlewares.lumen-ratelimit.ratelimit.average=60"
      - "traefik.http.middlewares.lumen-ratelimit.ratelimit.period=1m"
      - "traefik.http.middlewares.lumen-ratelimit.ratelimit.burst=30"

      - "traefik.http.middlewares.lumen-secure-headers.headers.stsSeconds=31536000"
      - "traefik.http.middlewares.lumen-secure-headers.headers.stsIncludeSubdomains=true"
      - "traefik.http.middlewares.lumen-secure-headers.headers.stsPreload=true"
      - "traefik.http.middlewares.lumen-secure-headers.headers.contentTypeNosniff=true"
      - "traefik.http.middlewares.lumen-secure-headers.headers.browserXssFilter=true"
      - "traefik.http.middlewares.lumen-secure-headers.headers.referrerPolicy=no-referrer-when-downgrade"

volumes:
  lumen_data:

networks:
  neuron-internal:
    external: true
  dokploy-network:
    external: true

Create the shared internal network once on the server:

docker network create neuron-internal

If the network already exists, Docker will print an error. That is safe to ignore.

Why no ports:?

For Dokploy, do not expose host ports unless you need them for temporary debugging.

Correct for production:

# no ports needed

Dokploy/Traefik routes traffic through dokploy-network to the internal container port:

Container Port: 3000

This avoids host port conflicts when several projects use port 3000 internally.

If you need temporary direct access, you can add:

ports:
  - "3110:3000"

Then remove it after verifying the domain works.

Dokploy middlewares

If you added the labels above, add these middleware references in the Dokploy domain panel:

lumen-ratelimit@docker

and:

lumen-secure-headers@docker

Recommended starting values:

60 requests / minute
burst 30

This is enough for normal dashboard/API use and helps protect setup, login, /health and /api/chat.

After deployment, verify:

curl -I https://your-domain.example.com/health

Expected:

HTTP/2 200

or:

{"ok": true}

Security model

In server mode:

  • setup token is generated once and shown only in logs/console
  • setup token is used to create the owner password/PIN
  • after setup, the token is deleted
  • owner PIN is hashed using PBKDF2-SHA256
  • session cookies are signed, httponly, samesite: lax
  • WebSocket access requires the same authenticated owner cookie
  • REST API endpoints require Bearer authentication where applicable

Do not expose the setup token publicly.

Do not commit secrets or generated API keys.

Use HTTPS in production.
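
For reference, PBKDF2-SHA256 hashing follows the standard-library pattern below. This is an illustrative sketch; the salt size and iteration count are assumptions, not Lumen's exact parameters.

import hashlib
import hmac
import os

def hash_pin(pin: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    # Derive a PBKDF2-SHA256 digest; store both the salt and the digest.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations)
    return salt, digest

def verify_pin(pin: str, salt: bytes, digest: bytes, iterations: int = 600_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison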


REST API

Lumen exposes a REST API for external integrations.

Health check

No auth required:

curl http://localhost:3000/health

Example response:

{
  "ok": true,
  "version": "1.2.0",
  "modules_ready": 1,
  "model": "openrouter/openai/gpt-oss-120b:free",
  "provider_status": "healthy"
}

Chat

Bearer auth required:

curl -X POST http://localhost:3000/api/chat \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{"message": "hello", "session_id": "optional-session-id"}'

Example response:

{
  "response": "Hello! How can I help?",
  "session_id": "optional-session-id"
}
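
The same call from Python, as a minimal sketch using requests (the key and URL are placeholders; generate a real key with lumen api-key generate, described under "API key management" below):

import requests

LUMEN_URL = "http://localhost:3000"
API_KEY = "your-api-key"  # placeholder

def chat(message: str, session_id: str | None = None) -> dict:
    # POST one message to /api/chat and return the parsed JSON reply.
    payload = {"message": message}
    if session_id:
        payload["session_id"] = session_id
    resp = requests.post(f"{LUMEN_URL}/api/chat",
                         headers={"Authorization": f"Bearer {API_KEY}"},
                         json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()

print(chat("hello")["response"])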

Reload runtime

Bearer auth required:

curl -X POST http://localhost:3000/api/reload \
  -H "Authorization: Bearer your-api-key"

Example response:

{"status": "reloaded", "modules": 5}

Auth sources are checked in this order:

LUMEN_API_KEY env var → config.api.rest_key → api_keys.yaml hashed keys

API key management

Generate a new API key:

lumen api-key generate --label "my app"

List keys:

lumen api-key list

Revoke a key by prefix:

lumen api-key revoke <prefix>

Inside Docker:

docker exec -it <lumen-container> lumen api-key generate --label "n8n"

Use the generated key in n8n as:

Authorization: Bearer <your-api-key>

The full key is shown only once.


n8n integration pattern

Lumen works well as an agent runtime behind n8n.

Recommended flow:

Webhook / Trigger
↓
Neuron Guard checks input safety
↓
Honcho returns conversational memory
↓
Qdrant returns relevant security documents
↓
Redis caches short-lived expensive results
↓
n8n sends enriched message to Lumen /api/chat
↓
n8n stores useful result back into Honcho

Lumen should stay focused on:

agent reasoning
personality
modules
skills
chat execution
REST API responses

Use external services for shared infrastructure:

Honcho = long-term conversational memory
Qdrant = large semantic document search
Redis  = cache / temporary state / rate limits
n8n    = orchestration
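
Outside of n8n, the same enrichment pattern looks roughly like this in Python. This is a sketch under heavy assumptions: the guard and memory endpoints, payload shapes, and hostnames are all illustrative; only the Lumen /api/chat call follows the REST API documented above.

import requests

def handle_incoming(user_msg: str, session_id: str) -> str:
    # 1. Input safety check (hypothetical guard endpoint and payload).
    guard = requests.post("http://guard:8080/check",
                          json={"text": user_msg}, timeout=10).json()
    if not guard.get("safe", True):
        return "Message rejected by input guard."

    # 2. Fetch conversational memory (hypothetical memory-service call).
    memory = requests.post("http://honcho:8000/search",
                           json={"query": user_msg, "session": session_id},
                           timeout=10).json().get("results", [])

    # 3. Send the enriched message to Lumen (documented /api/chat endpoint).
    reply = requests.post("http://lumen:3000/api/chat",
                          headers={"Authorization": "Bearer your-api-key"},
                          json={"message": f"Context: {memory}\n\nUser: {user_msg}",
                                "session_id": session_id},
                          timeout=60).json()
    return reply["response"]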

Communication Channels

Lumen ships with installable communication modules. All channels follow the same pattern: they write incoming messages to the unified inbox, and the brain processes them through a single identity. Install from the marketplace or configure via chat.

Module     Protocol           Dependencies    Notes
Telegram   Bot API polling    None            Token from BotFather
WhatsApp   Baileys bridge     Node.js + npm   Personal accounts, QR pairing
Discord    REST API polling   None            Bot token + channel ID
Email      IMAP/SMTP          None            Gmail, Outlook, Yahoo, app-specific password

How they work:

User → Channel module → inbox.jsonl → Gateway watcher → Unified Inbox → Brain → Adapter → Channel module → User

All channels share one brain, one memory, one identity.
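
The write side of a channel module can be pictured as an append-only JSONL write that the Gateway watcher then picks up. The path and field names below are illustrative assumptions, not Lumen's actual inbox schema:

import json
import time
from pathlib import Path

INBOX = Path.home() / ".lumen" / "inbox.jsonl"  # assumed instance-data location

def deliver(channel: str, sender: str, text: str) -> None:
    # Append one incoming message; the Gateway watcher takes it from here.
    INBOX.parent.mkdir(parents=True, exist_ok=True)
    record = {
        "channel": channel,  # e.g. "telegram"
        "sender": sender,
        "text": text,
        "received_at": time.time(),
    }
    with INBOX.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

deliver("telegram", "@alice", "hello from Telegram")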

Integration Modules

The catalog includes integration modules that connect Lumen with external services. Install like any other module.

Module     What it does                                                             Dependencies       Notes
Paperclip  Multi-agent orchestration — receive tasks, report status, heartbeat     Paperclip server   Registered agent in a Paperclip company
Honcho     Persistent cross-session memory — semantic search, recall, conclusions  honcho-ai SDK      Cloud (honcho.dev) or self-hosted

Paperclip

Connects Lumen as a registered agent in a Paperclip company. Receives tasks from the CEO, processes them through the brain, and reports status back.

lumen module install paperclip
lumen config set paperclip.url https://paperclip.example.com
lumen config set paperclip.api_key sk-paperclip-xxxxx --secret

Endpoints:

POST /paperclip/task      — Receive a task from Paperclip
GET  /paperclip/report    — CEO reads Lumen's current state
POST /paperclip/heartbeat — Keep connection alive, receive directives
POST /paperclip/resume    — Resume an interrupted task

Honcho Persistent Memory

Integrates with Honcho for cross-session persistent memory. Lumen remembers facts, learns from past interactions, and provides personalized responses over time. Works with both Honcho cloud and self-hosted instances.

lumen module install honcho
lumen config set honcho.workspace_id ws_abc123
lumen config set honcho.api_key hk_live_xxxxxx --secret

Endpoints:

POST /honcho/search   — Semantic search across memory
GET  /honcho/context  — Retrieve full session context
POST /honcho/conclude — Persist learned facts and conclusions
POST /honcho/memory   — Store arbitrary memories
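
Calling the memory endpoints from outside follows the same Bearer pattern as the core REST API. A minimal sketch with requests; the payload shape for /honcho/search is an assumption:

import requests

resp = requests.post(
    "http://localhost:3000/honcho/search",
    headers={"Authorization": "Bearer your-api-key"},
    json={"query": "what did we decide about pricing?"},  # assumed payload shape
    timeout=30,
)
resp.raise_for_status()
print(resp.json())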

CLI Reference

Core commands

lumen run [--port 3000] [--instance <name>] [--data-dir <path>]
lumen server [--host 0.0.0.0] [--port 3000] [--instance <name>]
lumen status [--instance <name>]
lumen reload [--instance <name>]
lumen doctor

Module management

lumen module install github:owner/repo
lumen module install https://github.com/owner/repo
lumen module install ./my-kit
lumen module install <catalog-name>

Configuration

lumen config set <module>.<key> <value> [--instance <name>]
lumen config get <module>.<key> [--instance <name>]
lumen config delete <module>.<key> [--instance <name>]
lumen config list <module> [--instance <name>]

Instance isolation

Run multiple independent Lumen instances on the same machine:

lumen run --instance work
lumen run --instance personal
lumen run --data-dir /tmp/test

Each instance has its own:

config.yaml
memory.db
api_keys.yaml
module secrets

Architecture

Lumen has five layers with clear boundaries:

CONSCIOUSNESS  — Who I am, immutable soul / BIOS
PERSONALITY    — Who I am in this context
BODY           — What I have, discovered at startup
BRAIN          — How I think, context assembler
MEMORY         — What happened before, SQLite + FTS5

Each layer has one role. No layer knows what does not concern it.

The Brain

The brain is not the intelligence. The LLM is the intelligence.

The brain assembles context:

consciousness
+ personality
+ body/capabilities
+ flow
+ memory
+ current message

Then the LLM decides.

User message
→ Brain assembles context
→ LLM decides
→ Tool/connector loop if needed
→ Final response
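
In code terms, the assembly step can be pictured like this. A conceptual sketch only: the function names and stub contents are illustrative, not Lumen's internal API.

def load_consciousness() -> str:
    return "You are Lumen, a modular agent."  # immutable identity (stub)

def load_personality() -> str:
    return "Tone: concise and friendly."  # active personality (stub)

def describe_body() -> str:
    return "Connectors available: task, note, memory, terminal."  # discovered at startup (stub)

def recall_memory(message: str) -> str:
    return ""  # the real runtime queries SQLite + FTS5 here (stub)

def assemble_context(message: str) -> str:
    # The brain's whole job: layer the context, then hand it to the LLM.
    return "\n\n".join(filter(None, [
        load_consciousness(),
        load_personality(),
        describe_body(),
        recall_memory(message),
    ]))

print(assemble_context("hello"))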

Skills are instructions, not code

Skills are markdown files the LLM reads on demand.

They teach:

judgment
workflow
usage patterns
decision rules

They do not execute by themselves.

Connectors and MCP

Connector → action → result

Built-in handlers include:

task
note
memory
terminal

Anything else can plug in via MCP servers or modules.


Productive kits

Lumen supports installable kits and modules.

Example:

name: my-kit
tags: [x-lumen, personality]
personality: personality.yaml
skills:
  - skills/ecommerce-ops.md
  - skills/pricing-strategy.md
x-lumen:
  requires:
    terminal:
      allowlist: [python3, git]
    env:
      - SOME_API_TOKEN
      - SOME_STORE_ID
  channel:
    type: web-app
    auth: rest-api
    cors: [https://shop.example.com]

This means:

  • local kit development works with lumen module install ./my-kit
  • module-declared terminal allowlists merge into instance config
  • missing environment variables surface as blockers
  • personality modules can auto-set active_personality
  • skills declared inside modules auto-register in the Registry
  • external channels declared by modules register as capabilities

Structured output

Lumen supports rich UI conventions using <agent-ui> tags in normal text responses.

Example:

Here are your options:

<agent-ui>{"version":"v1","cards":[{"title":"Option A","description":"Basic plan"}]}</agent-ui>

Lumen passes this through as plain text. The frontend decides how to render it.
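
On the consuming side, any client can separate prose from payloads with a simple scan. A minimal sketch in Python, matching the tag convention shown above:

import json
import re

AGENT_UI = re.compile(r"<agent-ui>(.*?)</agent-ui>", re.DOTALL)

def split_response(text: str):
    # Separate plain prose from embedded <agent-ui> JSON payloads.
    payloads = [json.loads(m) for m in AGENT_UI.findall(text)]
    prose = AGENT_UI.sub("", text).strip()
    return prose, payloads

prose, ui = split_response(
    'Here are your options:\n\n'
    '<agent-ui>{"version":"v1","cards":[{"title":"Option A","description":"Basic plan"}]}</agent-ui>'
)
print(prose)                       # Here are your options:
print(ui[0]["cards"][0]["title"])  # Option A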


Packaging model

Artifact  Contains                                            Scope
Kit       Personality, flows, modules, skills, assets, skins  Bigger package that changes Lumen as a whole
Module    One installable capability or integration           Individual functionality
Skill     Markdown instructions only                          Mental model, not executable code

Plain language:

Kit    = changes Lumen as a whole
Module = gives Lumen new hands
Skill  = teaches Lumen how to think/use things
MCP    = implementation detail surfaced as a module

Module manifests

Lumen's native module manifest is module.yaml.

  • module.yaml is preferred for all new modules
  • manifest.yaml is supported as a legacy fallback
  • x-lumen is an optional advisory namespace
  • personality modules are detected by the personality tag, not by type

Example:

name: docs-helper
provides: [docs.answer]
requires:
  skills: [docs-helper]
x-lumen:
  requires:
    advisory:
      mcps: [docs-mcp]

If you are authoring a new module, start from:

lumen/modules/_template/module.yaml

Supported Models

Lumen uses LiteLLM as its model abstraction layer. Any provider supported by LiteLLM can work.

Capability tiers

Tier    Capability level                       Examples
tier-1  Basic conversation                     DeepSeek, Ollama/Llama 3, small local models
tier-2  Reasoning and tool use                 GPT-4o-mini, Claude 3.5 Sonnet, Gemini 1.5 Pro
tier-3  Advanced reasoning and orchestration   Claude Sonnet 4, GPT-4o, GPT-4.1, o3/o4, Gemini 2.5 Pro

Providers

Provider           How to connect                       Notes
OpenRouter         OAuth / API key depending on setup   Free-tier models available
DeepSeek           API key                              deepseek-chat
OpenAI             API key                              GPT-4o-mini, GPT-4o, GPT-4.1, o3/o4
Anthropic          API key                              Claude models
Google             API key                              Gemini models
Ollama             Local                                No API key needed
OpenAI-compatible  Custom api_base + api_key            LM Studio, vLLM, local servers

Local models

Example config:

model: openai/your-model-name
api_base: http://localhost:11434/v1
api_key: "fake"
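
Because Lumen routes models through LiteLLM, that config maps onto a completion call like the one below. A sketch assuming a local OpenAI-compatible server (for example, Ollama's /v1 endpoint):

from litellm import completion

# Mirrors the config above: OpenAI-compatible endpoint, dummy key for local servers.
response = completion(
    model="openai/your-model-name",
    api_base="http://localhost:11434/v1",
    api_key="fake",
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)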

Troubleshooting

/health returns 404

If the public domain returns 404, first verify the container is actually running.

docker ps -a --format "table {{.Names}}\t{{.Status}}" | grep lumen
docker logs --tail=100 <lumen-container>

If logs show the interactive model prompt followed by Aborted, run the first Docker setup flow described above.

Container keeps restarting

Disable restart temporarily:

docker update --restart=no <lumen-container>
docker stop <lumen-container>

Then run interactive setup against the same volume.

Dokploy domain returns 502

Usually one of these:

container is not running
wrong service selected in domain config
wrong container port
missing dokploy-network
app is listening on localhost instead of 0.0.0.0

Correct domain config:

Service: lumen
Container Port: 3000
Internal Path: /
HTTPS: ON

Correct server command:

lumen server --host 0.0.0.0 --port 3000

Services cannot reach each other

For Dokploy, remember:

dokploy-network = public routing through Traefik
default = services inside same compose
shared external network = communication across projects

Example shared network:

docker network create neuron-internal

Compose:

networks:
  neuron-internal:
    external: true
  dokploy-network:
    external: true

Port conflicts

Many containers can listen on 3000 internally.

Conflicts only happen when several services publish the same host port:

ports:
  - "3000:3000" # can conflict

With Dokploy, prefer no ports: and configure:

Container Port: 3000

Rate limit testing

If you use the Traefik rate-limit middleware:

for i in {1..120}; do
  curl -s -o /dev/null -w "%{http_code}\n" https://your-domain.example.com/health
done

You should eventually see 429 when the limit is exceeded.


Development

Run test suite:

pytest -q

Editable install:

pip install -e .[dev]

Start local development server:

lumen run

Start server mode locally:

lumen server --host 0.0.0.0 --port 3000

Manifesto

Every architectural decision must pass one test: does it make Lumen simpler AND more extensible? If it only adds capability without simplicity, it belongs in a module, not in the core.

Lumen does not compete on capability. Lumen competes on accessibility.

Hermes:   "I can do EVERYTHING."             → But who configures me?
OpenClaw: "I will be able to do everything." → Someday.
Lumen:    "I am. And I can grow."            → Install, use, extend.

Lumen is not a tool. A tool is configured, used, and put away. Lumen is an agent waiting to awaken.

Read the full manifesto in:

MANIFESTO.md

Project Structure

lumen/
├── core/
│   ├── brain.py
│   ├── consciousness.py
│   ├── registry.py
│   ├── events.py
│   ├── awareness.py
│   ├── watchers.py
│   ├── discovery.py
│   ├── personality.py
│   ├── memory.py
│   ├── session.py
│   ├── connectors.py
│   ├── handlers.py
│   ├── installer.py
│   ├── runtime.py
│   ├── paths.py
│   ├── api_keys.py
│   ├── secrets_store.py
│   ├── module_runtime.py
│   └── mcp.py
├── channels/
│   ├── web.py
│   └── templates/
├── locales/
├── catalog/
├── modules/
├── connectors/
├── skills/
└── cli/main.py

Roadmap

  • Core brain + consciousness + memory
  • Web dashboard
  • Three-path setup wizard + awakening animation
  • Bilingual English/Spanish
  • Self-awareness
  • Personality runtime
  • Module marketplace
  • MCP client adapter
  • OpenRouter OAuth + free-tier curation
  • Channel modules
  • Terminal connector
  • REST API
  • Health check endpoint
  • skills.sh marketplace integration
  • Instance isolation
  • Config CLI
  • Module lifecycle hooks
  • Hot reload
  • Remote module install
  • API key management
  • Comprehensive test suite
  • Local module install
  • Productive kit requirements
  • Auto personality activation
  • Module-declared skill discovery
  • External channels declared by modules
  • Personality UI tags / surfaces
  • Model-robust execution
  • Tool suggestion engine
  • HTTP-safe dashboard
  • Universal fallback tool parser
  • Docker support
  • Workspace mode (multi-user auth, team governance, scoped reload)
  • Public module registry / discovery
  • Full hosted documentation

License

MIT — Free and open source, forever.


Built by Gabo Urrutia
