AI Worker Runtime — run AI assistants anywhere, no Docker required
rtlinux — AI Worker Runtime
Run AI assistants anywhere. No cloud. No Docker. Just Python.
`pip install rtlinux` → set your API key → your assistant is running.
Quick Start
```shell
pip install rtlinux
export ANTHROPIC_API_KEY=sk-ant-...

# Start a general-purpose assistant
rtlinux start assistant

# Chat with it
rtlinux chat assistant

# Or open mobile/browser UI
rtlinux webchat --worker assistant
```
That's it. Your assistant is now running in the background, listening on a Unix socket, ready to answer questions and execute commands on your machine.
What is this?
rtlinux is a lightweight runtime for AI assistants. Each assistant is a Python process that:
- Listens on a Unix socket for messages
- Calls an LLM (Claude, GPT-4, Codex, or local Ollama)
- Executes bash commands via tool-calling (with a dangerous-command blocklist)
- Runs scheduled background tasks (cron-style)
- Serves a mobile-friendly web chat UI
It runs on your laptop, your VPS, your Raspberry Pi — anywhere with Python 3.8+.
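The dangerous-command blocklist can be pictured as a pattern check run before any bash tool call is executed. The patterns and function below are purely illustrative, not rtlinux's actual blocklist:

```python
import re

# Illustrative dangerous-command patterns (NOT rtlinux's real blocklist).
BLOCKLIST = [
    r"\brm\s+-rf\s+/",            # recursive delete from the filesystem root
    r"\bmkfs\b",                  # reformatting a filesystem
    r"\bdd\s+if=.*of=/dev/",      # raw writes to block devices
    r":\(\)\s*{\s*:\|:&\s*};:",   # classic fork bomb
]

def is_blocked(command: str) -> bool:
    """Return True if the command matches any dangerous pattern."""
    return any(re.search(p, command) for p in BLOCKLIST)

print(is_blocked("rm -rf /"))  # True
print(is_blocked("df -h"))     # False
```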
Install
```shell
pip install rtlinux
```
Requirements: Python 3.8+, an API key for your LLM of choice.
Supported backends:
- `claude-*` models — needs `ANTHROPIC_API_KEY`
- `gpt-*` / `o1-*` models — needs `OPENAI_API_KEY`
- `claude-code` — uses Claude Code CLI (`claude -p`)
- `codex` — uses OpenAI Codex CLI
- `cursor/...` — uses Cursor API (`CURSOR_API_KEY`)
- `ollama/...` — local models via Ollama (no key needed)
Built-in Templates
| Template | Description | Model |
|---|---|---|
| `assistant` | General-purpose with bash access | claude-haiku-4-5 |
| `system-monitor` | Server health monitor + alerts | claude-sonnet-4-6 |
| `news` | Hourly news digest via TTS | claude-haiku-4-5 |
| `research` | Web research + summarization | claude-sonnet-4-6 |
| `email` | Email monitor and responder | claude-sonnet-4-6 |
| `code-reviewer` | Git diff reviewer | claude-sonnet-4-6 |
| `mac-assistant` | macOS: Calendar, Mail, AppleScript | claude-haiku-4-5 |
List available templates:

```shell
rtlinux templates list
```
Usage
Start a worker
```shell
# From a built-in template
rtlinux start assistant

# From a directory with template.yaml
rtlinux start ./my-bot/

# With extra env vars
rtlinux start assistant --env ANTHROPIC_API_KEY=sk-ant-... --env CUSTOM_VAR=value
```
Talk to it
```shell
# Interactive chat (REPL)
rtlinux chat assistant

# One-shot question
rtlinux ask assistant "what's using the most memory right now?"

# Web/mobile UI (http://localhost:8080)
rtlinux webchat --worker assistant

# Expose publicly via ngrok
rtlinux webchat --worker assistant --ngrok
```
Manage workers
```shell
# List running workers
rtlinux ps

# Stop a worker
rtlinux stop assistant

# Restart
rtlinux restart assistant
```
Create Your Own Assistant
```shell
# Scaffold from template
rtlinux init my-bot --template assistant

# Edit the config
$EDITOR my-bot/template.yaml

# Run it
rtlinux start my-bot
```
`template.yaml` format:

```yaml
name: my-bot
description: "My custom assistant"
model: claude-haiku-4-5  # or gpt-4o, ollama/llama3, codex, cursor/...

system_prompt: |
  You are a specialized assistant. You have bash access.
  Your job: [describe what it does]

packages:  # extra pip packages to install
  - anthropic
  - feedparser

schedule: "every 1h"  # run schedule_task periodically
schedule_task: |
  [what to do on the schedule]

on_message: "Help with the user's task."  # instruction for chat messages

env:  # required environment variables
  - ANTHROPIC_API_KEY
```
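Before starting a worker, it can help to check that everything in the `env:` list is actually set. This is a generic stdlib sketch, not an rtlinux API:

```python
import os

def missing_env(required):
    """Return the names from `required` that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

# Mirrors the env: list in template.yaml above.
gaps = missing_env(["ANTHROPIC_API_KEY"])
if gaps:
    print("set these before `rtlinux start`:", ", ".join(gaps))
else:
    print("all required env vars set")
```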
Python API
```python
import rtlinux

# Start a worker
result = rtlinux.worker_up("assistant")
print(result)
# {'status': 'running', 'name': 'assistant', 'pid': 12345, ...}

# Ask a question
reply = rtlinux.ask_worker("assistant", "how much disk space is free?")
print(reply)

# Interactive chat
rtlinux.chat_worker("assistant")

# Stop a worker
rtlinux.worker_down("assistant")

# List running workers
result = rtlinux.worker_ps()
for w in result["workers"]:
    print(w["name"], w["status"])
```
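The `worker_ps()` result is easy to turn into a quick status listing. This helper assumes only the `{'workers': [{'name': ..., 'status': ...}]}` shape shown above:

```python
def format_workers(ps_result: dict) -> str:
    """Render a worker_ps()-style dict as aligned name/status lines."""
    workers = ps_result.get("workers", [])
    if not workers:
        return "(no workers running)"
    width = max(len(w["name"]) for w in workers)
    return "\n".join(f"{w['name']:<{width}}  {w['status']}" for w in workers)

# Example with the documented shape:
sample = {"workers": [{"name": "assistant", "status": "running"},
                      {"name": "news", "status": "stopped"}]}
print(format_workers(sample))
```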
macOS: Persistent Daemon (survives reboots)
```shell
# Install as launchd service (auto-starts on login)
rtlinux mac-up ./my-bot/template.yaml

# List launchd workers
rtlinux mac-ps

# Stop
rtlinux mac-down my-bot
```
Multi-Machine (MeshPOP Fleet)
rtlinux can also run as part of a MeshPOP fleet — each worker gets a VPN IP via WireGuard and appears in `mpop servers`. This is optional and not required for standalone use.
See mpop.dev for fleet documentation.