# Angie — Personal AI Assistant
Your 24/7 personal AI assistant. Always on. Always working.
Angie is a self-hosted, event-driven AI assistant that runs as a persistent background daemon on your infrastructure. Angie connects to your real-world tools — email, calendar, smart home devices, music, GitHub, Slack, Discord, and more — and acts on your behalf through a fleet of specialized agents powered by GitHub Copilot's LLM.
Unlike chat-only AI tools, Angie is proactive and persistent: Angie wakes up on a schedule, monitors your channels, executes multi-step workflows, and reports back without being asked. Angie remembers context about you through a layered prompt hierarchy, so every interaction is personalized to your preferences, communication style, and routines.
## What Angie can do
- Real-time chat interface — Chat directly with Angie and your fleet of AI agents from the terminal or the web UI.
- Developer workflows — Query GitHub issues, open PRs, summarize repository activity, and turn issues into pull requests autonomously.
- Web browsing — Browse URLs, take screenshots, extract content, and summarize web pages.
- Weather — Get current conditions, forecasts, and severe weather alerts.
- Unified inbox — (planned) Connect Slack, Discord, iMessage, and email in one place. Angie routes incoming messages to the right agent automatically.
- Scheduled tasks — Set cron jobs that run agents on a schedule ("every weekday at 8am, summarize my email and post to Slack").
- Multi-step workflows — Chain agents together: check calendar → summarize emails → control smart lights → send morning briefing.
- Smart home control — (planned) Adjust Philips Hue lighting and Home Assistant automations via natural language.
- Media control — (planned) Control Spotify playback, switch playlists, and manage your queue.
- Network management — (planned) Inspect your UniFi network, connected devices, and bandwidth stats.
- Personalized context — Onboarding builds a private profile (personality, communication style, preferences) that shapes every LLM interaction.
- REST API + Web UI — Full FastAPI backend with a Next.js dashboard for managing agents, teams, workflows, tasks, and events in real time.
## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────┐
│                            Channels                             │
│       Slack · Discord · iMessage · Email · Angie UI Chat        │
└───────────────────────────┬─────────────────────────────────────┘
                            │  Events: infer task from user input
                            ▼
┌─────────────────────────────────────────────────────────────────┐
│                        Angie Daemon Loop                        │
│    EventRouter → TaskDispatcher → Celery Queue → Worker         │
│         ↑                              ↑                        │
│    CronEngine                    AgentRegistry                  │
└─────────────────────────────────────────────────────────────────┘
                            │  Tasks: units of work, assigned to agents/teams
                            ▼
┌─────────────────────────────────────────────────────────────────┐
│                           Agent Fleet                           │
│ System: cron · task-manager · workflow-manager · event-manager  │
│ Dev: github · software-dev                                      │
│ Productivity: web          Lifestyle: weather                   │
└─────────────────────────────────────────────────────────────────┘
                            │  Agents & Teams: single agents or teams of agents
                            ▼
┌───────────┐   ┌───────────────────┐   ┌───────────────────────┐
│   MySQL   │-->│   FastAPI (API)   │<->│      Redis Cache      │
│    DB     │<--│    /api/v1/*      │<->│     Celery broker     │
└───────────┘   └───────────────────┘   └───────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────┐
│                   Next.js Web UI (frontend/)                    │
│     Dashboard · Agents · Teams · Workflows · Tasks · Events     │
│                    Real-time Chat · Settings                    │
└─────────────────────────────────────────────────────────────────┘
```
## Key Concepts
| Concept | Description |
|---|---|
| Event | Any trigger: user message, cron tick, webhook, task completion |
| Task | A unit of work dispatched from an event to an agent |
| Agent | A pydantic-ai powered worker that can handle specific task types |
| Team | A named group of agents that collaborate on related tasks |
| Workflow | An ordered sequence of steps across agents/teams for a goal |
| Prompt Hierarchy | SYSTEM → ANGIE → AGENT/USER — layered context fed to every LLM call |
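The Event → Task relationship in the table can be sketched minimally (field names here are illustrative, not Angie's actual ORM models):

```python
# Illustrative sketch of the Event -> Task flow; not Angie's real models.
from dataclasses import dataclass


@dataclass
class Event:
    source: str   # trigger origin, e.g. "slack", "cron", "webhook"
    payload: str  # raw trigger content


@dataclass
class Task:
    event: Event     # the trigger that produced this unit of work
    agent_slug: str  # agent chosen to handle it


def dispatch(event: Event, agent_slug: str) -> Task:
    """Turn a trigger into a unit of work assigned to an agent."""
    return Task(event=event, agent_slug=agent_slug)
```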
## Tech Stack
| Layer | Technology |
|---|---|
| Language | Python 3.12, uv |
| AI framework | pydantic-ai (GitHub Models, OpenAI, Anthropic) |
| API | FastAPI + SQLAlchemy 2.0 async |
| Database | MySQL 8 (aiomysql driver) |
| Cache / Queue | Redis + Celery |
| Scheduler | APScheduler |
| Channels | Slack SDK · discord.py · BlueBubbles REST · SMTP/IMAP |
| CLI | Click |
| Frontend | Next.js 15 · TypeScript · Tailwind CSS v3 |
| Build | PyInstaller (standalone angie binary) |
| CI/CD | GitHub Actions |
| Dev | Docker Compose, Ruff, pytest |
## Quick Start
### Prerequisites
- Python 3.12+, uv
- Docker Desktop (for MySQL + Redis)
- Node.js 20+ (for frontend)
### 1. Clone and install

```bash
git clone https://github.com/your-org/angie.git
cd angie
make install          # uv sync --dev --all-extras
cp .env.example .env  # fill in SECRET_KEY, DB_PASSWORD, GITHUB_TOKEN
```
### 2. Start backing services

```bash
make docker-up  # starts MySQL + Redis containers
make migrate    # runs Alembic migrations
```
### 3. Run Angie

```bash
# Start the Angie daemon (event loop only)
angie daemon

# Or start services individually:
uvicorn angie.api.app:create_app --factory --reload  # API on :8000
celery -A angie.queue.celery_app worker -l info      # Worker

# Frontend dev server:
cd frontend && npm run dev  # UI on :3000
```
### 4. Onboarding

On first run, Angie needs to learn about you:

```bash
angie setup
```

This asks a series of questions and generates personalized `prompts/user/<id>/` markdown files that become part of every LLM interaction.
## CLI Reference

```bash
angie --help

# Daemon
angie daemon                          # Start background daemon (event loop)

# One-shot queries
angie ask "What's on my calendar today?"

# Interactive chat
angie chat "Hello Angie"              # Chat from the terminal
angie chat --agent github "List PRs"  # Route directly to a specific agent

# Channel configuration
angie config slack                    # Set Slack bot token
angie config discord                  # Set Discord bot token
angie config imessage                 # Set BlueBubbles URL
angie config email                    # Set SMTP/IMAP credentials
angie config channels                 # Show status of all configured channels

# Unified configuration wizard
angie configure keys slack            # Set API keys for a service
angie configure list                  # Show all configured keys grouped by service
angie configure model                 # Select the LLM model
angie configure seed                  # Seed the database with demo data

# Onboarding
angie setup                           # Interactive first-run onboarding

# Status
angie status                          # Show active tasks, registered agents
```
## Agent Fleet

Agents are pydantic-ai powered workers. Each declares:

- `slug` — unique identifier used for routing
- `capabilities` — keywords that trigger auto-selection
- `execute(task)` — async method that does the work
### Built-in Agents

| Slug | Category (group) | Description |
|---|---|---|
| `cron` | System | Create, delete, and list cron scheduled tasks |
| `task-manager` | System | List, cancel, and retry Angie tasks |
| `workflow-manager` | System | Manage and trigger Angie workflows |
| `event-manager` | System | Query, filter, and manage Angie events |
| `github` | Dev | GitHub repository and PR management |
| `software-dev` | Dev | Turn GitHub issues into pull requests autonomously |
| `web` | Productivity | Browse URLs, take screenshots, extract and summarize web pages |
| `weather` | Lifestyle | Weather conditions, forecasts, and severe weather alerts |
More agents are planned — see the environment variables section below for services Angie will support.
### Adding a New Agent

```python
# src/angie/agents/my_category/my_agent.py
from angie.agents.base import BaseAgent


class MyAgent(BaseAgent):
    name = "My Agent"
    slug = "my-agent"
    description = "Does something useful"
    category = "General"
    capabilities = ["useful", "something"]

    async def execute(self, task: dict) -> dict:
        # Use the underlying SDK / API here
        return {"status": "success", "result": "done"}
```

Then add the module to `AGENT_MODULES` in `src/angie/agents/registry.py`.
## Teams & Workflows

### Creating a Team (API)

```bash
curl -X POST http://localhost:8000/api/v1/teams/ \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"name":"Dev Team","slug":"dev","agent_slugs":["github","software-dev"]}'
```
### Defining a Workflow (API)

```bash
curl -X POST http://localhost:8000/api/v1/workflows/ \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "name": "Morning Briefing",
    "slug": "morning-briefing",
    "trigger_event": "cron",
    "steps": [
      {"order": 1, "name": "Check weather", "agent_slug": "weather"},
      {"order": 2, "name": "Browse news", "agent_slug": "web"},
      {"order": 3, "name": "Report status", "agent_slug": "task-manager"}
    ]
  }'
```
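The engine presumably runs steps in ascending `order`. A minimal sketch of that sequential execution with a stubbed-out agent call (`run_step` and `run_workflow` are illustrative names, not Angie's API):

```python
# Sketch of sequential workflow execution; names are assumptions.
import asyncio


async def run_step(step: dict) -> str:
    # Placeholder for dispatching the step to the named agent.
    return f"{step['agent_slug']} done"


async def run_workflow(steps: list[dict]) -> list[str]:
    """Execute steps one at a time, in ascending 'order'."""
    results = []
    for step in sorted(steps, key=lambda s: s["order"]):
        results.append(await run_step(step))
    return results
```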
## Prompt Hierarchy

Every LLM call is built from layered prompts:

```
prompts/
  system.md       ← Core safety/persona rules (SYSTEM_PROMPT)
  angie.md        ← Angie's personality and behavior (ANGIE_PROMPT)
  user/<id>/      ← Generated from onboarding (USER_PROMPTS)
    personality.md
    communication.md
    preferences.md
```
- For agent tasks: `SYSTEM → ANGIE → AGENT_PROMPT`
- For user interactions: `SYSTEM → ANGIE → USER_PROMPTS`
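A minimal sketch of how such layers might be concatenated into a single prompt (the `build_prompt` helper is an assumption for illustration; Angie's actual loader may differ):

```python
# Illustrative layered-prompt assembly over the prompts/ tree above.
from pathlib import Path


def build_prompt(prompts_dir: Path, user_id: str) -> str:
    """Concatenate SYSTEM -> ANGIE -> USER_PROMPTS layers in order."""
    layers = [prompts_dir / "system.md", prompts_dir / "angie.md"]
    user_dir = prompts_dir / "user" / user_id
    if user_dir.is_dir():
        layers += sorted(user_dir.glob("*.md"))
    return "\n\n".join(p.read_text() for p in layers if p.exists())
```

The key property is ordering: system rules come first, Angie's persona second, and per-user context last, so later layers refine rather than override the safety layer.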
Reconfigure at any time:
```bash
angie setup  # re-run full onboarding
```
## Development

```bash
make help          # list all targets
make install       # install deps
make check         # lint + format check
make fix           # auto-fix lint + format
make md-check      # check Markdown formatting
make md-fix        # auto-format Markdown files

make test          # run all tests (unit + e2e)
make test-cov      # with coverage report
make test-single K=test_name       # single test

make migrate-new MSG="add column"  # new migration
make migrate       # apply migrations

make docker-up     # start MySQL + Redis
make docker-down   # stop
make docker-reset  # stop + wipe volumes

make build         # PyInstaller standalone binary → dist/angie
make dist          # build sdist + wheel into dist/
make clean-dist    # remove distribution artifacts
make clean         # remove all build artifacts
```
## Project Structure

```
angie/
├── src/angie/
│   ├── agents/       # Agent implementations + registry + teams
│   ├── api/          # FastAPI app + routers
│   ├── cache/        # Redis client + @cached decorator
│   ├── channels/     # Slack, Discord, iMessage, Email, WebChat
│   ├── cli/          # Click CLI commands
│   ├── core/         # Events, tasks, prompts, cron, loop, feedback
│   ├── db/           # SQLAlchemy session + generic repository
│   ├── models/       # SQLAlchemy ORM models
│   └── queue/        # Celery app + workers
├── frontend/         # Next.js 15 web UI
├── tests/
│   ├── e2e/          # End-to-end flow tests
│   ├── integration/  # (future) API integration tests
│   └── unit/         # Unit tests
├── prompts/          # Jinja2 prompt templates
├── alembic/          # DB migrations
├── docker/           # Dockerfiles
├── docker-compose.yml
├── Makefile
└── angie.spec        # PyInstaller spec
```
## Environment Variables & API Key Permissions

Copy `.env.example` to `.env`. The sections below describe every credential, where to obtain it, and the exact permissions/scopes required.
### Core (required)

| Variable | Description |
|---|---|
| `SECRET_KEY` | JWT signing secret. Generate with `openssl rand -hex 32`. Keep this private. |
| `DB_PASSWORD` | MySQL password for the `angie` database user. |
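If you prefer Python to openssl, the standard library's `secrets` module generates an equivalent value:

```python
# Equivalent of `openssl rand -hex 32`: 32 random bytes as 64 hex chars.
import secrets

secret_key = secrets.token_hex(32)
print(secret_key)
```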
### LLM Provider Selection

Angie supports three LLM providers. Set `LLM_PROVIDER` to choose:

| Provider | `LLM_PROVIDER` | Required Variables |
|---|---|---|
| GitHub Models API (default) | `github` | `GITHUB_TOKEN`, `COPILOT_MODEL` |
| OpenAI | `openai` | `OPENAI_API_KEY`, `COPILOT_MODEL` |
| Anthropic Claude | `anthropic` | `ANTHROPIC_API_KEY`, `ANTHROPIC_MODEL` |
### LLM — GitHub Models API (default)

| Variable | Description |
|---|---|
| `GITHUB_TOKEN` | GitHub OAuth token used to obtain a short-lived Copilot session token. |
| `COPILOT_MODEL` | Model to use (default: `gpt-4o`). Other options: `gpt-4o-mini`, `o1-mini`. |
| `COPILOT_API_BASE` | Copilot OpenAI-compatible endpoint (default: `https://api.githubcopilot.com`). |

How to get `GITHUB_TOKEN`:

- Go to GitHub → Settings → Developer Settings → Personal Access Tokens → Tokens (classic)
- Click Generate new token (classic)
- Under Scopes, enable:
  - `read:user` — required to identify your account
  - No additional scopes are needed; Copilot access is governed by your GitHub Copilot subscription, not token scopes
- You must have an active GitHub Copilot Individual, Business, or Enterprise subscription
- Paste the `ghp_...` token as `GITHUB_TOKEN`

Alternative: run `gh auth token` after authenticating with the GitHub CLI to get a token that already has the right access.
### OpenAI

| Variable | Description |
|---|---|
| `OPENAI_API_KEY` | OpenAI API key (`sk-...`). |
| `COPILOT_MODEL` | Model to use (default: `gpt-4o`). |

Get from: platform.openai.com → API Keys → Create new secret key. No special permissions needed — any key with access to `gpt-4o` works.
### Anthropic Claude

| Variable | Description |
|---|---|
| `ANTHROPIC_API_KEY` | Anthropic API key (`sk-ant-...`). |
| `ANTHROPIC_MODEL` | Model to use (default: `claude-sonnet-4-20250514`). Other options: `claude-opus-4-20250514`, `claude-haiku-4-20250514`. |

Get from: console.anthropic.com → API Keys → Create Key.
### Slack (optional)

| Variable | Description |
|---|---|
| `SLACK_BOT_TOKEN` | Bot OAuth token (`xoxb-...`) — for posting messages and reading channel events |
| `SLACK_APP_TOKEN` | App-level token (`xapp-...`) — required for Socket Mode (real-time events without a public URL) |
| `SLACK_SIGNING_SECRET` | Used to verify that incoming webhooks are from Slack |

How to create a Slack app:

- Go to api.slack.com/apps → Create New App → From scratch
- Under OAuth & Permissions → Bot Token Scopes, add:
  - `channels:history` — read messages in public channels
  - `channels:read` — list channels
  - `chat:write` — post messages as the bot
  - `im:history` — read direct messages
  - `im:read` — list DM conversations
  - `im:write` — open DM conversations
  - `users:read` — look up user info (for @-mentioning)
  - `app_mentions:read` — receive `@Angie` mentions
- Under Event Subscriptions, enable and subscribe to: `message.channels`, `message.im`, `app_mention`
- Under Socket Mode, enable Socket Mode and generate an App-Level Token with scope `connections:write` → this becomes `SLACK_APP_TOKEN`
- Install the app to your workspace → copy the Bot User OAuth Token → `SLACK_BOT_TOKEN`
- Under Basic Information → Signing Secret → `SLACK_SIGNING_SECRET`
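For reference, Slack's documented v0 signing scheme computes an HMAC-SHA256 over `v0:{timestamp}:{body}` and compares it to the `X-Slack-Signature` header. A hedged sketch of that check (the function name is illustrative, not Angie's channel code):

```python
# Sketch of Slack's documented v0 request-signature verification.
import hashlib
import hmac
import time


def verify_slack_signature(signing_secret: str, timestamp: str,
                           body: str, signature: str) -> bool:
    """Return True if the X-Slack-Signature header matches the payload."""
    if abs(time.time() - int(timestamp)) > 60 * 5:
        return False  # reject stale requests (replay protection)
    basestring = f"v0:{timestamp}:{body}"
    digest = hmac.new(signing_secret.encode(), basestring.encode(),
                      hashlib.sha256).hexdigest()
    return hmac.compare_digest(f"v0={digest}", signature)
```

Note that with Socket Mode enabled, events arrive over a WebSocket rather than public webhooks, so the signing secret mainly matters if you expose an HTTP events endpoint.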
### Discord (optional)

| Variable | Description |
|---|---|
| `DISCORD_BOT_TOKEN` | Bot token from the Discord Developer Portal |

How to create a Discord bot:

- Go to discord.com/developers/applications → New Application
- Under Bot, click Add Bot → copy the Token → `DISCORD_BOT_TOKEN`
- Under Bot → Privileged Gateway Intents, enable:
  - Message Content Intent — required to read message text
  - Server Members Intent — required to look up user info
  - Presence Intent — optional, for presence-aware responses
- Under OAuth2 → URL Generator, select scope `bot` with permissions: Send Messages, Read Message History, View Channels, Add Reactions
- Use the generated URL to invite the bot to your server
### iMessage via BlueBubbles (optional)

| Variable | Description |
|---|---|
| `BLUEBUBBLES_URL` | URL of your BlueBubbles server (e.g. `https://your-server.ngrok.io`) |
| `BLUEBUBBLES_PASSWORD` | BlueBubbles server password |

Requirements:

- A Mac that stays on with iMessage signed in
- BlueBubbles Server installed and running on that Mac
- A way to expose the server publicly (ngrok, Cloudflare Tunnel, or static IP)
- No Apple credentials needed — BlueBubbles uses the Mac's existing iMessage session
### GitHub Agent (optional — separate from the Copilot API token)

Prefer setting this from the Connections page in the Angie web UI; otherwise, set:

| Variable | Description |
|---|---|
| `GITHUB_PAT` | Personal Access Token for the GitHub agent (repo queries, PRs, issues) |

This is separate from `GITHUB_TOKEN` (which is used for the LLM). Create a fine-grained token at GitHub → Settings → Developer Settings → Fine-grained tokens with:

- `Contents: Read` — read repository files
- `Issues: Read and Write` — create/update issues
- `Pull requests: Read and Write` — create/update PRs
- `Metadata: Read` — required for all fine-grained tokens
## CI/CD

GitHub Actions workflows:

- `ci.yml` — runs on every push/PR: lint → format → markdown format → test → security → Docker build
- `deploy.yml` — runs on `v*.*.*` tags: builds the PyInstaller binary → GitHub Release + publishes `angie-ai` to PyPI (via `PYPI_TOKEN`)
## License
MIT