Phoenix
An AI that feels like it knows you.
Not a chatbot. Not a memory framework. Not a coding agent. Phoenix is a cognitive architecture where memory is the nervous system -- the AI that actually remembers who you are, notices when you contradict yourself, and picks up where you left off.
Read principle.md and vision.md before using, contributing, or forking. They are the founding documents. They are not optional.
What Phoenix Does That Other Agents Do Not
Five behaviors that no other AI agent ships reliably today:
- Resumption. Quit mid-task. Reopen days later. Phoenix picks up exactly where things were -- because it understood what was happening, not just recorded it.
- Correction. Phoenix notices when you contradict yourself and surfaces it respectfully, because the memory gate checks new facts against old ones before storing.
- Ambience. Phoenix speaks up at the right moment without being asked, within a token budget. Not spammy notifications -- genuine noticing.
- Recognition. Phoenix says "you mentioned X" (recognition) instead of "I found a relevant memory" (retrieval). Small phrasing distinction. Massive experiential difference.
- Consolidation. Over weeks, Phoenix notices patterns you did not -- that you ship more on Tuesdays, that you get stuck in the same refactor loop -- and brings them up when useful.
These are not features. They are behaviors that emerge from the architecture. No amount of wrapping OpenAI in a prompt will produce them.
What Phoenix Is Not
- Not a memory framework. mem0, Letta, Zep, and LangChain memory are storage with retrieval. Phoenix is a cognitive architecture.
- Not a Swiss Army knife. If you need 25+ messaging channels and 50+ LLM providers, use OpenClaw. Phoenix is a scalpel.
- Not a Claude Code clone. Claude Code is a coding agent. Phoenix is a personal agent that happens to be good at coding.
- Not design-by-committee. Contributions welcome. Generic contributions rejected.
Install
# Recommended: install with uv (fast)
uv pip install phoenix-os
# Or with pip
pip install phoenix-os
From source:
git clone https://github.com/harshalmore31/phoenix-os.git
cd phoenix-os
uv pip install -e . # or: pip install -e .
On first run, the setup wizard guides you through model selection and API key setup. Keys are stored in your OS keychain (macOS Keychain / Linux Secret Service), not in plaintext.
phoenix # first run triggers setup wizard
phoenix --setup # re-run setup anytime
Optional extras:
uv pip install "phoenix-os[embeddings]" # FastEmbed for memory (recommended)
uv pip install "phoenix-os[voice]" # Wake word + STT + TTS
uv pip install "phoenix-os[browser]" # Web automation
uv pip install "phoenix-os[pdf]" # PDF text extraction
uv pip install "phoenix-os[all]" # Everything
Quick Start
# Set your model and API key
export PHOENIX_MODEL=anthropic:claude-sonnet-4-20250514
export ANTHROPIC_API_KEY=sk-...
# Run
phoenix
Supported Models
Phoenix works with any model provider supported by pydantic-ai:
| Provider | Example | Env Variable |
|---|---|---|
| Anthropic | anthropic:claude-sonnet-4-20250514 | ANTHROPIC_API_KEY |
| OpenAI | openai:gpt-4o | OPENAI_API_KEY |
| Google Gemini | google-gla:gemini-2.0-flash | GEMINI_API_KEY |
| Groq | groq:llama-3.3-70b-versatile | GROQ_API_KEY |
| Mistral | mistral:mistral-large-latest | MISTRAL_API_KEY |
| DeepSeek | deepseek:deepseek-chat | DEEPSEEK_API_KEY |
| Ollama (local, free) | ollama:llama3:8b | None needed |
| OpenRouter | openrouter:... | OPENROUTER_API_KEY |
phoenix --model anthropic:claude-sonnet-4-20250514
phoenix --model ollama:llama3:8b # local, free
phoenix --model groq:llama-3.3-70b-versatile # fast inference
Usage
phoenix # Start new session
phoenix --continue # Resume last session
phoenix --resume SESSION_ID # Resume specific session
phoenix --sessions # List saved sessions
phoenix --model openai:gpt-4o # Override model
phoenix --auto # Auto-approve file edits
phoenix --yolo # Auto-approve everything
phoenix --voice # Enable voice I/O
Shell Commands
| Command | Description |
|---|---|
| /help | Show all commands |
| /tools | List available abilities |
| /agents | List available agents |
| /memory | Show memory stats |
| /model | Show or switch model |
| /mode | Show or switch approval mode (safe/auto/yolo) |
| /cost | Show token usage |
| /sessions | List saved sessions |
| /clear | Clear conversation history |
| /bye | Exit |
Architecture
Memory is the nervous system. Every other module exists to support it.
phoenix/
core/ # Engine -- builds agents, routes delegation, discovers plugins
memory/ # What Phoenix remembers -- cognitive pipeline (the spine)
abilities/ # What Phoenix can do -- each file is one pluggable capability
personality/ # Who Phoenix is -- YAML agent definitions + prompt generation
ambient/ # What Phoenix notices -- background intelligence, token-budgeted
voice/ # How Phoenix speaks/listens -- wake word, STT, TTS (optional)
shell/ # How users interact -- CLI loop, commands, sessions
hooks/ # How Phoenix extends -- lifecycle event hooks
config/ # How Phoenix is configured -- env, models, paths
Adding an Ability
Drop a Python file with the @ability decorator. No registration. No manifest. No plugin SDK:
# abilities/weather.py
from phoenix.abilities import ability

@ability(name="weather", description="Get current weather")
async def weather(city: str) -> str:
    return f"Weather in {city}: sunny, 22C"
Add weather to any agent's YAML abilities list. Done. Phoenix auto-discovers it at startup.
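That registration step is a one-line YAML edit. A sketch against a hypothetical agent file (the filename and the other abilities shown are illustrative):

```yaml
# personality/agents/assistant.yaml (any agent definition)
abilities:
  - read_file
  - weather   # auto-discovered from abilities/weather.py at startup
```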
Three tiers:
- Simple -- function with @ability. No Phoenix knowledge needed.
- With approval -- same function. Interceptor handles permissions externally.
- With context -- add ctx: PhoenixContext for memory, config, logging.
from phoenix.abilities import ability, PhoenixContext

@ability(name="smart_search", description="Search informed by memory")
async def smart_search(ctx: PhoenixContext, query: str) -> str:
    past = ctx.memory.pre_turn(query) if ctx.memory else ""
    return f"Results for: {query} (context: {past[:100]})"
Adding an Agent
Drop a YAML file. No Python changes:
# personality/agents/researcher.yaml
name: researcher
role: "Deep research and analysis"
tone:
  - thorough
  - analytical
rules:
  - "Always cite sources"
  - "Cross-reference multiple sources"
abilities:
  - read_file
  - grep
  - bash
meta:
  category: community
Phoenix picks it up at next startup.
Memory: Cognition, Not Storage
Most "AI with memory" products are storage with retrieval: save fact, find fact on query match. That is retrieval-augmented generation with extra steps. Anyone can build it in a weekend.
Phoenix is different in kind, not degree:
- Extract -- equation-based fact extraction from user input. Declarative signal, semantic scoring, intent detection. Zero LLM calls.
- Gate -- three-stage novelty filtering:
  - Duplicate detection
  - Contradiction detection (flag, supersede, or merge)
  - Output-aware gating (refuses to store the AI's own words as user facts)
- Store -- SQLite with graph edges connecting related memories
- Recall -- semantic similarity + memory strength + recency + spreading activation through the graph
- Feedback -- memories the AI actually uses get stronger. Unused memories decay. Your brain does this. mem0 does not.
- Consolidate -- clusters similar memories, adjusts weights, tracks emotional/contextual dimensions
This is not a memory framework you bolt on. It is the architecture the rest of Phoenix is built around.
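The recall-and-feedback loop described above can be pictured as a small scoring model. This is a minimal illustrative sketch, not Phoenix's actual internals: the weights, the seven-day half-life, and all function names here are assumptions.

```python
import math

def recall_score(similarity: float, strength: float, age_seconds: float,
                 half_life: float = 7 * 24 * 3600) -> float:
    """Rank a memory by semantic similarity, accumulated strength,
    and recency (exponential decay with an assumed one-week half-life)."""
    recency = math.exp(-age_seconds * math.log(2) / half_life)
    return 0.5 * similarity + 0.3 * strength + 0.2 * recency

def on_memory_used(strength: float, boost: float = 0.1) -> float:
    """Feedback: memories the AI actually uses get stronger, capped at 1.0."""
    return min(1.0, strength + boost)

def on_consolidation(strength: float, decay: float = 0.02) -> float:
    """Consolidation pass: unused memories decay a little, floored at 0.0."""
    return max(0.0, strength - decay)
```

Under this kind of scheme, two memories with identical embeddings rank differently depending on how often each has proven useful and how recently it was formed.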
Ambient Intelligence
A background daemon monitors system state (battery, disk, session length, time-of-day, idle) and nudges when genuinely useful. Zero tokens spent until something is worth saying. Token-budgeted (5000/day with emergency reserve). An LLM judge decides whether silence would be worse than noise before Phoenix speaks.
No other agent has this layer. Most send notifications. Phoenix notices.
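The budgeting behavior above can be sketched as a simple gate. The 5000/day figure comes from this README; the reserve size, class name, and method names are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AmbientBudget:
    """Daily token budget for ambient nudges (illustrative sketch)."""
    daily_limit: int = 5000
    emergency_reserve: int = 500  # assumed reserve size
    spent: int = 0

    def can_speak(self, cost: int, emergency: bool = False) -> bool:
        # Routine nudges must leave the reserve untouched;
        # emergencies may dip into it.
        ceiling = self.daily_limit if emergency else self.daily_limit - self.emergency_reserve
        return self.spent + cost <= ceiling

    def record(self, cost: int) -> None:
        self.spent += cost
```

The key property is that the gate is checked before any LLM call, so silence costs zero tokens.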
Voice
Optional. Say "Phoenix" to activate, speak naturally, get spoken responses. Supports Groq Whisper (cloud STT), Kokoro (local TTS), with macOS say as fallback.
pip install "phoenix-os[voice]"
phoenix --voice
Configuration
Environment variables:
| Variable | Default | Description |
|---|---|---|
| PHOENIX_MODEL | anthropic:claude-sonnet-4-20250514 | Model to use |
| PHOENIX_DIR | ~/.phoenix | Data directory |
| PHOENIX_LOG_LEVEL | WARNING | Logging level |
| ANTHROPIC_API_KEY | -- | Anthropic API key |
| OPENAI_API_KEY | -- | OpenAI API key |
| GROQ_API_KEY | -- | Groq API key (for voice STT) |
| PICOVOICE_ACCESS_KEY | -- | Picovoice key (for wake word) |
Hooks
Create hooks_config.json in your project root for lifecycle event hooks:
{
  "hooks": {
    "pre_turn": [
      {"command": "echo 'User said something'", "action": "log"}
    ],
    "pre_tool": [
      {"match": "bash", "command": "./approve.sh", "action": "approve_or_deny"}
    ]
  }
}
Events: pre_tool, post_tool, pre_turn, post_turn, on_error.
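One plausible reading of the pre_tool config above is: match hooks by tool name, run each matching command, and let an "approve_or_deny" hook veto the tool when its command exits non-zero. The function below is a hedged sketch of that semantics; the function name and the exit-code convention are assumptions, not Phoenix's documented behavior.

```python
import json
import subprocess

def run_pre_tool_hooks(config_path: str, tool_name: str) -> bool:
    """Evaluate pre_tool hooks from a hooks_config.json file.

    A hook with "match" set only fires for that tool name; a hook with
    action "approve_or_deny" approves iff its command exits 0 (assumed).
    """
    with open(config_path) as f:
        hooks = json.load(f).get("hooks", {}).get("pre_tool", [])
    for hook in hooks:
        if hook.get("match") and hook["match"] != tool_name:
            continue
        result = subprocess.run(hook["command"], shell=True)
        if hook.get("action") == "approve_or_deny" and result.returncode != 0:
            return False  # deny tool execution
    return True  # no hook objected
```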
Philosophy
Phoenix is built on two founding documents. Read them before using, contributing, or forking:
- principle.md -- What Phoenix believes. How decisions get made. The discipline.
- vision.md -- Where Phoenix is going over ten years. The horizon.
One-line principle: Phoenix competes on the ride, not the specs. Taste, coherence, and a singular editorial voice are the only defensible position against Anthropic and OpenAI's infinite engineering budgets.
One-line vision: Phoenix is the AI that will know you in ten years, because one person spent ten years refusing to make it anything else.
Contributing
Phoenix is open-source. That does not mean democratic.
Before opening a PR:
- Read principle.md. If your contribution would fail the "would a committee pick this?" test, do not open the PR.
- Read vision.md. If your contribution would pull Phoenix away from the horizon, do not open the PR.
- Issues, discussions, and bug reports are always welcome.
- New abilities and agents are welcome when they fit the taste of the project.
Generic contributions are rejected, regardless of code quality. This is deliberate. The discipline is the product.
License
Apache 2.0.
File details
Details for the file phoenix_os-0.2.1.tar.gz.
File metadata
- Download URL: phoenix_os-0.2.1.tar.gz
- Upload date:
- Size: 112.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5e1c4c2c4c527466ff662d34c1fc45951966976c5031e13c4501b59789d7ec8e |
| MD5 | d9c75241e35f4208231dee2e97961234 |
| BLAKE2b-256 | a6454ba12264fbfa06fcf02421c1c20bfbd1059e4ba73ce6255efbf23cd69162 |
Provenance
The following attestation bundles were made for phoenix_os-0.2.1.tar.gz:
Publisher: publish.yml on harshalmore31/phoenix-os
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: phoenix_os-0.2.1.tar.gz
- Subject digest: 5e1c4c2c4c527466ff662d34c1fc45951966976c5031e13c4501b59789d7ec8e
- Sigstore transparency entry: 1280605412
- Sigstore integration time:
- Permalink: harshalmore31/phoenix-os@f3450a2fd86ed0f926a2040ecdb876f6d796af82
- Branch / Tag: refs/tags/v0.2.1
- Owner: https://github.com/harshalmore31
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@f3450a2fd86ed0f926a2040ecdb876f6d796af82
- Trigger Event: release
File details
Details for the file phoenix_os-0.2.1-py3-none-any.whl.
File metadata
- Download URL: phoenix_os-0.2.1-py3-none-any.whl
- Upload date:
- Size: 122.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 16ae08249de9e5ba9db1e21b6a00a6366aeecf89367feb7117df5f7cb7dc6d2f |
| MD5 | 8acd3b93ec156db667047ed1b766ecb2 |
| BLAKE2b-256 | b70fb03bfb3d447164c0c93ef7b0b801183695696dd36a701e96e5ab246ef2c2 |
Provenance
The following attestation bundles were made for phoenix_os-0.2.1-py3-none-any.whl:
Publisher: publish.yml on harshalmore31/phoenix-os
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: phoenix_os-0.2.1-py3-none-any.whl
- Subject digest: 16ae08249de9e5ba9db1e21b6a00a6366aeecf89367feb7117df5f7cb7dc6d2f
- Sigstore transparency entry: 1280605416
- Sigstore integration time:
- Permalink: harshalmore31/phoenix-os@f3450a2fd86ed0f926a2040ecdb876f6d796af82
- Branch / Tag: refs/tags/v0.2.1
- Owner: https://github.com/harshalmore31
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@f3450a2fd86ed0f926a2040ecdb876f6d796af82
- Trigger Event: release