
Phoenix


An AI that feels like it knows you.

Not a chatbot. Not a memory framework. Not a coding agent. Phoenix is a cognitive architecture where memory is the nervous system -- the AI that actually remembers who you are, notices when you contradict yourself, and picks up where you left off.

Read principle.md and vision.md before using, contributing, or forking. They are the founding documents. They are not optional.


What Phoenix Does That Other Agents Do Not

Five behaviors that no other AI agent ships reliably today:

  • Resumption. Quit mid-task. Reopen days later. Phoenix picks up exactly where things were -- because it understood what was happening, not just recorded it.
  • Correction. Phoenix notices when you contradict yourself and surfaces it respectfully, because the memory gate checks new facts against old ones before storing.
  • Ambience. Phoenix speaks up at the right moment without being asked, within a token budget. Not spammy notifications -- genuine noticing.
  • Recognition. Phoenix says "you mentioned X" (recognition) instead of "I found a relevant memory" (retrieval). Small phrasing distinction. Massive experiential difference.
  • Consolidation. Over weeks, Phoenix notices patterns you did not -- that you ship more on Tuesdays, that you get stuck in the same refactor loop -- and brings them up when useful.

These are not features. They are behaviors that emerge from the architecture. No amount of wrapping OpenAI in a prompt will produce them.


What Phoenix Is Not

  • Not a memory framework. mem0, Letta, Zep, and LangChain memory are storage with retrieval. Phoenix is cognitive architecture.
  • Not a Swiss Army knife. If you need 25+ messaging channels and 50+ LLM providers, use OpenClaw. Phoenix is a scalpel.
  • Not a Claude Code clone. Claude Code is a coding agent. Phoenix is a personal agent that happens to be good at coding.
  • Not design-by-committee. Contributions welcome. Generic contributions rejected.

Install

# Recommended: install with uv (fast)
uv pip install phoenix-os

# Or with pip
pip install phoenix-os

From source:

git clone https://github.com/harshalmore31/phoenix-os.git
cd phoenix-os
uv pip install -e .   # or: pip install -e .

On first run, the setup wizard guides you through model selection and API key setup. Keys are stored in your OS keychain (macOS Keychain / Linux Secret Service), not in plaintext.

phoenix              # first run triggers setup wizard
phoenix --setup      # re-run setup anytime

Optional extras:

uv pip install "phoenix-os[embeddings]"           # EmbeddingGemma via sentence-transformers (recommended, multilingual, ~600MB)
uv pip install "phoenix-os[embeddings-fastembed]" # FastEmbed + MiniLM fallback (English only, ~90MB)
uv pip install "phoenix-os[voice]"       # Wake word + STT + TTS
uv pip install "phoenix-os[browser]"     # Web automation
uv pip install "phoenix-os[pdf]"         # PDF text extraction
uv pip install "phoenix-os[all]"         # Everything

Quick Start

# Set your model and API key
export PHOENIX_MODEL=anthropic:claude-sonnet-4-20250514
export ANTHROPIC_API_KEY=sk-...

# Run
phoenix

Supported Models

Phoenix works with any model provider supported by pydantic-ai:

Provider              Example                              Env Variable
Anthropic             anthropic:claude-sonnet-4-20250514   ANTHROPIC_API_KEY
OpenAI                openai:gpt-4o                        OPENAI_API_KEY
Google Gemini         google-gla:gemini-2.0-flash          GEMINI_API_KEY
Groq                  groq:llama-3.3-70b-versatile         GROQ_API_KEY
Mistral               mistral:mistral-large-latest         MISTRAL_API_KEY
DeepSeek              deepseek:deepseek-chat               DEEPSEEK_API_KEY
Ollama (local, free)  ollama:llama3:8b                     None needed
OpenRouter            openrouter:...                       OPENROUTER_API_KEY

phoenix --model anthropic:claude-sonnet-4-20250514
phoenix --model ollama:llama3:8b               # local, free
phoenix --model groq:llama-3.3-70b-versatile   # fast inference

Usage

phoenix                     # Start new session
phoenix --continue          # Resume last session
phoenix --resume SESSION_ID # Resume specific session
phoenix --sessions          # List saved sessions
phoenix --model openai:gpt-4o  # Override model
phoenix --auto              # Auto-approve file edits
phoenix --yolo              # Auto-approve everything
phoenix --voice             # Enable voice I/O

Shell Commands

Command    Description
/help      Show all commands
/tools     List available abilities
/agents    List available agents
/memory    Show memory stats
/model     Show or switch model
/mode      Show or switch approval mode (safe/auto/yolo)
/cost      Show token usage
/sessions  List saved sessions
/clear     Clear conversation history
/bye       Exit

Architecture

Memory is the nervous system. Every other module exists to support it.

phoenix/
    core/           # Engine -- builds agents, routes delegation, discovers plugins
    memory/         # What Phoenix remembers -- cognitive pipeline (the spine)
    abilities/      # What Phoenix can do -- each file is one pluggable capability
    personality/    # Who Phoenix is -- YAML agent definitions + prompt generation
    ambient/        # What Phoenix notices -- background intelligence, token-budgeted
    voice/          # How Phoenix speaks/listens -- wake word, STT, TTS (optional)
    shell/          # How users interact -- CLI loop, commands, sessions
    hooks/          # How Phoenix extends -- lifecycle event hooks
    config/         # How Phoenix is configured -- env, models, paths

Adding an Ability

Drop a Python file with the @ability decorator. No registration. No manifest. No plugin SDK:

# abilities/weather.py
from phoenix.abilities import ability

@ability(name="weather", description="Get current weather")
async def weather(city: str) -> str:
    return f"Weather in {city}: sunny, 22C"

Add weather to any agent's YAML abilities list. Done. Phoenix auto-discovers it at startup.
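
For example, in an agent's YAML (full format shown under "Adding an Agent" below):

abilities:
  - read_file
  - weather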

Three tiers:

  1. Simple -- function with @ability. No Phoenix knowledge needed.
  2. With approval -- same function. Interceptor handles permissions externally.
  3. With context -- add ctx: PhoenixContext for memory, config, logging.

from phoenix.abilities import ability, PhoenixContext

@ability(name="smart_search", description="Search informed by memory")
async def smart_search(ctx: PhoenixContext, query: str) -> str:
    past = ctx.memory.pre_turn(query) if ctx.memory else ""
    return f"Results for: {query} (context: {past[:100]})"

Adding an Agent

Drop a YAML file. No Python changes:

# personality/agents/researcher.yaml
name: researcher
role: "Deep research and analysis"
tone:
  - thorough
  - analytical

rules:
  - "Always cite sources"
  - "Cross-reference multiple sources"

abilities:
  - read_file
  - grep
  - bash

meta:
  category: community

Phoenix picks it up at next startup.


Memory: Cognition, Not Storage

Most "AI with memory" products are storage with retrieval: save fact, find fact on query match. That is retrieval-augmented generation with extra steps. Anyone can build it in a weekend.

Phoenix is different in kind, not degree:

  1. Extract -- equation-based fact extraction from user input. Declarative signal, semantic scoring, intent detection. Zero LLM calls.
  2. Gate -- three-stage novelty filtering (sketched after this list):
    • Duplicate detection
    • Contradiction detection (flag, supersede, or merge)
    • Output-aware gating (refuses to store the AI's own words as user facts)
  3. Store -- SQLite with graph edges connecting related memories
  4. Recall -- semantic similarity + memory strength + recency + spreading activation through the graph
  5. Feedback -- memories the AI actually uses get stronger. Unused memories decay. Your brain does this. mem0 does not.
  6. Consolidate -- clusters similar memories, adjusts weights, tracks emotional/contextual dimensions
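
A minimal sketch of the gate stage, assuming a hypothetical Fact shape and helper names -- Phoenix's actual memory module will differ:

from dataclasses import dataclass
import numpy as np

@dataclass
class Fact:
    text: str
    topic: str            # e.g. "editor"
    value: str            # e.g. "vim"
    vector: np.ndarray    # embedding of text

def gate(candidate: Fact, store: list[Fact], from_assistant: bool,
         dup_threshold: float = 0.95) -> str:
    # Output-aware gating: never store the AI's own words as user facts.
    if from_assistant:
        return "reject: assistant output"
    for old in store:
        sim = float(np.dot(candidate.vector, old.vector)
                    / (np.linalg.norm(candidate.vector) * np.linalg.norm(old.vector)))
        # Duplicate detection: near-identical facts are dropped.
        if sim >= dup_threshold:
            return "reject: duplicate"
        # Contradiction detection: same topic, conflicting value.
        if old.topic == candidate.topic and old.value != candidate.value:
            return "flag: contradiction (supersede or merge)"
    return "store"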

Task-typed embeddings via EmbeddingGemma-300M (768-dim, multilingual, 100+ languages): queries, documents, clusters, and symmetric comparisons each use their correct prompt prefix. FastEmbed + MiniLM remains as a smaller English-only fallback, selectable at phoenix --setup.
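
A sketch of task-typed prompts through sentence-transformers; the prompt names follow the EmbeddingGemma model card, and Phoenix's internal wiring may differ:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("google/embeddinggemma-300m")
# Queries and documents get different task prefixes via named prompts.
query_vec = model.encode("what was I refactoring?", prompt_name="query")
doc_vecs = model.encode(["Paused the auth-module refactor at step 3."],
                        prompt_name="document")
print(util.cos_sim(query_vec, doc_vecs))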

The recall thresholds are learnable. The learning/ module runs Bayesian optimization over your own memory corpus with an LLM-as-judge scoring retrieval quality per trial. You do not tune thresholds by hand -- your memory literally teaches the retrieval layer how to recall itself. See learning/README.md.
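
Schematically -- the real learning/ module uses Bayesian optimization, so a plain random search and a stubbed judge stand in here:

import random

def judge_quality(query: str, recalled: list[str]) -> float:
    """Stub for the LLM-as-judge: score retrieval quality in [0, 1]."""
    return 0.0  # replace with an LLM call

def tune_threshold(queries: list[str], recall_fn, trials: int = 30) -> float:
    best_t, best_score = 0.5, float("-inf")
    for _ in range(trials):
        t = random.uniform(0.2, 0.9)  # candidate recall threshold
        score = sum(judge_quality(q, recall_fn(q, t)) for q in queries) / len(queries)
        if score > best_score:
            best_t, best_score = t, score
    return best_t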

This is not a memory framework you bolt on. It is the architecture the rest of Phoenix is built around.


Ambient Intelligence

A background daemon monitors system state (battery, disk, session length, time-of-day, idle) and nudges when genuinely useful. Zero tokens spent until something is worth saying. Token-budgeted (5000/day with emergency reserve). An LLM judge decides whether silence would be worse than noise before Phoenix speaks.

No other agent has this layer. Most send notifications. Phoenix notices.
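
The budget half of that gate is easy to picture. A sketch using the 5000/day figure above; the reserve size and class name are assumptions:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class TokenBudget:
    daily_limit: int = 5000
    emergency_reserve: int = 500   # assumed size, held back for urgent nudges
    spent: int = 0
    day: date = field(default_factory=date.today)

    def can_spend(self, tokens: int, urgent: bool = False) -> bool:
        if date.today() != self.day:   # new day: reset the meter
            self.day, self.spent = date.today(), 0
        ceiling = self.daily_limit if urgent else self.daily_limit - self.emergency_reserve
        return self.spent + tokens <= ceiling

    def spend(self, tokens: int) -> None:
        self.spent += tokens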


Voice

Optional. Say "Phoenix" to activate, speak naturally, get spoken responses. Supports Groq Whisper (cloud STT) and Kokoro (local TTS), with macOS say as a fallback.

pip install "phoenix-os[voice]"
phoenix --voice

Configuration

Environment variables:

Variable              Default                              Description
PHOENIX_MODEL         anthropic:claude-sonnet-4-20250514   Model to use
PHOENIX_DIR           ~/.phoenix                           Data directory
PHOENIX_LOG_LEVEL     WARNING                              Logging level
ANTHROPIC_API_KEY     --                                   Anthropic API key
OPENAI_API_KEY        --                                   OpenAI API key
GROQ_API_KEY          --                                   Groq API key (for voice STT)
PICOVOICE_ACCESS_KEY  --                                   Picovoice key (for wake word)

Hooks

Create hooks_config.json in your project root for lifecycle event hooks:

{
  "hooks": {
    "pre_turn": [
      {"command": "echo 'User said something'", "action": "log"}
    ],
    "pre_tool": [
      {"match": "bash", "command": "./approve.sh", "action": "approve_or_deny"}
    ]
  }
}

Events: pre_tool, post_tool, pre_turn, post_turn, on_error.


Philosophy

Phoenix is built on two founding documents. Read them before using, contributing, or forking:

  • principle.md -- What Phoenix believes. How decisions get made. The discipline.
  • vision.md -- Where Phoenix is going over ten years. The horizon.

One-line principle: Phoenix competes on the ride, not the specs. Taste, coherence, and a singular editorial voice are the only defensible position against Anthropic and OpenAI's infinite engineering budgets.

One-line vision: Phoenix is the AI that will know you in ten years, because one person spent ten years refusing to make it anything else.


Contributing

Phoenix is open-source. That does not mean democratic.

Before opening a PR:

  1. Read principle.md. If your contribution would fail the "would a committee pick this?" test, do not open the PR.
  2. Read vision.md. If your contribution would pull Phoenix away from the horizon, do not open the PR.
  3. Issues, discussions, and bug reports are always welcome.
  4. New abilities and agents are welcome when they fit the taste of the project.

Generic contributions are rejected, regardless of code quality. This is deliberate. The discipline is the product.


License

Apache 2.0.
