# SUNWÆE CLI - The Almost Everything CLI
## Quick Commands

```bash
pip install -e .

sunwaee gen              # Interactive AI chat
sunwaee notes create ... # CRUD on notes
sunwaee tasks create ... # CRUD on tasks
```
## Testing

```bash
pytest               # All unit/integration tests (mocked, no API keys needed)
pytest -m live       # Live tests against real provider APIs (API keys required)
pytest -m "not live" # Explicit unit-only run (same as plain pytest)
```
- Unit tests (`tests/`) use mocked HTTP clients; no API keys needed, safe for CI.
- Live tests (`tests/gen/engine/live/`) call real provider APIs with the cheapest model per provider and assert that every `Response` field is correctly populated, including cost, usage, and performance. Results are saved to `tests/gen/engine/live/run/` as JSON for inspection.
- API keys are read from environment variables: `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GOOGLE_API_KEY`, `XAI_API_KEY`. A test is skipped (not failed) when its key is absent.
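A hedged sketch of the skip pattern this implies; `requires_key` is an illustrative helper, not the project's actual code, and the real tests live under `tests/gen/engine/live/`:

```python
import os

import pytest

def requires_key(var: str):
    """Skip (not fail) a live test when its provider key is absent."""
    return pytest.mark.skipif(not os.environ.get(var), reason=f"{var} not set")

@pytest.mark.live
@requires_key("ANTHROPIC_API_KEY")
def test_anthropic_response_fields():
    ...  # calls the real API and asserts cost/usage/performance are populated
```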
## Directory Structure

```
cli/sunwaee/
├── cli.py                  # Entry point; auto-discovers modules via Typer
├── config.py               # TOML config management + env var overrides
├── core/
│   ├── audit.py            # Append-only JSONL audit trail
│   ├── editor.py           # $EDITOR integration
│   ├── fs.py               # Markdown + YAML frontmatter I/O
│   ├── git.py              # Workspace git init + auto-commit on mutations
│   ├── logger.py           # Structured logging
│   ├── output.py           # JSON/rich output formatter (human vs api/sun mode)
│   └── tools.py            # @tool() decorator + ok()/err() result helpers
└── modules/
    ├── workspaces/         # Workspace CRUD
    ├── projects/           # Project grouping within workspaces
    ├── notes/              # Note CRUD + full-text search
    ├── tasks/              # Task CRUD + subtasks + filtering
    ├── logs/               # Audit log browsing
    ├── shell/              # Shell command execution tool (for AI agent)
    └── gen/                # AI agent (Sun)
        ├── agent.py        # ReAct loop implementation
        ├── session.py      # Session persistence (JSONL)
        ├── tools.py        # Aggregates all module tools for the agent
        ├── commands/       # CLI subcommands (gen, models, sessions, set-model)
        ├── engine/         # ← LLM abstraction layer (also used by api/)
        │   ├── base.py
        │   ├── factory.py
        │   ├── types.py
        │   ├── model.py
        │   ├── models/     # Model definitions per provider
        │   └── providers/  # AnthropicEngine, OpenAIEngine, GoogleEngine
        └── repl/           # Interactive REPL (Rich + prompt_toolkit)
            ├── loop.py
            ├── input.py    # Tab completion
            └── display.py  # Formatted output
```
## Commands

```bash
sunwaee init
sunwaee workspaces {create,list,read,update,delete}
sunwaee projects {create,list,read,update,delete}
sunwaee notes {create,list,read,update,delete}
sunwaee tasks {create,list,read,update,delete}
sunwaee logs {list,read}
sunwaee gen [--session ID] [--model MODEL]
sunwaee gen models list
sunwaee gen sessions list
sunwaee gen set-model
```
Module auto-discovery: `cli.py` scans `modules/` for directories that define `app = typer.Typer()` and registers them automatically; adding a module requires no changes to the core.
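A minimal sketch of that discovery pattern, assuming the standard pkgutil approach; the real `cli.py` may differ in details:

```python
import importlib
import pkgutil

import typer

from sunwaee import modules

app = typer.Typer()

# Walk the modules/ package and mount every submodule that exposes
# an `app = typer.Typer()` attribute as a subcommand group.
for info in pkgutil.iter_modules(modules.__path__):
    mod = importlib.import_module(f"sunwaee.modules.{info.name}")
    sub_app = getattr(mod, "app", None)
    if isinstance(sub_app, typer.Typer):
        app.add_typer(sub_app, name=info.name)
```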
## Data Storage

All data is file-based (no HTTP calls to api/ for CRUD):

- Content: `~/sunwaee/workspaces/{workspace}/{module}/{slug}.md` (Markdown + YAML frontmatter)
- Sessions: `~/sunwaee/workspaces/{workspace}/gen/sessions/{id}.jsonl` (append-only)
- Audit logs: `~/sunwaee/workspaces/{workspace}/logs/{year}/{month}/{day}/logs.jsonl`
- Config: `~/.sunwaee/config.toml`
- Each workspace is a git repository; every mutation is auto-committed
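A minimal sketch of the content layout, assuming PyYAML; `write_note` and the `default` workspace are illustrative, not the real core/fs.py API:

```python
from pathlib import Path

import yaml

def write_note(path: Path, meta: dict, body: str) -> None:
    """Write a Markdown file with a YAML frontmatter block."""
    path.parent.mkdir(parents=True, exist_ok=True)
    frontmatter = yaml.safe_dump(meta, sort_keys=False).strip()
    path.write_text(f"---\n{frontmatter}\n---\n\n{body}\n")

write_note(
    Path.home() / "sunwaee/workspaces/default/notes/hello.md",
    {"id": "abc123", "title": "Hello", "created_at": "2025-01-01T00:00:00Z"},
    "First note body.",
)
```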
## Output Modes

Controlled by the SUNWAEE_CALLER env var:

- `human`: Rich colored terminal output (default)
- `api` or `sun`: JSON responses

JSON format:

```json
{"ok": true, "data": {...}}
{"ok": false, "error": "message", "code": "NOT_FOUND"}
```

Error codes: `NOT_FOUND`, `ALREADY_EXISTS`, `VALIDATION_ERROR`, `IO_ERROR`, `CONFIRMATION_REQUIRED`
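A hedged sketch of the ok()/err() helpers this envelope implies; the real signatures in core/tools.py may differ:

```python
import json

def ok(data: dict) -> str:
    """Success envelope: {"ok": true, "data": ...}."""
    return json.dumps({"ok": True, "data": data})

def err(message: str, code: str) -> str:
    """Error envelope carrying one of the documented error codes."""
    return json.dumps({"ok": False, "error": message, "code": code})
```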
## Module Pattern

Each module follows this structure:

```python
# model.py
@dataclass
class Note:
    id: str = field(default_factory=_uuid)
    title: str = ""
    body: str = ""
    created_at: str = field(default_factory=_now)
    updated_at: str = field(default_factory=_now)
```

```python
# tools.py — exposed to AI agent
@tool("Create, list, read, update or delete notes")
def notes(action: Literal["create", "list", "read", "update", "delete"], title: str = "", ...) -> str:
    # Uses core/fs, core/git, core/audit
    return ok(data) or err(message)
```
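The @tool() decorator itself isn't shown in this document; a minimal hedged sketch of what core/tools.py plausibly does (attribute names are illustrative):

```python
from typing import Callable

def tool(description: str) -> Callable:
    """Tag a function with a description so gen/tools.py can aggregate it."""
    def wrap(fn: Callable) -> Callable:
        fn._tool_description = description  # illustrative attribute name
        return fn
    return wrap
```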
## AI Agent ReAct Loop (modules/gen/agent.py)

```python
async def stream_run(messages, tools, engine) -> AsyncIterator[Response]:
    for _ in range(10):  # hard cap on ReAct iterations
        final = None
        async for response in engine.stream(messages, tools):
            yield response
            final = response
        if final is None or final.stop_reason != StopReason.TOOL_USE or not final.tool_calls:
            break
        # Execute all tool calls concurrently
        results = await asyncio.gather(*[tc.fn(**tc.arguments) for tc in final.tool_calls])
        # Append tool results to messages, then continue the loop
```
Tools available to the agent: notes, tasks, workspaces, projects, shell.
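A hedged sketch of the elided "append tool results" step, using the engine types described below; exact handling in agent.py may differ:

```python
# Record the assistant turn that requested the tools, then one TOOL
# message per call, keyed by tool_call_id.
messages.append(Message(role=Role.ASSISTANT, tool_calls=final.tool_calls))
for tc, result in zip(final.tool_calls, results):
    messages.append(Message(role=Role.TOOL, tool_call_id=tc.id, content=result))
```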
## LLM Engine Layer (cli/sunwaee/modules/gen/engine/)

This code is installed as the sunwaee PyPI package. Both api/ and cli/ import it via:

```python
from sunwaee.modules.gen.engine import get_engine
from sunwaee.modules.gen.engine.types import Message, Tool, Response, ...
```
### Types (engine/types.py)

```python
from __future__ import annotations

from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class Role(Enum):
    SYSTEM = auto()
    USER = auto()
    ASSISTANT = auto()
    TOOL = auto()

class StopReason(Enum):
    END_TURN = auto()
    TOOL_USE = auto()
    MAX_TOKENS = auto()

@dataclass
class Message:
    role: Role
    content: str | None = None
    reasoning_content: str | None = None      # Extended thinking / chain of thought
    reasoning_signature: str | None = None    # Anthropic: signature; Google: thoughtSignature
    tool_call_id: str | None = None           # For TOOL role messages
    tool_calls: list[ToolCall] | None = None  # For ASSISTANT messages

@dataclass
class Tool:
    name: str
    description: str
    parameters: dict  # JSON Schema
    fn: Callable | None = None

@dataclass
class ToolCall:
    id: str
    name: str
    arguments: dict
    error: str | None = None
    duration: float = 0.0
    results: list[dict] | None = None

@dataclass
class Response:
    provider: str
    model: str
    streaming: bool = False
    synthetic: bool = False  # Sentinel messages (e.g., "reasoning in progress" for OpenAI)
    content: str | None = None
    reasoning_content: str | None = None
    reasoning_signature: str | None = None
    tool_calls: list[ToolCall] | None = None
    stop_reason: StopReason | None = None
    error: Error | None = None
    cost: Cost | None = None
    performance: Performance | None = None
    usage: Usage | None = None

@dataclass
class Usage:
    input_tokens: int = 0
    output_tokens: int = 0
    total_tokens: int = 0
    cache_read_tokens: int = 0
    cache_write_tokens: int = 0

@dataclass
class Cost:
    input: float = 0.0
    output: float = 0.0
    cache_read: float = 0.0
    cache_write: float = 0.0
    total: float = 0.0

@dataclass
class Performance:
    latency: float = 0.0  # Time to first chunk
    reasoning_duration: float = 0.0
    content_duration: float = 0.0
    total_duration: float = 0.0
    throughput: int = 0  # tokens/second
```
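A short illustrative construction using these types; the strings are placeholders, not values the package mandates:

```python
msgs = [
    Message(role=Role.SYSTEM, content="You are Sun, the sunwaee agent."),
    Message(role=Role.USER, content="List my open tasks."),
]

tasks_tool = Tool(
    name="tasks",
    description="CRUD on tasks",
    parameters={
        "type": "object",
        "properties": {"action": {"type": "string"}},
        "required": ["action"],
    },
)
```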
### Factory (engine/factory.py)

```python
def get_engine(
    provider: str,
    model: str,
    api_key: str | None = None,          # Falls back to {PROVIDER}_API_KEY env var
    max_tokens: int = 8192,
    thinking_budget: int | None = None,  # For Anthropic/Google extended thinking
    reasoning_effort: str | None = None, # For OpenAI: "low" | "medium" | "high"
) -> BaseEngine: ...
```

Provider routing:

- `"anthropic"` → AnthropicEngine (api.anthropic.com)
- `"openai"` → OpenAIEngine (api.openai.com/v1)
- `"deepseek"` → OpenAIEngine (api.deepseek.com/v1)
- `"moonshot"` → OpenAIEngine (api.moonshot.ai/v1)
- `"xai"` → OpenAIEngine (api.x.ai/v1)
- `"google"` → GoogleEngine
### Engine Interface

```python
class BaseEngine(ABC):
    @abstractmethod
    async def chat(self, messages: list[Message], tools: list[Tool] | None = None) -> Response: ...

    @abstractmethod
    async def stream(self, messages: list[Message], tools: list[Tool] | None = None) -> AsyncIterator[Response]: ...
```
Each chunk yielded during `stream()` carries incremental `content` or `reasoning_content`. The final chunk carries accumulated `usage`, `cost`, `performance`, and `stop_reason`.
Cost is computed automatically inside each engine (via `compute_cost` from `engine/model.py`) whenever usage is available. Both `chat()` and the final streaming chunk always set `response.cost` when the model is listed in the local registry.
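An illustrative consumer of `stream()` (inside an async function, reusing the `engine`, `msgs`, and `tasks_tool` names from the sketches above):

```python
chunks: list[str] = []
final = None
async for r in engine.stream(msgs, tools=[tasks_tool]):
    if r.content:
        chunks.append(r.content)  # incremental content
    final = r  # last chunk carries usage/cost/performance/stop_reason

print("".join(chunks))
if final and final.usage and final.cost:
    print(final.usage.total_tokens, final.cost.total, final.stop_reason)
```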
Provider-specific notes:

- Anthropic: system prompt passed separately; supports extended thinking (`thinking_budget`); `reasoning_signature` in response
- OpenAI: system prompt included in messages; supports `reasoning_effort`; emits a synthetic "reasoning in progress" `Response` for reasoning models
- Google: system prompt in `systemInstruction`; thinking via `thinkingConfig`; no tool call IDs (uses function name as ID)
- DeepSeek / Moonshot / xAI: use the OpenAI-compatible endpoint via `OpenAIEngine` with a custom base URL
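For the OpenAI-compatible providers, the routing plausibly reduces to a base-URL swap inside the factory; a hedged sketch, where the `OpenAIEngine` constructor signature is an assumption:

```python
# Hypothetical routing sketch; constructor parameters are assumptions.
if provider in ("deepseek", "moonshot", "xai"):
    base_urls = {
        "deepseek": "https://api.deepseek.com/v1",
        "moonshot": "https://api.moonshot.ai/v1",
        "xai": "https://api.x.ai/v1",
    }
    return OpenAIEngine(model=model, api_key=api_key, base_url=base_urls[provider])
```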