
SUNWÆE CLI - The Almost Everything CLI.


Quick Commands

pip install -e .
sunwaee gen                    # Interactive AI chat
sunwaee notes create ...       # CRUD on notes
sunwaee tasks create ...       # CRUD on tasks

Testing

pytest                         # All unit/integration tests (mocked, no API keys needed)
pytest -m live                 # Live tests against real provider APIs (API keys required)
pytest -m "not live"           # Explicit unit-only run (same as plain pytest)

Unit tests (tests/) use mocked HTTP clients — no API keys needed, safe for CI.

Live tests (tests/gen/engine/live/) call real provider APIs with the cheapest model per provider and assert that every Response field is correctly populated — including cost, usage, and performance. Results are saved to tests/gen/engine/live/run/ as JSON for inspection.

API keys are read from environment variables: ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_API_KEY, XAI_API_KEY. A test is skipped (not failed) when its key is absent.
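
The skip-when-absent behavior can be implemented with a small helper; this is a hedged sketch of one common pattern, not the project's actual fixture (the helper name `require_api_key` is invented):

```python
import os

def require_api_key(env_var: str) -> str:
    """Return the key from the environment, or skip the calling test."""
    key = os.environ.get(env_var)
    if not key:
        import pytest  # imported lazily so the helper stays importable anywhere
        pytest.skip(f"{env_var} not set; skipping live test")
    return key
```

A live test then calls `require_api_key("ANTHROPIC_API_KEY")` at the top and is reported as skipped, not failed, when the variable is unset.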

Directory Structure

cli/sunwaee/
├── cli.py                        # Entry point; auto-discovers modules via Typer
├── config.py                     # TOML config management + env var overrides
├── core/
│   ├── audit.py                  # Append-only JSONL audit trail
│   ├── editor.py                 # $EDITOR integration
│   ├── fs.py                     # Markdown + YAML frontmatter I/O
│   ├── git.py                    # Workspace git init + auto-commit on mutations
│   ├── logger.py                 # Structured logging
│   ├── output.py                 # JSON/rich output formatter (human vs api/sun mode)
│   └── tools.py                  # @tool() decorator + ok()/err() result helpers
└── modules/
    ├── workspaces/               # Workspace CRUD
    ├── projects/                 # Project grouping within workspaces
    ├── notes/                    # Note CRUD + full-text search
    ├── tasks/                    # Task CRUD + subtasks + filtering
    ├── logs/                     # Audit log browsing
    ├── shell/                    # Shell command execution tool (for AI agent)
    └── gen/                      # AI agent (Sun)
        ├── agent.py              # ReAct loop implementation
        ├── session.py            # Session persistence (JSONL)
        ├── tools.py              # Aggregates all module tools for the agent
        ├── commands/             # CLI subcommands (gen, models, sessions, set-model)
        ├── engine/               # ← LLM abstraction layer (also used by api/)
        │   ├── base.py
        │   ├── factory.py
        │   ├── types.py
        │   ├── model.py
        │   ├── models/           # Model definitions per provider
        │   └── providers/        # AnthropicEngine, OpenAIEngine, GoogleEngine
        └── repl/                 # Interactive REPL (Rich + prompt_toolkit)
            ├── loop.py
            ├── input.py          # Tab completion
            └── display.py        # Formatted output

Commands

sunwaee init
sunwaee workspaces {create,list,read,update,delete}
sunwaee projects {create,list,read,update,delete}
sunwaee notes {create,list,read,update,delete}
sunwaee tasks {create,list,read,update,delete}
sunwaee logs {list,read}
sunwaee gen [--session ID] [--model MODEL]
sunwaee gen models list
sunwaee gen sessions list
sunwaee gen set-model

Module auto-discovery: cli.py scans modules/ for directories that define app = typer.Typer() and registers each one automatically — adding a new module requires no changes to the core.
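
The discovery step can be sketched with the standard library alone. This is a hypothetical reconstruction (the real cli.py mounts each sub-app with Typer's add_typer, which is elided here to keep the sketch dependency-free):

```python
import importlib
import pkgutil

def discover_module_apps(package_name: str) -> dict[str, object]:
    """Map each sub-module under `package_name` to its `app` object, if it has one."""
    package = importlib.import_module(package_name)
    apps = {}
    for info in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f"{package_name}.{info.name}")
        if hasattr(module, "app"):
            apps[info.name] = module.app
    return apps
```

In the real CLI, each discovered sub-app would then be mounted onto the root with something like `root_app.add_typer(sub_app, name=module_name)`.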

Data Storage

All data is file-based (no HTTP calls to api/ for CRUD):

  • Content: ~/sunwaee/workspaces/{workspace}/{module}/{slug}.md (Markdown + YAML frontmatter)
  • Sessions: ~/sunwaee/workspaces/{workspace}/gen/sessions/{id}.jsonl (append-only)
  • Audit logs: ~/sunwaee/workspaces/{workspace}/logs/{year}/{month}/{day}/logs.jsonl
  • Config: ~/.sunwaee/config.toml
  • Each workspace is a git repository; every mutation is auto-committed
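
A stored note might look like the following — a hypothetical example whose field names follow the Note dataclass shown below under Module Pattern, with invented values:

```markdown
---
id: 7c9e6679-7425-40de-944b-e07fc1f90ae7
title: Meeting notes
created_at: 2025-01-15T09:30:00Z
updated_at: 2025-01-15T09:30:00Z
---

Discussed the Q1 roadmap and follow-up tasks.
```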

Output Modes

Controlled by SUNWAEE_CALLER env var:

  • human: Rich colored terminal output (default)
  • api or sun: JSON responses

JSON format:

{"ok": true, "data": {...}}
{"ok": false, "error": "message", "code": "NOT_FOUND"}

Error codes: NOT_FOUND, ALREADY_EXISTS, VALIDATION_ERROR, IO_ERROR, CONFIRMATION_REQUIRED
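
The ok()/err() helpers from core/tools.py presumably just wrap this envelope; a minimal sketch, with signatures assumed from the JSON format above:

```python
import json

def ok(data: dict) -> str:
    """Wrap a successful result in the {"ok": true, "data": ...} envelope."""
    return json.dumps({"ok": True, "data": data})

def err(message: str, code: str = "VALIDATION_ERROR") -> str:
    """Wrap a failure with one of the documented error codes."""
    return json.dumps({"ok": False, "error": message, "code": code})
```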

Module Pattern

Each module follows this structure:

# model.py
@dataclass
class Note:
    id: str = field(default_factory=_uuid)
    title: str = ""
    body: str = ""
    created_at: str = field(default_factory=_now)
    updated_at: str = field(default_factory=_now)

# tools.py — exposed to AI agent
@tool("Create, list, read, update or delete notes")
def notes(action: Literal["create","list","read","update","delete"], title: str = "", ...) -> str:
    # Uses core/fs, core/git, core/audit
    # Returns ok(data) on success, err(message) on failure
    ...

AI Agent ReAct Loop (modules/gen/agent.py)

async def stream_run(messages, tools, engine) -> AsyncIterator[Response]:
    for _ in range(10):  # max ReAct turns
        response = None
        async for response in engine.stream(messages, tools):
            yield response

        if response is None or response.stop_reason != StopReason.TOOL_USE or not response.tool_calls:
            break

        # Execute all tool calls concurrently
        results = await asyncio.gather(*[tc.fn(**tc.arguments) for tc in response.tool_calls])
        # Append tool results to messages, continue loop

Tools available to the agent: notes, tasks, workspaces, projects, shell.
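
The loop shape above can be exercised end to end with a stub engine. Everything below except the loop itself (StubEngine, the trimmed dataclasses, the `add` tool) is invented for illustration:

```python
import asyncio
from dataclasses import dataclass, field
from enum import Enum, auto

class StopReason(Enum):
    END_TURN = auto()
    TOOL_USE = auto()

@dataclass
class ToolCall:
    fn: object
    arguments: dict

@dataclass
class Response:
    content: str = ""
    stop_reason: StopReason = StopReason.END_TURN
    tool_calls: list = field(default_factory=list)

class StubEngine:
    """First turn requests a tool call; second turn ends the conversation."""
    def __init__(self):
        self.turn = 0

    async def stream(self, messages, tools):
        self.turn += 1
        if self.turn == 1:
            async def add(a, b):
                return a + b
            yield Response(stop_reason=StopReason.TOOL_USE,
                           tool_calls=[ToolCall(fn=add, arguments={"a": 2, "b": 3})])
        else:
            yield Response(content="done", stop_reason=StopReason.END_TURN)

async def stream_run(messages, tools, engine, max_turns=10):
    for _ in range(max_turns):
        response = None
        async for response in engine.stream(messages, tools):
            yield response
        if response is None or response.stop_reason != StopReason.TOOL_USE or not response.tool_calls:
            break
        # Execute all tool calls concurrently, then feed results back in
        results = await asyncio.gather(*[tc.fn(**tc.arguments) for tc in response.tool_calls])
        messages.append({"role": "tool", "results": results})
```

Driving `stream_run([], [], StubEngine())` yields two Response objects: the tool-use turn, then the final "done" turn.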


LLM Engine Layer (cli/sunwaee/modules/gen/engine/)

This code is installed as the sunwaee PyPI package. Both api/ and cli/ import it via:

from sunwaee.modules.gen.engine import get_engine
from sunwaee.modules.gen.engine.types import Message, Tool, Response, ...

Types (engine/types.py)

class Role(Enum): SYSTEM, USER, ASSISTANT, TOOL
class StopReason(Enum): END_TURN, TOOL_USE, MAX_TOKENS

@dataclass
class Message:
    role: Role
    content: str | None = None
    reasoning_content: str | None = None     # Extended thinking / chain of thought
    reasoning_signature: str | None = None   # Anthropic: signature; Google: thoughtSignature
    tool_call_id: str | None = None          # For TOOL role messages
    tool_calls: list[ToolCall] | None = None # For ASSISTANT messages

@dataclass
class Tool:
    name: str
    description: str
    parameters: dict         # JSON Schema
    fn: Callable | None = None

@dataclass
class ToolCall:
    id: str
    name: str
    arguments: dict
    error: str | None = None
    duration: float = 0.0
    results: list[dict] | None = None

@dataclass
class Response:
    provider: str
    model: str
    streaming: bool = False
    synthetic: bool = False      # Sentinel messages (e.g., "reasoning in progress" for OpenAI)
    content: str | None = None
    reasoning_content: str | None = None
    reasoning_signature: str | None = None
    tool_calls: list[ToolCall] | None = None
    stop_reason: StopReason | None = None
    error: Error | None = None
    cost: Cost | None = None
    performance: Performance | None = None
    usage: Usage | None = None

@dataclass
class Usage:
    input_tokens: int = 0
    output_tokens: int = 0
    total_tokens: int = 0
    cache_read_tokens: int = 0
    cache_write_tokens: int = 0

@dataclass
class Cost:
    input: float = 0.0
    output: float = 0.0
    cache_read: float = 0.0
    cache_write: float = 0.0
    total: float = 0.0

@dataclass
class Performance:
    latency: float = 0.0            # Time to first chunk
    reasoning_duration: float = 0.0
    content_duration: float = 0.0
    total_duration: float = 0.0
    throughput: int = 0             # tokens/second

Factory (engine/factory.py)

def get_engine(
    provider: str,
    model: str,
    api_key: str | None = None,        # Falls back to {PROVIDER}_API_KEY env var
    max_tokens: int = 8192,
    thinking_budget: int | None = None, # For Anthropic/Google extended thinking
    reasoning_effort: str | None = None # For OpenAI: "low" | "medium" | "high"
) -> BaseEngine:

Provider routing:

  • "anthropic" → AnthropicEngine (api.anthropic.com)
  • "openai" → OpenAIEngine (api.openai.com/v1)
  • "deepseek" → OpenAIEngine (api.deepseek.com/v1)
  • "moonshot" → OpenAIEngine (api.moonshot.ai/v1)
  • "xai" → OpenAIEngine (api.x.ai/v1)
  • "google" → GoogleEngine
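
The routing table amounts to a small dispatch over provider names. A hedged sketch with the engine classes stubbed out (base URLs taken from the list above; the real factory also handles API keys and model options):

```python
class AnthropicEngine:
    base_url = "https://api.anthropic.com"

class OpenAIEngine:
    def __init__(self, base_url: str):
        self.base_url = base_url

class GoogleEngine:
    pass

def route(provider: str):
    """Pick the engine implementation for a provider name."""
    openai_compatible = {
        "openai": "https://api.openai.com/v1",
        "deepseek": "https://api.deepseek.com/v1",
        "moonshot": "https://api.moonshot.ai/v1",
        "xai": "https://api.x.ai/v1",
    }
    if provider == "anthropic":
        return AnthropicEngine()
    if provider == "google":
        return GoogleEngine()
    if provider in openai_compatible:
        return OpenAIEngine(base_url=openai_compatible[provider])
    raise ValueError(f"unknown provider: {provider}")
```

Note how three of the six providers collapse onto OpenAIEngine, differing only in base URL.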

Engine Interface

class BaseEngine(ABC):
    @abstractmethod
    async def chat(self, messages: list[Message], tools: list[Tool] | None = None) -> Response: ...

    @abstractmethod
    async def stream(self, messages: list[Message], tools: list[Tool] | None = None) -> AsyncIterator[Response]: ...

Each chunk yielded during stream() carries incremental content or reasoning_content. The final chunk carries accumulated usage, cost, performance, and stop_reason.

Cost is computed automatically inside each engine (via compute_cost from engine/model.py) whenever usage is available. Both chat() and the final streaming chunk always set response.cost when the model is listed in the local registry.
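
compute_cost itself is not shown here, but the arithmetic it must perform follows from the Usage and Cost shapes above. A hedged sketch with invented per-million-token prices (the real registry keys and price values may differ):

```python
from dataclasses import dataclass

@dataclass
class Usage:
    input_tokens: int = 0
    output_tokens: int = 0
    cache_read_tokens: int = 0
    cache_write_tokens: int = 0

@dataclass
class Cost:
    input: float = 0.0
    output: float = 0.0
    cache_read: float = 0.0
    cache_write: float = 0.0
    total: float = 0.0

def compute_cost(usage: Usage, prices: dict) -> Cost:
    """prices: USD per 1M tokens, with keys matching the Cost fields."""
    per = 1_000_000
    c = Cost(
        input=usage.input_tokens * prices["input"] / per,
        output=usage.output_tokens * prices["output"] / per,
        cache_read=usage.cache_read_tokens * prices["cache_read"] / per,
        cache_write=usage.cache_write_tokens * prices["cache_write"] / per,
    )
    c.total = c.input + c.output + c.cache_read + c.cache_write
    return c
```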

Provider-specific notes:

  • Anthropic: system prompt passed separately; supports extended thinking (thinking_budget); reasoning_signature in response
  • OpenAI: system prompt included in messages; supports reasoning_effort; emits synthetic "reasoning in progress" Response for reasoning models
  • Google: system prompt in systemInstruction; thinking via thinkingConfig; no tool call IDs (uses function name as ID)
  • DeepSeek / Moonshot / xAI: use OpenAI-compatible endpoint via OpenAIEngine with custom base URL
