
SUNWÆE CLI - The Almost Everything CLI.


Quick Commands

pip install -e .
sunwaee gen                    # Interactive AI chat
sunwaee notes create ...       # CRUD on notes
sunwaee tasks create ...       # CRUD on tasks

Testing

pytest                         # All unit/integration tests (mocked, no API keys needed)
pytest -m live                 # Live tests against real provider APIs (API keys required)
pytest -m "not live"           # Explicit unit-only run (same as plain pytest)

Unit tests (tests/) use mocked HTTP clients — no API keys needed, safe for CI.

Live tests (tests/gen/engine/live/) call real provider APIs with the cheapest model per provider and assert that every Response field is correctly populated — including cost, usage, and performance. Results are saved to tests/gen/engine/live/run/ as JSON for inspection.

API keys are read from environment variables: ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_API_KEY, XAI_API_KEY. A test is skipped (not failed) when its key is absent.
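The skip-on-missing-key behavior could look like this — a minimal, hypothetical conftest.py sketch; the project's actual fixtures may differ:

```python
# conftest.py — hypothetical sketch of skipping live tests when a provider's
# API key is absent; the project's real test fixtures may differ.
import os

import pytest

PROVIDER_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
    "xai": "XAI_API_KEY",
}

def require_api_key(provider: str) -> str:
    """Return the provider's API key, or skip (not fail) the calling test."""
    env_var = PROVIDER_KEYS[provider]
    key = os.environ.get(env_var)
    if not key:
        pytest.skip(f"{env_var} not set; skipping live test")
    return key
```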

Directory Structure

cli/sunwaee/
├── cli.py                        # Entry point; auto-discovers modules via Typer
├── config.py                     # TOML config management + env var overrides
├── core/
│   ├── audit.py                  # Append-only JSONL audit trail
│   ├── editor.py                 # $EDITOR integration
│   ├── fs.py                     # Markdown + YAML frontmatter I/O
│   ├── git.py                    # Workspace git init + auto-commit on mutations
│   ├── logger.py                 # Structured logging
│   ├── output.py                 # JSON/rich output formatter (human vs api/sun mode)
│   └── tools.py                  # @tool() decorator + ok()/err() result helpers
└── modules/
    ├── workspaces/               # Workspace CRUD
    ├── projects/                 # Project grouping within workspaces
    ├── notes/                    # Note CRUD + full-text search
    ├── tasks/                    # Task CRUD + subtasks + filtering
    ├── logs/                     # Audit log browsing
    ├── shell/                    # Shell command execution tool (for AI agent)
    └── gen/                      # AI agent (Sun)
        ├── agent.py              # ReAct loop implementation
        ├── session.py            # Session persistence (JSONL)
        ├── tools.py              # Aggregates all module tools for the agent
        ├── commands/             # CLI subcommands (gen, models, sessions, set-model)
        ├── engine/               # ← LLM abstraction layer (also used by api/)
        │   ├── base.py
        │   ├── factory.py
        │   ├── types.py
        │   ├── model.py
        │   ├── models/           # Model definitions per provider
        │   └── providers/        # AnthropicEngine, OpenAIEngine, GoogleEngine
        └── repl/                 # Interactive REPL (Rich + prompt_toolkit)
            ├── loop.py
            ├── input.py          # Tab completion
            └── display.py        # Formatted output

Commands

sunwaee init
sunwaee workspaces {create,list,read,update,delete}
sunwaee projects {create,list,read,update,delete}
sunwaee notes {create,list,read,update,delete}
sunwaee tasks {create,list,read,update,delete}
sunwaee logs {list,read}
sunwaee gen [--session ID] [--model MODEL]
sunwaee gen models list
sunwaee gen sessions list
sunwaee gen set-model

Module auto-discovery: cli.py scans modules/ for directories with app = typer.Typer() and registers them automatically — no changes needed to the core to add a module.
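The auto-discovery step could be sketched as follows — a hypothetical implementation using pkgutil; the real cli.py may differ in detail:

```python
# Hypothetical sketch of Typer module auto-discovery; the actual logic in
# cli/sunwaee/cli.py may differ.
import importlib
import pkgutil

import typer

app = typer.Typer()

def discover_modules(package_name: str = "sunwaee.modules") -> None:
    """Register every sub-package of modules/ that exposes `app = typer.Typer()`."""
    package = importlib.import_module(package_name)
    for info in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f"{package_name}.{info.name}")
        sub_app = getattr(module, "app", None)
        if isinstance(sub_app, typer.Typer):
            # Sub-command name comes from the directory name (notes, tasks, ...)
            app.add_typer(sub_app, name=info.name)
```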

Data Storage

All data is file-based (no HTTP calls to api/ for CRUD):

  • Content: ~/sunwaee/workspaces/{workspace}/{module}/{slug}.md (Markdown + YAML frontmatter)
  • Sessions: ~/sunwaee/workspaces/{workspace}/gen/sessions/{id}.jsonl (append-only)
  • Audit logs: ~/sunwaee/workspaces/{workspace}/logs/{year}/{month}/{day}/logs.jsonl
  • Config: ~/.sunwaee/config.toml
  • Each workspace is a git repository; every mutation is auto-committed
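A content file pairs YAML frontmatter with a Markdown body. A minimal splitting sketch (the frontmatter values below are illustrative; core/fs.py's real parser may differ):

```python
# Hypothetical sketch of reading the Markdown + YAML frontmatter layout;
# the real I/O lives in core/fs.py and may differ.
def split_frontmatter(text: str) -> tuple[str, str]:
    """Split a document into (frontmatter, body); frontmatter is delimited
    by `---` lines at the top of the file."""
    if text.startswith("---\n"):
        head, _, body = text[4:].partition("\n---\n")
        return head, body.lstrip("\n")
    return "", text

doc = """---
title: Meeting notes
created_at: 2024-01-01T00:00:00Z
---

Body of the note in Markdown.
"""
front, body = split_frontmatter(doc)
```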

Output Modes

Controlled by SUNWAEE_CALLER env var:

  • human: Rich colored terminal output (default)
  • api or sun: JSON responses

JSON format:

{"ok": true, "data": {...}}
{"ok": false, "error": "message", "code": "NOT_FOUND"}

Error codes: NOT_FOUND, ALREADY_EXISTS, VALIDATION_ERROR, IO_ERROR, CONFIRMATION_REQUIRED
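The result helpers and caller check could be sketched like this — a hypothetical reconstruction of core/output.py and the ok()/err() helpers, which may differ from the real code:

```python
# Hypothetical sketch of the ok()/err() result helpers and caller-aware
# output mode; core/output.py and core/tools.py may differ in detail.
import json
import os

def ok(data: dict) -> str:
    return json.dumps({"ok": True, "data": data})

def err(message: str, code: str = "VALIDATION_ERROR") -> str:
    return json.dumps({"ok": False, "error": message, "code": code})

def is_json_mode() -> bool:
    """JSON responses for `api`/`sun` callers; Rich output for humans."""
    return os.environ.get("SUNWAEE_CALLER", "human") in {"api", "sun"}
```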

Module Pattern

Each module follows this structure:

# model.py
@dataclass
class Note:
    id: str = field(default_factory=_uuid)
    title: str = ""
    body: str = ""
    created_at: str = field(default_factory=_now)
    updated_at: str = field(default_factory=_now)

# tools.py — exposed to AI agent
@tool("Create, list, read, update or delete notes")
def notes(action: Literal["create","list","read","update","delete"], title: str = "", ...) -> str:
    # Uses core/fs, core/git, core/audit
    return ok(data) or err(message)

AI Agent ReAct Loop (modules/gen/agent.py)

async def stream_run(messages, tools, engine) -> AsyncIterator[Response]:
    for _ in range(10):  # cap the number of reasoning/tool iterations
        async for response in engine.stream(messages, tools):
            yield response
        # `response` now holds the final chunk from engine.stream()

        if response.stop_reason != StopReason.TOOL_USE or not response.tool_calls:
            break

        # Execute all tool calls concurrently
        results = await asyncio.gather(*[tc.fn(**tc.arguments) for tc in response.tool_calls])
        # Append tool results to messages, continue loop

Tools available to the agent: notes, tasks, workspaces, projects, shell.


LLM Engine Layer (cli/sunwaee/modules/gen/engine/)

This code is installed as the sunwaee PyPI package. Both api/ and cli/ import it via:

from sunwaee.modules.gen.engine import get_engine
from sunwaee.modules.gen.engine.types import Message, Tool, Response, ...

Types (engine/types.py)

class Role(Enum): SYSTEM, USER, ASSISTANT, TOOL
class StopReason(Enum): END_TURN, TOOL_USE, MAX_TOKENS

@dataclass
class Message:
    role: Role
    content: str | None = None
    reasoning_content: str | None = None     # Extended thinking / chain of thought
    reasoning_signature: str | None = None   # Anthropic: signature; Google: thoughtSignature
    tool_call_id: str | None = None          # For TOOL role messages
    tool_calls: list[ToolCall] | None = None # For ASSISTANT messages

@dataclass
class Tool:
    name: str
    description: str
    parameters: dict         # JSON Schema
    fn: Callable | None = None

@dataclass
class ToolCall:
    id: str
    name: str
    arguments: dict
    error: str | None = None
    duration: float = 0.0
    results: list[dict] | None = None

@dataclass
class Response:
    provider: str
    model: str
    streaming: bool = False
    synthetic: bool = False      # Sentinel messages (e.g., "reasoning in progress" for OpenAI)
    content: str | None = None
    reasoning_content: str | None = None
    reasoning_signature: str | None = None
    tool_calls: list[ToolCall] | None = None
    stop_reason: StopReason | None = None
    error: Error | None = None
    cost: Cost | None = None
    performance: Performance | None = None
    usage: Usage | None = None

@dataclass
class Usage:
    input_tokens: int = 0
    output_tokens: int = 0
    total_tokens: int = 0
    cache_read_tokens: int = 0
    cache_write_tokens: int = 0

@dataclass
class Cost:
    input: float = 0.0
    output: float = 0.0
    cache_read: float = 0.0
    cache_write: float = 0.0
    total: float = 0.0

@dataclass
class Performance:
    latency: float = 0.0            # Time to first chunk
    reasoning_duration: float = 0.0
    content_duration: float = 0.0
    total_duration: float = 0.0
    throughput: int = 0             # tokens/second

Factory (engine/factory.py)

def get_engine(
    provider: str,
    model: str,
    api_key: str | None = None,        # Falls back to {PROVIDER}_API_KEY env var
    max_tokens: int = 8192,
    thinking_budget: int | None = None, # For Anthropic/Google extended thinking
    reasoning_effort: str | None = None # For OpenAI: "low" | "medium" | "high"
) -> BaseEngine:

Provider routing:

  • "anthropic" → AnthropicEngine (api.anthropic.com)
  • "openai" → OpenAIEngine (api.openai.com/v1)
  • "deepseek" → OpenAIEngine (api.deepseek.com/v1)
  • "moonshot" → OpenAIEngine (api.moonshot.ai/v1)
  • "xai" → OpenAIEngine (api.x.ai/v1)
  • "google" → GoogleEngine

Engine Interface

class BaseEngine(ABC):
    @abstractmethod
    async def chat(self, messages: list[Message], tools: list[Tool] | None = None) -> Response: ...

    @abstractmethod
    async def stream(self, messages: list[Message], tools: list[Tool] | None = None) -> AsyncIterator[Response]: ...

Each chunk yielded during stream() carries incremental content or reasoning_content. The final chunk carries accumulated usage, cost, performance, and stop_reason.

Cost is computed automatically inside each engine (via compute_cost from engine/model.py) whenever usage is available. Both chat() and the final streaming chunk always set response.cost when the model is listed in the local registry.
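The cost computation amounts to per-token pricing applied to the Usage record. A minimal sketch with illustrative (not real) prices; the real compute_cost in engine/model.py reads prices from the model registry and fills a Cost dataclass:

```python
# Hypothetical sketch of cost computation from usage; prices are illustrative,
# and the real compute_cost in engine/model.py may differ.
from dataclasses import dataclass

@dataclass
class Usage:  # minimal stand-in for engine/types.py's Usage
    input_tokens: int = 0
    output_tokens: int = 0
    cache_read_tokens: int = 0
    cache_write_tokens: int = 0

@dataclass
class Pricing:
    # USD per million tokens (illustrative numbers, not real model prices)
    input: float
    output: float
    cache_read: float
    cache_write: float

def compute_cost(usage: Usage, pricing: Pricing) -> float:
    per = 1_000_000
    return (
        usage.input_tokens * pricing.input / per
        + usage.output_tokens * pricing.output / per
        + usage.cache_read_tokens * pricing.cache_read / per
        + usage.cache_write_tokens * pricing.cache_write / per
    )
```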

Provider-specific notes:

  • Anthropic: system prompt passed separately; supports extended thinking (thinking_budget); reasoning_signature in response; prompt caching enabled automatically (system prompt cached explicitly, conversation history via top-level cache_control)
  • OpenAI: system prompt included in messages; supports reasoning_effort; emits synthetic "reasoning in progress" Response for reasoning models
  • Google: system prompt in systemInstruction; thinking via thinkingConfig; no tool call IDs (uses function name as ID)
  • DeepSeek / Moonshot / xAI: use OpenAI-compatible endpoint via OpenAIEngine with custom base URL
