Python-first Coding Agent API and CLI.

sagent🪄

Typed Python library and CLI for multi-provider, multi-agent LLM applications.

Tutorial · Concepts · Providers · Tools · CLI · Sessions · Security · Architecture · API · Streaming · Compaction · Slack · Self-hosted · Examples

from sagent import tools
from sagent.agent import Agent
from sagent.lib.json import json_freeze
from sagent.providers import Google

agent = Agent(
    model=Google.from_env().model("gemini-3.1-pro-preview"),
    system="You are a scientist.",
    tools=[tools.Read(), tools.Glob(), tools.Grep()],
)
result = await agent.run(json_freeze({"prompt": "analyze the CSV in ./data/"}))
print(result.content)

Why sagent exists

Most serious coding agents are CLIs, editor extensions, hosted assistants, or non-Python runtimes. Sagent gives you the agent runtime as typed Python objects you can import, compose, test, and embed. The CLI is one entry point over the library, not the center of the design.

Use Sagent when you want:

  • a Python API first and a CLI second;
  • provider swapping without changing the agent loop;
  • custom tools as normal Python objects;
  • session persistence and compaction;
  • child agents and peer messaging for review, delegation, and map-reduce work.

Three pieces make Sagent distinctive:

  • Hot-swappable providers. The same agent, tools, session, and compactor can run against Anthropic, OpenAI, Google, Moonshot, DashScope, MiniMax, or an OpenAI-compatible endpoint.
  • Multi-agent primitives. AgentSelf, AgentSpawn, and AgentSend let agents inspect themselves, spawn isolated children, and send messages to named peers.
  • Typed runtime objects. Agent, Tool, Message, Model, and Provider are Python protocols and dataclasses that can be used directly from application code.

What sagent does

  • Runs agents against Anthropic, OpenAI, Google, Moonshot, DashScope, MiniMax, and OpenAI-compatible endpoints.
  • Exposes tools for local files, shell commands, web access, paper search, and agent coordination.
  • Keeps the same Agent behind CLI, Slack, parent agents, and application code.
  • Represents provider responses, tool calls, tool results, and user messages as typed Message objects.
  • Lets agents call AgentSelf, AgentSpawn, and AgentSend as ordinary tools.

Install

pip install sagent

Sagent requires Python 3.12.

Quickstart: CLI

export GOOGLE_API_KEY=...
sagent --provider Google --model gemini-3.1-pro-preview

For non-interactive use, pipe a prompt on stdin:

printf 'Say hi in one sentence.' | \
  sagent --provider Google --model gemini-3.1-pro-preview \
  --output-format json

Use --continue to resume the most recent session for this working directory, --session PATH for an explicit session directory, or --no-session-persistence when prompts and auto-memory should not be written to disk. Use --max-budget-usd N to cap API spend for the current run.

Quickstart: Python

import asyncio

from sagent import tools
from sagent.agent import Agent
from sagent.lib.json import json_freeze
from sagent.providers import Anthropic


async def main() -> None:
    agent = Agent(
        model=Anthropic.from_env().model("claude-sonnet-4-6"),
        system="You are a concise coding assistant.",
        tools=[tools.Read(), tools.Grep(), tools.Glob()],
    )
    result = await agent.run(json_freeze({"prompt": "Summarize README.md"}))
    print(result.content)


asyncio.run(main())

Agent.run() accepts a JSON directive with a prompt key and returns a Message.

Provider setup

Sagent ships API-key providers for Anthropic, OpenAI, Google, Moonshot, DashScope, MiniMax, and generic OpenAI-compatible endpoints. Set the key for the provider you plan to use:

export ANTHROPIC_API_KEY=...
export OPENAI_API_KEY=...
export GOOGLE_API_KEY=...
export MOONSHOT_API_KEY=...
export DASHSCOPE_API_KEY=...
export MINIMAX_API_KEY=...
Provider Environment variable Example model
Anthropic ANTHROPIC_API_KEY claude-sonnet-4-6
OpenAI OPENAI_API_KEY gpt-5.5
Google GOOGLE_API_KEY gemini-3.1-pro-preview
Moonshot MOONSHOT_API_KEY kimi-k2.6
DashScope DASHSCOPE_API_KEY qwen3.6-plus
MiniMax MINIMAX_API_KEY MiniMax-M2.7
SelfHosted SAGENT_SELFHOSTED_MODEL Qwen/Qwen3.6-27B

See Providers for more detail.

Examples

The examples/ directory contains small, runnable examples:

  • offline_custom_tool.py: run an agent/tool/model loop without API keys.
  • decorator_tool.py: wrap a function as a tool.
  • custom_tool.py: implement the full Tool protocol.
  • multi_agent_reviewer.py: spawn an isolated reviewer child.
  • openai_compatible_provider.py: connect an OpenAI-compatible endpoint.

Start with the tutorial, then use the examples as copyable patterns.

Concepts

Sagent has five core contracts: Message, Tool, Model, Provider, and Agent.

  • Message is the typed payload that crosses providers, tools, sessions, compaction, and UI surfaces.
  • Tool receives a JSON directive in a message and returns a message.
  • Model is the backend request/response interface.
  • Provider owns authentication and constructs models.
  • Agent owns the loop, model, tools, inbox, session, and compactor.

TextMessage is intentionally central: it is the common communication interface across the runtime.

See Concepts and Architecture.

Inbox zero

Most agent frameworks are turn-based: user sends a message, agent processes it, agent responds, repeat. Sagent instead uses a drain-run-check loop:

while True:
    drain inbox into user messages
    call model
    if tool calls exist: dispatch tools and loop
    if inbox is empty and model is done: go idle

The agent goes idle only when the inbox is empty and the model has nothing left to do. It wakes when anything lands in the inbox.

Every surface - REPL, Slack, CLI, parent agent, or application code - puts messages in the same inbox. User input, background task results, and agent-to-agent messages use the same mechanism instead of separate plumbing. User messages go to the front; background and peer messages append at the back.

Context-affecting slash commands follow the same rule. /clear is queued and interpreted at the agent's single inbox drain point. Surface-local commands that do not mutate model context, such as /model, may be handled by the REPL before entering the inbox.
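The drain-run-check loop above can be sketched as a plain Python function. The names and message shapes here are illustrative, not sagent's internals:

```python
from collections import deque

def run_inbox_loop(inbox: deque[str], model, max_turns: int = 10) -> list[str]:
    """Drain-run-check: drain the inbox, call the model, dispatch tools, idle when done."""
    transcript: list[str] = []
    for _ in range(max_turns):
        # Drain: move every queued message into this turn's input.
        user_msgs = list(inbox)
        inbox.clear()
        # Run: call the model with whatever was drained.
        reply, tool_calls = model(user_msgs)
        transcript.append(reply)
        if tool_calls:
            # Dispatch tools; their results land back in the same inbox.
            for call in tool_calls:
                inbox.append(f"tool-result:{call}")
            continue
        # Check: go idle only when the inbox is empty and the model is done.
        if not inbox:
            break
    return transcript
```

Because tool results, user input, and peer messages all re-enter through the same `inbox`, there is no separate plumbing for each source.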

Message: typed payloads plus graph edges

Sagent messages use MIME-style descriptors for heterogeneous payloads, plus ids and parent ids for graph structure. The public Message union contains TextMessage, BytesMessage, JsonMessage, and MultipartMessage.

MultipartMessage content is recursive: compound messages hold nested messages. An assistant turn containing text, thinking, and tool calls uses the same message graph as a single text chunk. Descriptors such as text/plain, multipart/x-tool-call, and application/x-done tell callers how to interpret the payload.

Tool: one input message, one output message

Tools are normal Python objects with a small protocol:

class Tool(Protocol):
    name: str
    tool_id: str
    description: str
    directive_schema: JSON
    supports_microcompaction: bool

    def summary(self, msg: Message) -> str: ...
    def prompt(self) -> str | None: ...
    async def run(self, msg: Message) -> Message: ...

Input is a Message with a JSON directive. Output is a Message. Expected tool failures return descriptor="text/x-error" rather than raising through the agent loop.

Agent follows the same interface pattern as a tool. AgentSpawn is a tool that builds a child Agent, runs it, and returns the child's final output as a tool response. That is what makes recursive agent composition work without a separate orchestration layer.

AgentSelf, AgentSpawn, AgentSend

  • AgentSelf lets an agent inspect or mutate its own state: update status, compact context, clear context, change model, inspect diagnostics, and adjust token limits.
  • AgentSpawn creates child agents with explicit tool/depth limits for isolated reviews, subtasks, and map-reduce work.
  • AgentSend delivers a message to another live named agent's inbox. This makes multi-agent coordination peer-to-peer rather than only parent-to-child.
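The AgentSend pattern amounts to a registry of named inboxes: sending is just enqueueing into a peer's queue. A self-contained sketch of that idea, with names and shapes that are assumptions rather than sagent's API:

```python
import asyncio

# Registry mapping live agent names to their inboxes.
registry: dict[str, asyncio.Queue[str]] = {}

def register(name: str) -> asyncio.Queue[str]:
    """Give a named agent an inbox other agents can address."""
    registry[name] = asyncio.Queue()
    return registry[name]

async def send(to: str, text: str) -> None:
    """Deliver a message to the named peer's inbox."""
    await registry[to].put(text)

async def demo() -> str:
    reviewer = register("reviewer")
    await send("reviewer", "please review commit abc123")
    return await reviewer.get()
```

Delivery wakes the receiving agent exactly as user input would, which is what makes the coordination peer-to-peer rather than strictly hierarchical.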

Security and privacy

Sagent is an agent runtime, not a sandbox. Enabled tools run with the current process permissions: Bash executes local commands, file tools read and write accessible paths, and provider/network tools send data to their configured services. Sessions are plaintext local state and may contain prompts, model responses, tool results, file snippets, and paths.

Use narrow tool sets, pass --no-session-persistence for one-off sensitive prompts so sessions and auto-memory are disabled, and run Sagent inside your own OS/container sandbox when a task needs hard isolation. See Security.

Current scope

Sagent does not currently include:

  • MCP integration;
  • LSP integration;
  • native sandboxing;
  • a desktop UI;
  • a tree-sitter repo map;
  • a hosted service;
  • browser automation.

Adjacent projects

This comparison focuses on the runtime shape rather than every feature of each project.

Sagent aider LangChain OpenClaw Cline Claude Code Codex CLI Gemini CLI
Python library 🟡
Multi-provider
Context compaction 🟡 🟡 🟡
User-initiated backend swap
Agent-initiated backend swap 🟡
Agent self-mutation
Context hot-swap 🟡 🟡 🟡 🟡
Recursive agent spawn 🟡 🟡 🟡
Multi-agent (fully detached) 🟡 🟡
GitHub stars (May 2026) -- 44.4k 135.8k 368.6k 61.4k -- 80.1k 103.2k

✅ = yes, 🟡 = partial, ❌ = no. Corrections welcome -- open a PR.

How each project works

aider -- Git-native pair programmer. The LLM emits markdown-formatted edits (14 edit formats) and aider parses them -- there is no structured tool calling. All providers route through litellm as a single string-addressed transport. /model switches the backend mid-session by raising SwitchCoder, which reconstructs the entire Coder object; conversation history carries over but the swap is destructive. A tree-sitter repo map with PageRank ranking provides structural code awareness that Sagent lacks. No multi-agent capabilities beyond a synchronous Architect-to-Editor handoff. Importable via Coder.create() but the scripting API is explicitly unsupported and may change without notice.

LangChain/LangGraph -- Broad Python application framework for LLM pipelines. Multi-provider, multi-agent (via LangGraph state machines), and fully programmatic. Context compaction, backend swapping, and agent self-mutation are all possible but application-defined rather than built-in -- the framework provides building blocks, not an opinionated agent loop. Sagent is a smaller, more opinionated runtime with typed protocols, a concrete inbox loop, and built-in session persistence.

OpenClaw -- Multi-platform personal assistant (desktop, mobile, web) with multi-provider and multi-agent support. Agents coordinate across channels but the system is oriented toward end-user assistant workflows rather than developer tooling. TypeScript-based, not available as a Python library.

Cline -- VS Code extension with multi-provider support. Users can switch models in the settings panel mid-conversation, but the extension is not importable as a library. Single-agent with no spawn or coordination primitives. Context management is truncation-based rather than structured compaction.

Claude Code (Anthropic) -- Closed-source vendor CLI with strong tool-use capabilities and structured context compaction. Agents can spawn recursive sub-agents and compact their own context, but cannot switch providers (Anthropic-only) or dynamically adjust token limits. Not available as a Python library; the SDK is JavaScript. No user-initiated backend swap since there is only one backend.

Codex CLI (OpenAI) -- Rust-based CLI locked to OpenAI models. Single-agent, single-provider, no compaction, no programmatic API. Clean local-execution model with sandboxing, but no extensibility surface for custom tools, provider swapping, or multi-agent coordination.

Gemini CLI (Google) -- TypeScript CLI locked to Google models. Has context compaction via summarization. Single-agent, single-provider, no programmatic API, no custom tool protocol. Designed as a terminal interface for Gemini, not as a composable runtime.

Architecture map

Module Role
bin/cli.py Terminal entry point
bin/slack.py Slack Socket Mode entry point
agent/ Turn loop, retry, dispatch, sessions
compactor.py Structured compaction and prompt-too-long retry
custom_types.py Message, Tool, Model, Provider protocols
providers/ Anthropic, OpenAI, Google, Moonshot, DashScope, MiniMax, OpenAI-compatible
tools/ Built-in tools for files, shell, web, search, and agent coordination
repl/ prompt_toolkit REPL and diff rendering
sessions.py Per-cwd session storage
prompt.py System prompt assembly

Name

sagent (noun, neologism) /ˈseɪ.dʒənt/

From sage + agent.

An AI assistant that confidently performs a task you didn't ask for while ignoring the one you did.

"I asked the sagent to fix one failing test -- it deleted the test and reported all green."

Contributing

See CONTRIBUTING.md for local validation and public contribution flow.

License

Apache License 2.0
