Command-line client for agent.ai — search, describe, and chat with AI agents from your terminal.
agentai — agent.ai from your terminal
A polished, hand-crafted command-line client for agent.ai. Search the marketplace, run any of the 5,000+ agents, drive any of the 175+ public actions, build new agents with the LLM-powered assistant, manage your Composio integrations, and pipe everything into scripts — all through the public REST API. Designed to feel like Claude Code or Codex CLI in the terminal.
$ agentai
▌ agent.ai
▌ command-line v0.1.0
user Andrei Z @andrei
email andrei@example.com
tier paid (active)
weekly 3 / 25 runs
monthly 17 / 200 runs
key signed in (keyring · 9f05…f0ab)
base https://api-lr.agent.ai/v1
theme modern
health ok
agentai search 'sales prospecting' find an agent
agentai run fluximage run an agent (interactive)
agentai chat OmniAgent — multi-LLM chat
agentai mine list agents you've built
agentai actions list browse the 175+ actions catalog
agentai actions run invoke_llm interactive prompt-fill + run
agentai --help all commands and flags
Status Beta. The CLI is feature-complete for everything reachable by the public API key (auth, discovery, run, chat, build, runs history, Composio, projects). Feature-by-feature parity with the website is tracked in docs/FEATURES.md.
Table of contents
- Why this CLI
- Install
- Quickstart
- Command index
- Authentication
- Discovery: `search`, `mine`, `describe`
- Running agents: `run` & `chat`
- OmniAgent chat (multi-LLM, projects, tools)
- Run history: `runs`, `open`
- Action catalog: `actions *`
- Builder: `builder new/edit`
- Composio integrations: `composio *`
- Configuration: `config`, env vars, files
- Output formats & scripting
- Themes & UX polish
- Entitlements & analytics
- Exit codes
- Architecture (brief)
- Development
- What's web-only today
- Roadmap
- License
Why this CLI
| Capability | What you get |
|---|---|
| Single binary | pipx install agentai-cli, then agentai is on your PATH. |
| API-key end-to-end | Every feature works with your personal API key. No browser dance, no Auth0 popups, no hidden web-only routes — paste a key once and you're done. Proper dual-auth was added to every route the CLI touches (/me, /agent_runs, builder, Composio, projects, /agent/run). |
| Web parity, not "API parity" | When you agentai run gpt-5, the run shows up in /user/agent-runs exactly like a web run. The ?rid=<run_id> link in the output loads the same trace the web shows. Entitlements (free-tier weekly limits, paid-only agents) are enforced server-side via the same EntitlementService the web uses. |
| First-class scripting | Every command supports --json, every command exits with stable, documented codes (incl. 7 for entitlement-denied with the schema payload). Pipes work: cat article.txt | agentai run summarizer --stdin. |
| Schema-driven UX | Interactive prompts read the OpenAPI spec — descriptions, enums, defaults, types — and render numbered choice menus, y/n booleans, validated numbers. No more guessing what an action's llm_engine accepts. |
| OmniAgent-style chat | agentai chat opens a multi-LLM REPL with /model, /projects, /connections, /tools slash commands that work inline — no quitting to the shell. |
| No magic | Every endpoint lives in src/agentai/endpoints.py. Every error has a typed exception. Every output goes through format_response.py so HTML reports, image responses, JSON payloads, Markdown all render cleanly. |
Designed in the spirit of claude and codex: rounded panels,
rotating thinking phrases, persistent prompt history, did-you-mean
hints, OSC-8 hyperlinks for run ids, three themes (modern /
minimal / ascii).
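The OSC-8 hyperlinks mentioned above are a standard terminal escape sequence (`ESC ] 8 ; ; URI ST … ESC ] 8 ; ; ST`), not anything agent.ai-specific. A minimal sketch of emitting one (the URL shown is illustrative):

```python
def osc8(url: str, label: str) -> str:
    """Wrap `label` in an OSC-8 hyperlink so supporting terminals
    (iTerm2, Ghostty, WezTerm, ...) render it as clickable text."""
    return f"\x1b]8;;{url}\x1b\\{label}\x1b]8;;\x1b\\"

print(osc8("https://agent.ai/agent/fluximage?rid=abc123", "run abc123"))
```

Terminals without OSC-8 support simply print the label, so the fallback is harmless.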
Install
pipx install agentai-cli # recommended (isolated, on PATH)
uv tool install agentai-cli # fastest
pip install --user agentai-cli # universal fallback
brew install agent-ai/tap/agentai # macOS / Linuxbrew
curl -fsSL https://agent.ai/cli/install.sh | sh # auto-detect (pipx → uv → pip → static binary)
Requires Python 3.11 or newer. No Python at all? Grab a static binary for your platform from the GitHub releases — macOS (arm64 + x86_64), Linux x86_64, Windows x86_64 are all built on every tag.
First-run sanity check:
agentai --version
agentai login # browser → paste key → keyring
agentai status # one-glance dashboard
Quickstart
# 1. Sign in (one-time)
agentai login # browser → paste key → OS keyring
# 2. Discover
agentai search "image generation" # marketplace search, ranked + sorted
agentai mine # your private + team agents
# 3. Run agents — interactive by default, scripted on demand
agentai run fluximage # opens the REPL (default)
agentai run fluximage -i image_prompt='a corgi at the beach' --json
agentai run summarizer --from-file article.json
echo "summarize this…" | agentai run summarizer --stdin
# 4. Chat — OmniAgent-style multi-LLM
agentai chat # gpt-5-mini default
agentai chat --model claude-opus-4-7
agentai chat --with-agent fluximage --with-composio
# 5. Drive any of the 175+ catalog actions directly
agentai actions list # browse
agentai actions run invoke_llm \
-i instructions='Say hello in one word' \
-i llm_engine=claude-opus-4-7 \
--skip-optional --json
# 6. Build agents from a description
agentai builder new "Summarize the top story on Hacker News every morning"
# 7. Manage Composio integrations
agentai composio list
agentai composio enable github
# 8. Inspect your run history
agentai runs --since 2026-04-01
agentai open <run_id> # opens the runner page in the browser
Command index
Every command supports `--help` for full flag/arg details, and docs/COMMANDS.md is the long-form reference.
| Surface | Commands | What it does |
|---|---|---|
| Auth + identity | `login`, `logout`, `whoami`, `status`, `version` | OS-keyring storage, identity dashboard with name / email / tier / weekly + monthly usage. |
| Discovery | `search`, `mine`, `describe` | Marketplace search (Algolia + client-side runs → rating sort), your own private/team agents, full input schema for any agent. |
| Run agents | `run`, `chat` | Interactive REPL by default; scripted one-shot when `--once`/`--json`/`--stdin`/`--from-file` is set or stdin isn't a TTY. |
| OmniAgent chat | `chat` (no agent), `open` | Multi-LLM chat with inline `/model`, `/projects`, `/connections`, `/tools` commands. `open` deep-links to runner pages. |
| Run history | `runs`, `open` | List with pagination, agent/date filters, relative timestamps, OSC-8 hyperlinks. |
| Action catalog | `actions list/search/describe/run/tags/path/update` | Browse + run any of the 175+ public actions; schema-driven prompts. |
| Builder | `builder new`, `builder edit` | LLM-powered authoring REPL with `/save`, `/run`, `/diff`, `/describe`, `/url`, `/web`, `/rename`, `/inspect`, `/clear`. |
| Composio | `composio list/enable/disable/connect/tools/execute` | Same toggle store as the web UI; `tools` previews LLM-planned envelopes. |
| Settings | `config get`, `config set`, `config path` | XDG-style config file, theme switching, base-URL overrides for staging. |
Global flags (work on every command)
| Flag | Env var | Notes |
|---|---|---|
| `--api-key TEXT` | `AGENTAI_API_KEY` | Override the stored key for this invocation. |
| `--base-url URL` | `AGENTAI_BASE_URL` | Public-API host (default `https://api-lr.agent.ai/v1`). |
| `--internal-base-url URL` | `AGENTAI_INTERNAL_BASE_URL` | Flask host serving `/api/v2/*` + `/api/v3/*` (default `https://api.agent.ai`). |
| `--theme {modern,minimal,ascii}` | `AGENTAI_THEME` | Visual theme. |
| `--no-color` | `NO_COLOR` | Disable ANSI colour. |
| `--timeout FLOAT` | — | HTTP timeout in seconds (default 120). |
| `--version`, `-V` | — | Print version and exit. |
| `--help`, `-h` | — | Per-command help. |
Authentication
The CLI uses your personal agent.ai API key (not Auth0). Every
internal route the CLI touches goes through the
require_auth_or_api_key decorator on the backend, so the same key
authenticates the public action API, the OmniAgent endpoints, the
builder, the runs page, and the projects API.
agentai login # browser auto-opens; paste key
agentai login --no-browser # headless: just prompt
agentai logout # clear from keyring
agentai status # session dashboard (alias: agentai whoami)
agentai status --json # machine-readable
status resolves a single round-trip envelope from /api/v2/users/me
and renders:
user Andrei Z @andrei
email andrei@example.com
tier paid (active)
weekly 3 / 25 runs
monthly 17 / 200 runs
key signed in (keyring · 9f05…f0ab)
base https://api-lr.agent.ai/v1
theme modern
health ok
Storage: OS keyring (Keychain / Credential Manager / Secret
Service). Falls back to a 0600-mode file at
~/.config/agentai/credentials.toml on headless boxes.
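The headless fallback described above can be sketched as follows. This is a minimal illustration, not the CLI's actual code; the TOML field name is an assumption:

```python
import os
from pathlib import Path

def write_credentials(path: Path, api_key: str) -> None:
    """Persist the API key in a file readable only by the owner.
    Creating the file with mode 0600 via os.open avoids a window
    where other users could read it before a later chmod."""
    path.parent.mkdir(parents=True, exist_ok=True)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as fh:
        fh.write(f'api_key = "{api_key}"\n')  # field name is illustrative
    path.chmod(0o600)  # enforce the mode even if the file pre-existed
```

On POSIX systems this yields `-rw-------` permissions, matching the 0600 mode noted above.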
Discovery
agentai search "competitive analysis"
agentai search --status all --limit 50 sales
agentai search "image generation" --sort rating
agentai search --json | jq '.[].function.metadata.slug'
agentai mine # private (default)
agentai mine --include-team # private + team
agentai mine "content" # filter your agents
agentai describe fluximage
agentai describe content-planner --json
search results are de-duplicated on agent_id and sorted client-side
by popular (runs → rating, the default), rating, runs, name,
or relevance (server order). describe calls describe_agent
first, then probes search (private → team → public) to merge richer
JSON-schema parameters when available — descriptions, enum values, and
defaults flow through to the prompts you'll see in run.
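The de-dup and client-side sort behaviour can be sketched roughly like this. It is an illustration only; the field names (`agent_id`, `runs`, `rating`, `name`) are assumptions about the payload shape, not the CLI's real internals:

```python
def dedupe_and_sort(agents: list[dict], key: str = "popular") -> list[dict]:
    """De-duplicate on agent_id, then sort client-side."""
    seen: set = set()
    unique = []
    for a in agents:
        if a.get("agent_id") not in seen:
            seen.add(a.get("agent_id"))
            unique.append(a)
    if key == "relevance":      # keep the server's order untouched
        return unique
    if key == "name":
        return sorted(unique, key=lambda a: a.get("name", "").lower())
    rank = {
        "popular": lambda a: (a.get("runs", 0), a.get("rating", 0.0)),
        "rating":  lambda a: a.get("rating", 0.0),
        "runs":    lambda a: a.get("runs", 0),
    }[key]
    return sorted(unique, key=rank, reverse=True)
```

Note how `popular` uses a tuple key so runs count wins first and rating breaks ties.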
Running agents
agentai run AGENT [OPTIONS]
The single agent verb. Defaults to interactive REPL when stdin is a TTY and no scripted-intent flag is set; auto-falls-back to one-shot otherwise.
# Interactive (default)
agentai run fluximage
agentai run content-planner -i topics='ai security' # prefill, then REPL
# One-shot (auto-detected)
agentai run fluximage -i image_prompt='cat' --json | jq .response
agentai run my_workflow --from-file inputs.json
echo "summarize this article…" | agentai run summarizer --stdin
# One-shot (explicit)
agentai run fluximage -i image_prompt='cat' --once
| Flag | Notes |
|---|---|
| `-i` / `--input KEY=VALUE` | Repeatable. Pre-fills the REPL or runs once when combined with `--once`. |
| `-f` / `--from-file PATH` | JSON or key=value file. Implies `--once`. |
| `--stdin` | Read a single value from stdin. Implies `--once`. |
| `-1` / `--once` | Force one-shot. |
| `--json` | Implies `--once`; emits raw envelope. |
Why agentai run ≠ "an API call"
The CLI hits POST /v1/agent/run — the proper user-runs path —
not /v1/action/invoke_agent (which is the agent-as-tool bucket).
That means:
- Runs show up in `agentai runs` and the web `/user/agent-runs` page. The run is attributed to the actual agent via `AgentFlow.execute`, not the API agent.
- The `?rid=<run_id>` URL works. Each response prints `view on web → https://agent.ai/agent/<slug>?rid=<run_id>` as an OSC-8 hyperlink (cmd-click in iTerm2, ⌘-click in Ghostty, right-click → Open URL in others). It loads the same run trace the web shows.
- Entitlements + analytics are enforced server-side. Free-tier weekly limits, paid-only agents, OmniAgent quotas — all checked. Mixpanel + PostHog + RudderStack + HubSpot all see the run with `source="cli"`, so dashboards can slice CLI vs web vs scheduler vs webhook.
REPL slash commands (inside `run` or `chat AGENT`)
| Command | Action |
|---|---|
| `/help` | List slash commands. |
| `/inputs` | Show declared agent inputs and what's already filled. |
| `/save FILE.md` | Save Markdown transcript. |
| `/export FILE.json` | Export raw turns as JSON. |
| `/clear` | Forget the in-memory transcript. |
| `/quit` · Ctrl-D | Leave. |
Bottom toolbar shows the active agent + key bindings. Ctrl-L clears
and re-renders. Esc Enter inserts a newline. Ctrl-C cancels the
current turn (without exiting).
Smart response rendering
Every response goes through the same formatter:
- Image-only HTML (`<img src="…" />` from `fluximage`, `imagegen`, `gpt-image-generation`) → unwrapped into a `file / url / open` card with a copy-pasteable `curl -O`.
- Full HTML reports → tag-stripped to readable text, links and images collected at the bottom.
- Markdown → rendered.
- JSON / dict / list → pretty-printed.
- Empty → calm `(no output)` instead of a blank panel.
- Run id surfaced in the panel title (`agent · run xxxxxxxx`).
OmniAgent chat
agentai chat (with no agent argument) opens the OmniAgent — the
same multi-LLM chat surface the web's /agent/chat page provides.
Tools are discovered automatically per-turn based on your
prompt: the server's planner picks the most relevant agents and
Composio toolkit actions, and the LLM gets them as callable
functions on the next turn. Pin extras with --with-agent /
--with-composio; freeze the catalogue with --no-auto-discover.
agentai chat # gpt-5-mini default + auto-discovery on
agentai chat --model claude-opus-4-7
agentai chat --no-auto-discover # predictable scripted sessions
agentai chat --with-agent fluximage --with-agent content-planner
agentai chat --model gpt-5.2 --with-composio # eager Composio + auto-discovery still on
| Flag | Notes |
|---|---|
| `--model` / `-m NAME` | Any of the 45+ `invoke_llm` engines. Default `gpt-5-mini`. |
| `--with-agent SLUG` | Repeatable. Pins the agent for the session — guaranteed available even if the planner doesn't surface it. Auto-discovery still runs. |
| `--with-composio` | Eagerly pre-loads every enabled Composio toolkit at startup as pinned tools. (Without the flag, Composio tools are still surfaced per-turn by auto-discovery.) |
| `--no-auto-discover` | Disables per-turn discovery. Catalogue stays frozen at the pinned set. |
Auto-discovery (the default)
Every turn runs the same three-call pipeline the web's /agent/chat
legacy orchestration uses, in parallel where possible:
1. `POST /api/v2/chat-agent/agents/plan` → server picks relevant saved / team / premium agents for your prompt.
2. `POST /api/v2/chat-agent/composio/tools` → planner narrows the Composio universe to tools that fit your prompt.
3. `POST /api/v2/agents/openapi-functions` → converts each picked agent into an OpenAI-format `invoke_agent_<id>` tool envelope with proper input parameters.
Pinned tools are always merged in. Discovery failures degrade
gracefully — a planner 401 doesn't kill the chat. Use /discover status to see what just happened, /discover now to preview the
catalogue without sending a real message, /discover off to lock
it, and /tools to see the pinned vs discovered breakdown.
Slash commands inside the OmniAgent
OmniAgent commands
/model [{name|list}] show, list, or switch the LLM engine
/tools [list] tools available this session (with provenance)
/discover [on|off|status|now]
per-turn auto-discovery of relevant agents + Composio tools
/connections [list|enable <slug>|disable <slug>|reload]
manage Composio toolkits inline
/projects [list|new <name>|use <ref>|info|clear|delete <ref>]
chat-agent project scoping (KB + custom instructions)
/clear forget the in-memory transcript
/save FILE.md save Markdown transcript
/export FILE.json export raw turns
/quit · Ctrl-D leave the chat
/model
you › /model # show current
you › /model list # list every supported LLM (read from OpenAPI spec)
you › /model claude-opus-4-7 # switch (validated; did-you-mean on typos)
Unknown engines get a Did you mean …? hint instead of silently
corrupting the model and breaking the next turn. The catalog is read
from the bundled OpenAPI spec on first use and cached for the session.
/connections
you › /connections # list (alias: /connections list)
you › /connections enable github # toggle ON + reload tools
you › /connections disable slack # toggle OFF + drop from catalogue
you › /connections reload # re-pull live tool list
Enabling a toolkit PUTs the toggle and reloads the in-session
tool catalogue so the LLM picks up the new state on the next turn.
/projects — chat-agent project scoping
you › /projects # list (active project marked with ●)
you › /projects new Customer research # create + activate
you › /projects use 2 # 1-based index from last list
you › /projects use customer # case-insensitive substring
you › /projects use proj-9f3a01… # exact id
you › /projects info # current project metadata
you › /projects clear # deactivate
you › /projects delete proj-9f3a01… # hard-delete (y/N confirm)
When a project is active, every turn:
- Calls `POST /api/v2/chat-agent/projects/kb/query` with the user's message and `top_k=5`.
- Folds the project's `custom_instructions` and the KB hits into the system prompt.
- The bottom toolbar shows `proj: <name>` so you always know the LLM is seeing project context.
Same enrichment the web's project-scoped chat performs — same KB, same custom instructions, same wire format.
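The enrichment step can be sketched as below. This is a plausible shape only; the exact section headings and wire format the backend uses are assumptions:

```python
def build_system_prompt(base: str, custom_instructions: str,
                        kb_hits: list[str]) -> str:
    """Fold project custom instructions and top-k KB passages into
    the system prompt, mirroring the project-scoped chat described
    above (section labels are illustrative)."""
    parts = [base]
    if custom_instructions:
        parts.append("Project instructions:\n" + custom_instructions)
    if kb_hits:
        snippets = "\n".join(f"- {hit}" for hit in kb_hits)
        parts.append("Relevant project knowledge:\n" + snippets)
    return "\n\n".join(parts)
```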
Tool augmentation protocol
Each turn, the CLI prepends a system instruction listing every available tool (slug, kind, description, args). The LLM either answers in plain text or emits one line of the form
[CALL_TOOL: <slug>] {"key": "value", ...}
The CLI parses the line, dispatches the tool
(/action/invoke_agent for agents — the agent-as-tool bucket
distinct from agentai run's user-runs path; composio_execute for
Composio), feeds the result back into the next prompt, and asks the
LLM for a final natural-language answer. The protocol is auditable
from the terminal and works regardless of which LLM you choose.
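Parsing the `[CALL_TOOL: …]` line is straightforward. A minimal sketch, assuming the slug uses word characters, dots, or hyphens and the arguments are a single JSON object:

```python
import json
import re

CALL_RE = re.compile(
    r"\[CALL_TOOL:\s*(?P<slug>[\w.-]+)\]\s*(?P<args>\{.*\})\s*$",
    re.DOTALL,
)

def parse_tool_call(reply: str):
    """Return (slug, args) when the LLM emitted a tool-call line,
    or None for a plain-text answer or malformed JSON."""
    m = CALL_RE.search(reply)
    if not m:
        return None
    try:
        return m.group("slug"), json.loads(m.group("args"))
    except json.JSONDecodeError:
        return None
```

Treating malformed JSON as "no call" keeps the chat resilient: the reply is just rendered as text instead of crashing the turn.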
Run history
agentai runs # last 25
agentai runs "weekly digest" # search
agentai runs --agent content-planner --since 2026-04-01
agentai runs -n 50 -p 2 # 50 per page, page 2
agentai runs --json | jq '.data[].run_id'
# Open a run in the browser
agentai open <run_id> # auto-resolves the agent
agentai open <run_id> --agent fluximage # explicit
agentai open fluximage # → /agent/fluximage (no rid)
agentai open <slug> --print # print the URL, don't open
| Flag | Notes |
|---|---|
| `--agent` / `-a SLUG` | Resolves slug → id client-side. |
| `--page` / `-p N` | 1-indexed. |
| `--per-page` / `-n N` | Up to 100. |
| `--since YYYY-MM-DD` | UTC. |
| `--until YYYY-MM-DD` | UTC. |
| `--json` | Raw envelope. |
The When column renders compact relative timestamps (5m ago,
3h ago, 2d ago). Run ids are OSC-8 hyperlinks — cmd-click to
open the runner page directly.
Action catalog
Every public POST /action/<NAME> endpoint is browsable, searchable,
and runnable from the CLI. The catalog ships bundled (so the CLI
works offline) and can be refreshed against the live OpenAPI spec.
agentai actions list # all 175+, grouped by tag
agentai actions list --tag "AI & Content Generation"
agentai actions list --flat --limit 10
agentai actions search "search results"
agentai actions search summarize --limit 5
agentai actions describe get_search_results
agentai actions describe invoke_llm --json
agentai actions run get_search_results # full interactive
agentai actions run invoke_llm \
-i instructions='Say hello in one word' \
-i llm_engine=claude-opus-4-7 \
--skip-optional --json
agentai actions run grab_web_text \
-i url=https://example.com --skip-optional
agentai actions tags # tag → action count
agentai actions update # refresh catalog from server
agentai actions update --from https://staging.../openapi.json
agentai actions path # ~/.config/agentai/openapi.json
| `actions run` flag | Notes |
|---|---|
| `-i` / `--input KEY=VALUE` | Repeatable. |
| `-f` / `--from-file PATH` | JSON or key=value. |
| `--skip-optional` | Don't prompt for optional fields; use defaults. |
| `--json` | Raw response. |
Schema-driven prompts: enums become numbered choice menus; booleans
become y/n; integers and numbers honour minimum/maximum; arrays
accept comma-separated values or JSON; unknown fields drop with a
warning before they hit the wire. Did-you-mean fires on typos:
agentai actions describe invoke_lmm →
"Did you mean: invoke_llm, invoke_agent?".
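Did-you-mean behaviour of this kind can be had from the standard library. A plausible sketch using `difflib` (the CLI's actual scoring may differ):

```python
import difflib

def did_you_mean(name: str, known: list[str], n: int = 2) -> list[str]:
    """Return up to n close matches for a mistyped name."""
    return difflib.get_close_matches(name, known, n=n, cutoff=0.6)
```

With a catalog containing `invoke_llm` and `invoke_agent`, the typo `invoke_lmm` surfaces both, in that order, reproducing the hint shown above.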
Builder
LLM-powered agent authoring without leaving the terminal. Calls the
same /api/v2/generative_agent/* endpoints the web builder uses.
agentai builder new "Summarize the top story on Hacker News every morning"
agentai builder edit my-news-agent
Slash commands inside the builder
| Command | Action |
|---|---|
| `/save [flags]` | Create or update the agent. Flags: `--name 'X'`, `--description 'X'`, `--public` / `--private` / `--team`. |
| `/run [-i k=v …]` | Run the saved agent — delegates to the same shared runner `agentai run <slug>` uses. |
| `/describe` | Render the standard agent card (name, slug, inputs, outputs, web URL). |
| `/url` | Print the runner-page URL. |
| `/web` | Open the runner page in the default browser. |
| `/inspect` | Pretty-print the proposed actions JSON. |
| `/diff` | `+` / `-` / `~` action changes since last save. |
| `/rename '<new name>'` | Rename without saving. |
| `/clear` | Discard the in-memory draft (confirms when dirty). |
| `/help`, `/quit` | List commands · leave (confirms when dirty). |
The bottom toolbar shows the draft name + slug + visibility +
✓ saved / ⚠ unsaved flag + the backend host you're talking to —
useful when bouncing between --internal-base-url overrides during
local backend development. After save, prints the runner URL + a
ready-to-paste agentai run <slug> command.
Composio integrations
Same toggle store as the web UI. OAuth flows can't run in a TTY, so
composio connect deep-links to the integrations page.
agentai composio list # connected toolkits + chat-toggle state
agentai composio connect # → https://agent.ai/user/integrations
agentai composio enable github
agentai composio disable slack
# Preview the LLM-planned tool envelopes for a prompt (planner runs server-side)
agentai composio tools "post a slack update with our weekly metrics"
# Smoke-test a single tool directly
agentai composio execute github_search_repositories -i query=agentai
agentai composio execute slack_post_message --from-file payload.json --json
composio list shows the chat-toggle state per toolkit; composio tools is the same planner the OmniAgent uses internally — useful for
seeing exactly which tools an LLM would receive for a given prompt.
Configuration
Config file
Lives at ~/.config/agentai/config.toml (resolved through
platformdirs so it works on macOS, Linux, and Windows).
agentai config get # dump everything
agentai config get base_url
agentai config set theme ascii # switch theme
agentai config set default_status all # search private + team by default
agentai config set base_url https://api-lr-staging.agent.ai/v1
agentai config path # show file location
Known keys: base_url, internal_base_url, web_base_url,
default_status, theme, timeout, telemetry_enabled.
Environment variables (override the file)
| Variable | Default | Effect |
|---|---|---|
| `AGENTAI_API_KEY` | — | Use this key for the session. |
| `AGENTAI_BASE_URL` | `https://api-lr.agent.ai/v1` | Public-API host. |
| `AGENTAI_INTERNAL_BASE_URL` | `https://api.agent.ai` | Internal Flask host (`/api/v2/*` + `/api/v3/*`). |
| `AGENTAI_WEB_BASE_URL` | `https://agent.ai` | Web origin for runner / profile URLs. |
| `AGENTAI_THEME` | `modern` | `modern`, `minimal`, `ascii`. |
| `NO_COLOR` | unset | Any value disables ANSI colour. |
File locations
| Path | Contents |
|---|---|
| `~/.config/agentai/config.toml` | Non-secret prefs. |
| `~/.config/agentai/credentials.toml` | API key (only when keyring is unusable; mode 0600). |
| `~/.config/agentai/openapi.json` | User-local action catalog override (after `actions update`). |
| `~/.local/state/agentai/history/` | Per-agent and per-session prompt history. |
Output formats & scripting
Everything that prints panels in interactive mode also has a
--json mode that emits the raw envelope. Use it in CI:
# Capture a run as JSON
agentai run my_agent -i message=hi --once --json > result.json
jq -r .response result.json
# Search + filter to slugs
agentai search "image" --json | jq -r '.[].function.metadata.slug'
# Branching on exit code
if ! agentai run my_agent -i x=y --once --json > out.json; then
case $? in
2) echo "auth — refresh AGENTAI_API_KEY"; exit 1 ;;
4) echo "agent missing — slug changed?"; exit 1 ;;
5) echo "rate-limited; retry later" ;;
7) echo "entitlement — upgrade or wait for reset" ;;
*) echo "unexpected error"; exit 1 ;;
esac
fi
--json implies --once for run (so you never accidentally
deadlock on a missing TTY). --stdin and --from-file also imply
--once.
Themes & UX polish
- Welcome screen when you run `agentai` with no args.
- Status dashboard showing identity, tier, usage, key source, and API health in one glance.
- Rotating thinking phrases (Drafting…, Reviewing…, Composing…) with elapsed-time footer, like Claude Code's `✻ X.Ys`.
- Bottom toolbar in chat / builder / OmniAgent showing active agent / model / project + key bindings.
- Persistent history — up-arrow + tab work like fish shell; per-agent and per-session files under `~/.local/state/agentai/`.
- Did-you-mean suggestions on typos for agents, actions, and models.
- Schema-driven prompts — enums → numbered choice menus, bool → y/n, integers/numbers honour min/max.
- Three themes (`modern`, `minimal`, `ascii`).
- Smart response rendering — image cards, HTML stripping, Markdown rendering, calm empty state.
- Runner-page URLs as OSC-8 hyperlinks, after every run and in every `agentai runs` row.
- Descriptive prompts — `linkedin_url` becomes "LinkedIn URL", `image_prompt` becomes "Image prompt"; OpenAPI descriptions lead the prompt label, with the technical key as a hint underneath.
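The field-name humanisation can be sketched with a small lookup for cased words. This is an illustration of the idea behind `humanize_field`, not its actual implementation; the acronym table is an assumption:

```python
CASED = {"url": "URL", "id": "ID", "api": "API", "llm": "LLM",
         "html": "HTML", "linkedin": "LinkedIn"}

def humanize_field(key: str) -> str:
    """Turn a snake_case input key into a friendly prompt label."""
    words = key.replace("-", "_").split("_")
    out = []
    for i, word in enumerate(words):
        lower = word.lower()
        if lower in CASED:
            out.append(CASED[lower])
        else:
            out.append(lower.capitalize() if i == 0 else lower)
    return " ".join(w for w in out if w)
```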
Entitlements & analytics
The CLI hits the same enforcement points the web does, no shortcuts:
- Per-agent entitlements. `EntitlementService.check_agent_access` runs server-side inside `api_agent_run_internal` before any execution. Denials return HTTP 402 with the `EntitlementSchema` payload — the same shape the web's pricing modal consumes. The CLI maps that to a typed `EntitlementError` (exit code `7`) and renders a structured panel with remaining runs, reset time, and a "manage subscription" link.
- Free-tier weekly + monthly run caps are enforced by the same `BillingService` the web uses; `agentai status` shows your usage.
- Canonical analytics (`Agent Run Initiated`, `Agent Run Completed`) fire from the same `utils/analytics.py` helpers `AgentRunSteps.post_save` already uses. The CLI sets a `User-Agent: agentai-cli/<version>` header, and the v1 route's `_detect_run_source` translates that into `source="cli"` so Mixpanel / PostHog / RudderStack / HubSpot dashboards can slice CLI usage separately from web / scheduler / webhook / partner.

There is no way to bypass entitlements from the CLI; the only opt-out is `enforce_entitlements=False`, available only to internal/admin tooling like the runs simulator.
Exit codes
The CLI uses these codes consistently — script with confidence:
| Code | Meaning | Source |
|---|---|---|
| 0 | Success | — |
| 1 | Generic error | `AgentAIError` |
| 2 | Missing or invalid auth / argument | `AuthError`, bad flag value |
| 4 | Resource not found (agent, action) | `NotFoundError` |
| 5 | Rate-limited after retries exhausted | `RateLimitedError` |
| 6 | Network / connection error | `NetworkError` |
| 7 | Entitlement denied — paid agent without subscription, or free-tier limit hit | `EntitlementError` (HTTP 402; panel shows remaining runs / reset / upgrade link) |
| 130 | Cancelled (Ctrl-C / SIGINT) | — |
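The exception-to-exit-code mapping in the table can be sketched as a class hierarchy. The exception names come from the table above; the `exit_code` attribute pattern is a plausible sketch, not the CLI's confirmed internals:

```python
class AgentAIError(Exception):
    exit_code = 1          # generic error

class AuthError(AgentAIError):
    exit_code = 2

class NotFoundError(AgentAIError):
    exit_code = 4

class RateLimitedError(AgentAIError):
    exit_code = 5

class NetworkError(AgentAIError):
    exit_code = 6

class EntitlementError(AgentAIError):
    exit_code = 7

def exit_code_for(exc: BaseException) -> int:
    """Map an exception to the documented process exit code."""
    if isinstance(exc, KeyboardInterrupt):
        return 130         # shell convention: 128 + SIGINT(2)
    if isinstance(exc, AgentAIError):
        return exc.exit_code
    return 1
```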
Architecture (brief)
For a full code-level orientation see
docs/ARCHITECTURE.md. Three principles:
1. `endpoints.py` is the only place URLs live. Refactor diffs are grep-friendly and tests share the same constants.
2. `api.py` is the only HTTP client. Three logical APIs (public actions / internal v2 / internal v3) behind one `AgentAIClient` with shared retries, error mapping, and auth.
3. `cli.py` is thin glue. Business logic lives in helpers (`runner`, `auth`, `builder`, `omniagent`, `composio_mod`, `format_response`, …) so it stays import-cheap and testable without standing up Typer.
Three hosts, three roles:
| Host | Default | What it serves |
|---|---|---|
| `base_url` | `https://api-lr.agent.ai/v1` | Public API: `/action/*` + `/agent/run` (proper user-runs path). |
| `internal_base_url` | `https://api.agent.ai` | Flask backend for `/api/v2/*` + `/api/v3/*` — OmniAgent / builder / composio / runs / projects / users/me. |
| `web_base_url` | `https://agent.ai` | Next.js frontend; used for runner / profile / integrations URLs. |
Two ways to run an agent
| Method | Backend path | Run record | Visible in /user/agent-runs? |
|---|---|---|---|
| `client.invoke()` | `POST /v1/agent/run` | Attributed to the actual agent | ✅ yes |
| `client.invoke_action()` | `POST /v1/action/invoke_agent` | Attributed to the API agent | ❌ filtered out |
The CLI uses invoke() for agentai run / chat AGENT / builder
/run. invoke_action() is reserved for OmniAgent's tool
dispatcher (genuinely "agent calls another agent") so those calls
don't pollute your runs page.
Development
cd packages/agentai-cli
python3 -m venv .venv # Python 3.11+ required
source .venv/bin/activate
pip install -e ".[dev]"
pytest # 280+ tests
ruff check src tests
Project layout:
src/agentai/
├── __init__.py # version
├── __main__.py # `python -m agentai`
├── _options.py # reusable Typer Annotated option types
├── actions.py # ActionRegistry: parses bundled OpenAPI spec
├── api.py # AgentAIClient — single HTTP client, three hosts
├── auth.py # login / logout / whoami flows
├── branding.py # themes + accent palette + glyphs
├── builder.py # LLM-powered agent builder REPL
├── cli.py # Typer commands — thin wiring only
├── composio.py # Composio envelope parsing helpers
├── config.py # config.toml + keyring + URL resolution
├── data/openapi.json # bundled action catalog snapshot
├── endpoints.py # ⭐ single source of truth for every URL path
├── errors.py # typed exceptions with exit codes
├── format_response.py # detect + render agent response payloads
├── inputs.py # key=value parsing + humanize_field
├── omniagent.py # /agent/chat-style multi-LLM REPL
├── prompts.py # schema-aware action input prompting
├── runner.py # run_once + chat REPL for agents + run_action
└── ui.py # Rich panels, tables, spinners, banners
Tests live in tests/test_*.py (280+ cases) with shared mock helpers
in tests/_helpers.py. The mock_lookup_agent(httpx_mock, …)
helper covers the typical describe_agent + search lookup path
used by 90% of agent-running tests.
Maintainers: the release workflow, version-bump tooling, and publishing checklists live in
docs/RELEASING.md.
What's web-only today
These live exclusively in the web UI; there's no CLI surface yet:
- Visual workflow canvas (drag-drop action editing, branching).
- API key generation and rotation (security boundary — generate at https://agent.ai/user/integrations#api).
- Email verification, payment / billing portal (PCI-scoped).
- Team management (invite, roles, billing seats).
- Marketplace browse + reviews UI (we render review counts and scores; we don't post reviews).
- Analytics dashboards (per-agent run trends, error analysis, evals).
- OAuth flows for connecting Composio integrations — `agentai composio connect` deep-links to the web settings.
- Streaming token-by-token output — `invoke_llm` and `invoke_agent` are sync today; a spinner with elapsed time + rotating verbs shows while in flight. Will switch to streaming when the API exposes it.
Full breakdown in docs/FEATURES.md.
Roadmap
What's coming next (each will keep the same dual-auth + theming + prompts UX):
- `agentai runs <run_id>` — show the per-step trace + outputs of a single run.
- Project file uploads via `agentai chat` → `/projects upload FILE`.
- `agentai builder publish SLUG --visibility public` — flip an agent's `shareSetting` from the CLI.
- Streaming `invoke_llm` + `invoke_agent` — token-by-token rendering when the API gains it.
- `agentai team list / invite / remove` — once the backend exposes team management to API keys.
Track progress via docs/FEATURES.md and the implementation plan.
License
MIT. See LICENSE.