
Command-line client for agent.ai — search, describe, and chat with AI agents from your terminal.


agentai — agent.ai from your terminal

A polished, hand-crafted command-line client for agent.ai. Search the marketplace, chat with any agent, run any of the 175+ public actions, build new agents with the LLM-powered assistant, manage your Composio integrations, and pipe everything into scripts — all through the public REST API. Designed to feel like Claude Code or Codex CLI in the terminal.

$ agentai

▌ agent.ai
▌ command-line  v0.1.0

  user    Andrei Z   @andrei
  email   andrei@example.com
  tier    paid  (active)
  weekly  3 / 25 runs
  monthly 17 / 200 runs
  key     signed in  (keyring · 9f05…f0ab)
  base    https://api-lr.agent.ai/v1
  theme   modern
  health  ok

    agentai search 'sales prospecting'    find an agent
    agentai run fluximage                 run an agent (interactive)
    agentai mine                          list agents you've built
    agentai actions list                  browse the 175+ actions catalog
    agentai actions run invoke_llm        interactive prompt-fill + run
    agentai --help                        all commands and flags


Install

pipx install agentai-cli                         # recommended (isolated, on PATH)
uv tool install agentai-cli                      # fastest
pip install --user agentai-cli                   # universal fallback
brew install agent-ai/tap/agentai                # macOS / Linuxbrew
curl -fsSL https://agent.ai/cli/install.sh | sh  # auto-detect

No Python? Grab a static binary for your platform from the GitHub releases and drop it on your PATH.

Quickstart

agentai login                              # browser → paste key → keyring
agentai status                             # name + tier + usage dashboard

# Discover
agentai search "image generation"
agentai mine

# Run agents
agentai run fluximage                      # interactive REPL (default)
agentai run fluximage -i image_prompt='cat' --json

# Run any of the 175+ catalog actions
agentai actions run invoke_llm \
  -i instructions='Say hello in one word' \
  -i llm_engine=claude-opus-4-7 --skip-optional

# OmniAgent chat — multi-LLM, conversation-aware
agentai chat
agentai chat --model claude-opus-4-7
agentai chat --with-agent fluximage --with-composio

# Build agents from a description
agentai builder new "Summarize the top story on Hacker News every morning"

# Manage Composio integrations
agentai composio list
agentai composio enable github

# Inspect your run history
agentai runs --since 2026-04-01

The full command index is in docs/COMMANDS.md.

What's in the box

  • Auth + identity dashboard (login, logout, status) — OS keyring; name/email/tier/usage in one round-trip.
  • Discover agents (search, mine, describe) — Algolia-ranked, plus client-side runs/rating sort and did-you-mean.
  • Run agents (run — interactive default plus scripted one-shot; chat as alias) — mode auto-detected from --json / --stdin / --from-file / non-TTY.
  • Run history (runs) — pagination, agent/date filters, relative timestamps.
  • Action catalog (actions list/search/describe/run/tags/update) — all 175+ catalog actions, schema-driven prompts.
  • Builder (builder new/edit) — LLM-powered authoring with /save, /run, /diff.
  • OmniAgent chat (chat with no agent) — multi-LLM, --with-agent, --with-composio.
  • Composio (composio list/enable/disable/connect/tools/execute) — same toggle store as the web UI.
  • Settings (config get/set/path) — XDG-style config plus theme switching.

For the full feature ↔ website matrix, see docs/FEATURES.md.
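The did-you-mean behaviour mentioned above can be sketched with the standard library's difflib — an assumption about the approach, not a description of the CLI's actual code, and the slug list here is invented for illustration:

```python
import difflib

# Hypothetical slice of the agent catalog.
AGENT_SLUGS = ["fluximage", "imagegen", "gpt-image-generation", "sales-prospector"]

def did_you_mean(query: str, candidates: list[str], n: int = 3) -> list[str]:
    """Return up to n close matches for a slug that failed to resolve."""
    return difflib.get_close_matches(query, candidates, n=n, cutoff=0.6)

print(did_you_mean("fluxmage", AGENT_SLUGS))  # → ['fluximage']
print(did_you_mean("zzz", AGENT_SLUGS))       # → []
```

The cutoff keeps wildly different names out of the suggestion list, so a genuine typo gets one or two candidates rather than the whole catalog.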

What it can't do (today)

The CLI is API-key-authenticated end-to-end, but a few features remain web-only, either by design or because the surface hasn't been built yet:

  • Visual workflow canvas — drag-drop builder UX is web-only.
  • Project rooms / saved-conversation persistence / KB attachments for the OmniAgent — server-side state is web-only today; /projects inside the CLI prints honest "coming soon" placeholders.
  • OAuth flows for connecting Composio integrations — agentai composio connect deep-links to https://agent.ai/user/integrations; finish in the browser, then run agentai composio list.
  • Streaming token-by-token output — invoke_llm and invoke_agent are synchronous today; a spinner with elapsed time and rotating verbs shows while a call is in flight. Will switch to streaming when the API exposes it.
  • API key generation / rotation, billing, team management — live in the web settings.

Full breakdown in docs/FEATURES.md.

Polish

The CLI is built to feel like a hand-crafted tool, not a script:

  • Welcome screen when you run agentai with no args.
  • Status dashboard showing identity, tier, usage, key source, and API health in one glance.
  • Rotating thinking phrases (Drafting…, Reviewing…, Composing…) with elapsed-time footer, like Claude Code's ✻ X.Ys.
  • Bottom toolbar in chat with the active agent and key bindings.
  • History-based autosuggest so up-arrow + tab work like fish.
  • Did-you-mean suggestions when an agent or action name doesn't resolve.
  • Schema-driven prompts for actions: enums become numbered choices, booleans become y/n, integers/numbers honour minimum/maximum.
  • Three themes (modern, minimal, ascii).
  • Smart response rendering. Image-only HTML (<img src="…" /> — the shape returned by fluximage, imagegen, gpt-image-generation, etc.) is unwrapped into a clean file / url / open card with a copy-pasteable curl -O command. Full HTML reports get tag-stripped to readable text. Markdown stays Markdown. Empty responses show a calm (no output) instead of a blank panel.
  • CLI hint in chat. Every chat session prints the equivalent non-interactive form so you never have to re-discover the flags.
  • Runner-page URL after every run. Each agent response shows view on web → https://agent.ai/agent/<slug>?rid=<run_id> so a cmd-click takes you straight to the same execution on the web. Run ids in agentai runs are OSC-8 hyperlinks. agentai open <slug> and agentai open <run_id> --agent <slug> open the right URL directly.
  • Descriptive prompts. Interactive input collection leads with the human-readable question — the OpenAPI description for actions, or a humanized form of the variable name for agents (image_prompt → "Image prompt", linkedin_url → "LinkedIn URL").
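The name-humanization step can be sketched as below — a plausible reconstruction that reproduces the two examples above, with the special-case word list being a guess:

```python
# Assumed special-case map; the real CLI's list may differ.
SPECIAL = {"url": "URL", "api": "API", "id": "ID", "llm": "LLM", "linkedin": "LinkedIn"}

def humanize(name: str) -> str:
    """Turn a snake_case variable name into a readable prompt label."""
    words = [SPECIAL.get(w, w) for w in name.split("_")]
    # Sentence-case the first word unless it is a special-cased term.
    if words and words[0] not in SPECIAL.values():
        words[0] = words[0].capitalize()
    return " ".join(words)

print(humanize("image_prompt"))  # → Image prompt
print(humanize("linkedin_url"))  # → LinkedIn URL
```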

Configuration

agentai config get                        # dump everything
agentai config set base_url https://...   # point at staging
agentai config set default_status all     # search private + team by default
agentai config set theme ascii            # for older terminals
agentai config path                       # show where it lives

Environment variables override the file:

  • AGENTAI_API_KEY — use this key for the session.
  • AGENTAI_BASE_URL — talk to a different public-API host.
  • AGENTAI_INTERNAL_BASE_URL — talk to a different /api/v2/* host.
  • AGENTAI_THEME — modern (default), minimal, or ascii.
  • NO_COLOR — disable ANSI colours (also --no-color).
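The precedence (environment variable beats config file beats built-in default) can be sketched as follows — a sketch of the lookup order, not the CLI's actual code:

```python
import os

# Built-in defaults taken from the docs above.
DEFAULTS = {"base_url": "https://api-lr.agent.ai/v1", "theme": "modern"}

def resolve(key: str, file_config: dict[str, str]) -> str:
    """Env var > config file > default."""
    env_name = f"AGENTAI_{key.upper()}"
    if env_name in os.environ:
        return os.environ[env_name]
    if key in file_config:
        return file_config[key]
    return DEFAULTS[key]

# With no AGENTAI_THEME set, the file value wins over the default:
print(resolve("theme", {"theme": "ascii"}))  # → ascii (assuming AGENTAI_THEME is unset)
```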

Development

This package lives at packages/agentai-cli/ in the agent.ai monorepo. To hack on it:

cd packages/agentai-cli
python3 -m venv .venv      # Python 3.11+ required
source .venv/bin/activate
pip install -e ".[dev]"
pytest                     # 260+ tests
ruff check src tests

Architecture & distribution rationale: see docs/PLAN.md. What ships in each command vs. the web UI: see docs/FEATURES.md. Per-command details + exit codes + env vars + file locations: see docs/COMMANDS.md.

Releasing

The package supports Python 3.11+ and ships to PyPI as agentai-cli. Routine releases run end-to-end through GitHub Actions via OIDC trusted publishing — no API tokens stored in the repo.

One-command release (recommended)

cd packages/agentai-cli

# Rehearse — bumps in-memory, builds, smoke-tests, no commits.
scripts/release.sh patch --dry-run

# Real release: bump → build → tag → push. CI handles publish.
scripts/release.sh patch       # 0.1.0 → 0.1.1
scripts/release.sh minor       # 0.1.0 → 0.2.0
scripts/release.sh 1.0.0rc1    # explicit version

After scripts/release.sh pushes the agentai-cli-vX.Y.Z tag, the Release workflow:

  1. Builds the sdist + wheel and verifies the tag matches [project].version in pyproject.toml.
  2. Smoke-installs the wheel into a clean venv and runs agentai --version / --help.
  3. Publishes to PyPI through OIDC.
  4. Builds static binaries for macOS (arm64 + x86_64), Linux x86_64, and Windows x86_64 with PyInstaller, attaches them to the GitHub Release.
  5. Bumps the Homebrew tap formula via brew bump-formula-pr.

Manual / offline release

When CI isn't an option (offline release, fast hotfix from a laptop):

scripts/build.sh                       # lint + tests + build + smoke
scripts/publish.sh --test              # upload to test.pypi.org
scripts/publish.sh                     # upload to pypi.org (prompts)

# Or in one shot:
scripts/release.sh patch --publish=test
scripts/release.sh patch --publish=prod

Manual uploads need a PyPI API token in ~/.pypirc or the TWINE_USERNAME=__token__ + TWINE_PASSWORD=pypi-... env vars.

Workflow dispatch (Test PyPI dry-run)

Want to verify the artifacts on Test PyPI without tagging? Trigger the release workflow manually:

  1. Repo → Actions → Release → Run workflow.
  2. Leave test_pypi: true.
  3. The workflow builds + uploads to https://test.pypi.org/.

Version bumps without a release

scripts/bump_version.py --print     # show current
scripts/bump_version.py patch       # bump pyproject.toml only
scripts/bump_version.py 0.5.0       # set explicit

This is orthogonal to the release flow — handy for opening a draft PR where you want the bump in the diff but aren't ready to ship.
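The bump semantics shared by release.sh and bump_version.py can be sketched as — a reconstruction, not the actual script, and the major branch is an assumption beyond the documented patch/minor/explicit forms:

```python
import re

def bump(current: str, spec: str) -> str:
    """'patch'/'minor'/'major' bump that part; anything else is an explicit version."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)", current)
    major, minor, patch = map(int, m.groups())
    if spec == "patch":
        return f"{major}.{minor}.{patch + 1}"
    if spec == "minor":
        return f"{major}.{minor + 1}.0"
    if spec == "major":
        return f"{major + 1}.0.0"
    return spec  # explicit version such as '1.0.0rc1'

print(bump("0.1.0", "patch"))     # → 0.1.1
print(bump("0.1.0", "minor"))     # → 0.2.0
print(bump("0.1.0", "1.0.0rc1"))  # → 1.0.0rc1
```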

Setting up trusted publishing (one-time)

For maintainers wiring this up the first time:

  1. Create the project on PyPI and Test PyPI.
  2. Under each project: Settings → Publishing → Add a new publisher.
  3. Owner: agent-ai · repo: agentai-cli · workflow: release.yml · environment: pypi (production) and testpypi (Test PyPI).
  4. Add matching environments in the GitHub repo settings — no secrets needed; OIDC handles auth.

License

MIT. See LICENSE.
