
aistatus

Status-aware LLM routing for more reliable agents and coding CLIs.

aistatus is a Python SDK that checks provider and model availability through aistatus.cc, picks a healthy route, and then calls your installed provider SDK directly. Prompts and API keys stay in your own process. aistatus only helps with status checks, routing, and fallback selection.

This package is useful when you are building:

  • multi-step agents that can fail if one model call breaks mid-run
  • coding CLIs that need stable model access during edit, retry, and repair loops
  • internal tools that want graceful failover across multiple providers

Why This Package Exists

Agent workflows are brittle when they assume one provider is always healthy. That brittleness gets worse in long-running pipelines: a research agent, coding assistant, or automation bot might make 10 to 50 model calls in one task. If a single provider is degraded or temporarily unavailable, the whole run can fail.

aistatus adds a small routing layer in front of those calls:

  • do a pre-flight health check before dispatching a request
  • select a compatible fallback when the primary route is unavailable
  • keep one Python API even when you use multiple providers
  • return routing metadata so your app can observe fallback behavior

In practice, that means better stability for agent systems and coding CLI tools: the workflow can keep moving instead of failing hard on a single provider incident.

How It Works

  1. aistatus auto-discovers providers from environment variables, or you can register providers manually.
  2. Before sending a request, it queries aistatus.cc for provider or model status and compatible alternatives.
  3. If the primary route is healthy, it uses it.
  4. If the primary route is unavailable, or a provider call fails, aistatus can automatically try the next available provider.
  5. The actual LLM request is executed through the provider SDK installed in your environment, not proxied through aistatus.
  6. You get back a unified RouteResponse with the chosen model, provider, and fallback metadata.

If the status API is unreachable, the router falls back to model-prefix guessing and only uses adapters that are available locally.
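Steps 2 through 4 amount to a pre-flight check followed by an ordered fallback scan. The sketch below shows that logic in plain Python; the `is_healthy` stub and `pick_route` helper are illustrative only and stand in for the status query and route selection the SDK performs for you.

```python
# Conceptual sketch of pre-flight routing with fallback.
# `is_healthy` stands in for the status query to aistatus.cc.

def is_healthy(provider: str, health: dict[str, bool]) -> bool:
    """Pre-flight check: is this provider currently available?"""
    return health.get(provider, False)

def pick_route(candidates: list[str], health: dict[str, bool]) -> tuple[str, bool]:
    """Return (provider, was_fallback); raise if every route is down."""
    for i, provider in enumerate(candidates):
        if is_healthy(provider, health):
            return provider, i > 0  # any non-primary choice counts as a fallback
    raise RuntimeError(f"all providers down: {candidates}")

# Primary healthy: use it directly.
print(pick_route(["anthropic", "openai"], {"anthropic": True, "openai": True}))
# Primary degraded: fall back to the next healthy provider.
print(pick_route(["anthropic", "openai"], {"anthropic": False, "openai": True}))
```

The `was_fallback` flag in the real `RouteResponse` plays the same role as the second tuple element here: it tells you whether the call landed on the primary route or a backup.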

What You Get

  • Real-time pre-flight checks for providers and models
  • Automatic fallback across compatible providers
  • Tier-based routing for fast / standard / premium model groups
  • One sync and async API across multiple model vendors
  • Direct provider SDK calls with local API keys
  • Auto-discovery from standard environment variables
  • Manual registration for custom or self-hosted OpenAI-compatible endpoints
  • Unified response metadata for logging and reliability analysis

Supported Providers

Current built-in adapters cover:

  • Anthropic
  • OpenAI
  • Google Gemini
  • OpenRouter
  • DeepSeek
  • Mistral
  • xAI
  • Groq
  • Together
  • Moonshot
  • Qwen / DashScope

OpenAI-compatible providers reuse the openai Python client under the hood.

Install

Install the base package plus the provider SDKs you actually want to use:

pip install aistatus
pip install "aistatus[anthropic]"
pip install "aistatus[openai]"
pip install "aistatus[google]"
pip install "aistatus[all]"

(Quoting the extras avoids bracket globbing in shells such as zsh.)

Notes:

  • aistatus[openai] also covers OpenAI-compatible providers such as OpenRouter, DeepSeek, Mistral, xAI, Groq, Together, Moonshot, and Qwen.
  • The base package includes the router and status API client.
  • Provider extras install the vendor SDKs used for actual model calls.

Quickstart

Set at least one provider API key, then route by model name:

from aistatus import route

resp = route(
    "Summarize the latest deployment status.",
    model="claude-sonnet-4-6",
)

print(resp.content)
print(resp.model_used)
print(resp.provider_used)
print(resp.was_fallback)
print(resp.fallback_reason)

If the primary provider is unavailable, aistatus will try compatible providers that are both healthy and configured in your environment.

Why This Helps Agents And Coding CLIs

For simple scripts, a retry loop may be enough. For agents and coding tools, it usually is not.

  • An agent often chains planning, retrieval, synthesis, and repair into one run.
  • A coding CLI may need several model calls for diagnosis, patch generation, test-fix loops, and final explanation.
  • When those systems depend on one provider, a brief outage can break the whole interaction.

aistatus improves stability by checking route health before the call and falling back automatically when the preferred route is not available. That gives you a more resilient default for production agents, internal coding tools, and developer-facing CLIs.

Tier Routing

Tier routing is explicit and predictable: you define ordered model groups and let the router try them in sequence.

from aistatus import Router

router = Router(check_timeout=2.0)
router.add_tier("fast", [
    "claude-haiku-4-5",
    "gpt-4o-mini",
    "gemini-2.0-flash",
])
router.add_tier("standard", [
    "claude-sonnet-4-6",
    "gpt-4o",
    "gemini-2.5-pro",
])

resp = router.route(
    "Explain quantum computing in one sentence.",
    tier="fast",
)

This is a good fit when you want stable behavioral buckets such as fast, standard, or premium, without hard-coding one vendor per workflow step.

Agent Pipeline Example

aistatus is especially useful for multi-step agents. A simple pattern is:

from aistatus import route

plan = route(
    "How is embodied AI changing manufacturing?",
    model="claude-haiku-4-5",
    system="Break the topic into 3 research sub-questions. Be concise.",
)

answer = route(
    plan.content,
    model="claude-sonnet-4-6",
    prefer=["anthropic", "google"],
)

See examples/agent_pipeline.py for a full multi-step example that uses different model tiers for planning, research, and synthesis.

Manual Provider Registration

You can register custom providers directly when auto-discovery is not enough. This is useful for self-hosted gateways or OpenAI-compatible endpoints.

from aistatus import ProviderConfig, Router

router = Router(auto_discover=False)
router.register_provider(
    ProviderConfig(
        slug="local-vllm",
        adapter_type="openai",
        api_key="dummy",
        base_url="http://localhost:8000/v1",
    )
)

resp = router.route("Hello", model="gpt-4o-mini")

Async

aroute is the async counterpart of route and takes the same arguments:

import asyncio

from aistatus import aroute

async def main():
    resp = await aroute(
        [{"role": "user", "content": "Hello"}],
        model="gpt-4o-mini",
    )
    print(resp.content)

asyncio.run(main())

Status API

You can also query aistatus.cc directly without sending any model request:

from aistatus import StatusAPI

api = StatusAPI()

check = api.check_provider("anthropic")
print(check.status)
print(check.is_available)

for provider in api.providers():
    print(provider.name, provider.status.value)

for model in api.search_models("sonnet"):
    print(model.id, model.prompt_price, model.completion_price)

This is useful for dashboards, health checks, pre-deployment validation, or building your own routing policy on top of the status data.

Response Object

Every route() call returns a RouteResponse:

@dataclass
class RouteResponse:
    content: str
    model_used: str
    provider_used: str
    was_fallback: bool
    fallback_reason: str | None = None
    input_tokens: int = 0
    output_tokens: int = 0
    cost_usd: float = 0.0
    raw: Any = None

The routing metadata makes it easy to log fallback events and understand how stable your agent or CLI is in real traffic.
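For example, a small helper can turn that metadata into one log line per call. The dataclass below is copied from the definition above; the `route_log_line` helper and its output format are illustrative, not part of the SDK.

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Any

@dataclass
class RouteResponse:  # fields as documented above
    content: str
    model_used: str
    provider_used: str
    was_fallback: bool
    fallback_reason: str | None = None
    input_tokens: int = 0
    output_tokens: int = 0
    cost_usd: float = 0.0
    raw: Any = None

def route_log_line(resp: RouteResponse) -> str:
    """One structured line per call, flagging fallback events."""
    tag = f"FALLBACK({resp.fallback_reason})" if resp.was_fallback else "primary"
    return (f"{resp.provider_used}/{resp.model_used} {tag} "
            f"tokens={resp.input_tokens}+{resp.output_tokens} cost=${resp.cost_usd:.4f}")

resp = RouteResponse("ok", "gpt-4o-mini", "openai", True, "primary_unavailable",
                     12, 34, 0.0005)
print(route_log_line(resp))
# → openai/gpt-4o-mini FALLBACK(primary_unavailable) tokens=12+34 cost=$0.0005
```

Feeding these lines into your existing logging or metrics pipeline gives a direct count of fallback events per provider.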

Errors

from aistatus import AllProvidersDown, ProviderNotInstalled, route

try:
    resp = route("Hello", model="claude-sonnet-4-6")
except AllProvidersDown as e:
    print(e.tried)
except ProviderNotInstalled as e:
    print(f"Install support for: {e.provider}")

Common failure modes:

  • AllProvidersDown: no configured provider could successfully serve the call
  • ProviderNotInstalled: the required provider SDK extra is missing
  • ProviderCallFailed: the selected provider failed and fallback was disabled

Environment Variables

The router auto-discovers providers from standard environment variables:

ANTHROPIC_API_KEY=...
OPENAI_API_KEY=...
GEMINI_API_KEY=...
OPENROUTER_API_KEY=...
DEEPSEEK_API_KEY=...
MISTRAL_API_KEY=...
XAI_API_KEY=...
GROQ_API_KEY=...
TOGETHER_API_KEY=...
MOONSHOT_API_KEY=...
DASHSCOPE_API_KEY=...
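Auto-discovery amounts to scanning the process environment for these keys. A conceptual sketch, where the variable names mirror the list above but the provider slugs and the `discover_providers` helper are illustrative assumptions rather than the SDK's internals:

```python
import os

# Environment variables mapped to illustrative provider slugs.
ENV_TO_PROVIDER = {
    "ANTHROPIC_API_KEY": "anthropic",
    "OPENAI_API_KEY": "openai",
    "GEMINI_API_KEY": "google",
    "OPENROUTER_API_KEY": "openrouter",
    "DEEPSEEK_API_KEY": "deepseek",
    "MISTRAL_API_KEY": "mistral",
    "XAI_API_KEY": "xai",
    "GROQ_API_KEY": "groq",
    "TOGETHER_API_KEY": "together",
    "MOONSHOT_API_KEY": "moonshot",
    "DASHSCOPE_API_KEY": "qwen",
}

def discover_providers(env=os.environ) -> list[str]:
    """Return provider slugs whose API key is set and non-empty."""
    return [slug for var, slug in ENV_TO_PROVIDER.items() if env.get(var)]

print(discover_providers({"OPENAI_API_KEY": "sk-...", "GROQ_API_KEY": "gsk-..."}))
# → ['openai', 'groq']
```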

License

MIT. See LICENSE.


Download files

Download the file for your platform.

Source Distribution

aistatus-0.0.2.tar.gz (28.6 kB)


Built Distribution


aistatus-0.0.2-py3-none-any.whl (32.2 kB)


File details

Details for the file aistatus-0.0.2.tar.gz.

File metadata

  • Download URL: aistatus-0.0.2.tar.gz
  • Upload date:
  • Size: 28.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aistatus-0.0.2.tar.gz:

  • SHA256: 96e4c62b0555dcedd3e742f540ea89e642da3611137af0ef664a597feada7bd9
  • MD5: 3e5f434392626e62f1afa7ffca65e254
  • BLAKE2b-256: d26635e679d65ed308fdce21eec600922094c7f87cd689191dffb8067b002969


Provenance

The following attestation bundles were made for aistatus-0.0.2.tar.gz:

Publisher: pypi-publish.yml on fangxm233/aistatus-python

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file aistatus-0.0.2-py3-none-any.whl.

File metadata

  • Download URL: aistatus-0.0.2-py3-none-any.whl
  • Upload date:
  • Size: 32.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aistatus-0.0.2-py3-none-any.whl:

  • SHA256: 3577ada253366eac773d29dcce8599f25f2ee2adedd4d718138de45ebc8b419f
  • MD5: d1faa9759b5bc88d9941b81fb6c1ab55
  • BLAKE2b-256: c36be440f7b4cbe1e5adc97920c28674181ebe8cad2eee0a6286b8d55c2d38b2


Provenance

The following attestation bundles were made for aistatus-0.0.2-py3-none-any.whl:

Publisher: pypi-publish.yml on fangxm233/aistatus-python

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
