
Kitty Bridge — launch coding agents through a local API bridge

Project description

kitty-bridge

Use your favorite coding agent with any LLM provider.

Claude Code with MiniMax. Codex with GLM. Gemini CLI with OpenRouter. One command.

pip install kitty-bridge

Quick Start

1. Install

pip install kitty-bridge

Requires Python 3.10+.

2. Set up a profile

kitty setup

An interactive wizard walks you through choosing a provider and model and entering your API key. Takes about 30 seconds.

3. Launch your agent

kitty claude      # Claude Code → your provider
kitty codex       # Codex CLI → your provider
kitty gemini      # Gemini CLI → your provider
kitty kilo        # Kilo Code → your provider

That's it. Your coding agent now talks to the LLM you chose — not the one it was built for.

Example: Use a different provider with Claude Code

$ pip install kitty-bridge
$ kitty setup
  ? Provider: openai
  ? Model: openai/gpt-5.4-pro
  ? API key: ********

$ kitty claude
   Bridge running on port <random_port>
   Claude Code launched
  > Hello! How can I help you today?

Balanced Profiles

A balanced profile combines multiple providers into one. Each request is sent to a randomly chosen healthy provider. If one provider goes down, the others pick up the slack automatically.

Why use it:

  • Cost savings — spread requests across cheaper providers
  • Rate limit resilience — never hit a single provider's limit
  • Fault tolerance — if one provider is down, the others keep working

How to create one:

kitty profile
# → "Create balancing profile" → select 2+ member profiles

Example: Combine MiniMax, Novita, and Z.AI into one balanced profile called my-pool, then use it with any agent:

kitty my-pool claude
kitty my-pool codex

When you run this, each request goes to a random healthy member. If MiniMax returns an error, kitty silently retries on Novita or Z.AI — your agent never sees the failure.
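The selection-with-failover behavior described above can be sketched in a few lines of Python. This is an illustration of the idea, not kitty's actual implementation; the `send` callback and provider names are hypothetical stand-ins:

```python
import random

def call_with_failover(providers, send, max_attempts=3):
    """Pick a random member; on error, silently retry on another member.

    `send(name)` is a hypothetical stand-in for forwarding a request to
    the named provider; it raises RuntimeError on a provider error.
    """
    healthy = list(providers)
    last_error = None
    for _ in range(min(max_attempts, len(healthy))):
        choice = random.choice(healthy)
        try:
            return send(choice)
        except RuntimeError as err:
            healthy.remove(choice)  # treat as unhealthy, try the others
            last_error = err
    raise last_error

# Simulate MiniMax being down: requests fail over to the other members.
def send(name):
    if name == "minimax":
        raise RuntimeError("minimax: 503")
    return f"response from {name}"

result = call_with_failover(["minimax", "novita", "zai_coding"], send)
```

The agent only ever sees a successful response, which is the "your agent never sees the failure" property described above.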

Bridge Mode

Bridge mode starts a standalone OpenAI-compatible API server on your machine. Use it when you want to connect tools that speak the OpenAI API — IDEs, custom scripts, anything that accepts a base URL.

kitty bridge          # use default profile
kitty my-profile bridge   # use a specific profile

Point your tool at http://localhost:<port> and it just works.

Available endpoints:

Endpoint                          Protocol            Used by
POST /v1/chat/completions         Chat Completions    General purpose
POST /v1/messages                 Anthropic Messages  Claude Code
POST /v1/responses                OpenAI Responses    Codex
POST /v1/gemini/generateContent   Gemini              Gemini CLI
GET  /healthz                     Health check        Monitoring
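For a tool without built-in OpenAI support, the chat endpoint takes a standard Chat Completions request body. A minimal sketch of constructing one (the model string is a placeholder, assuming the bridge routes based on the active profile rather than this field):

```python
import json

# Minimal Chat Completions request body for POST /v1/chat/completions.
payload = {
    "model": "my-profile",  # placeholder; the profile decides the real model
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,
}

body = json.dumps(payload)
```

Any HTTP client can POST this body, with `Content-Type: application/json`, to `http://localhost:<port>/v1/chat/completions`.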

Supported Agents

Agent        Command        What it is
Claude Code  kitty claude   Anthropic's coding agent
Codex CLI    kitty codex    OpenAI's coding agent
Gemini CLI   kitty gemini   Google's coding agent
Kilo Code    kitty kilo     Open-source coding agent

Supported Providers

Provider           Type ID       Notes
OpenAI             openai
Anthropic          anthropic     Direct Anthropic Messages API
OpenRouter         openrouter    Multi-provider router
MiniMax            minimax
Novita             novita
Z.AI (regular)     zai_regular   General-purpose endpoint
Z.AI (coding)      zai_coding    Coding-optimized endpoint
Fireworks          fireworks
Ollama             ollama        Local LLM deployment
OpenCode           opencode_go
AWS Bedrock        bedrock       Uses boto3 SigV4 auth
Azure OpenAI       azure         Requires deployment name
Google Vertex AI   vertex        Requires project and location

Commands

Command                                Description
kitty setup                            Create your first profile (interactive wizard)
kitty profile                          Manage profiles (create, delete, set default, list)
kitty doctor                           Diagnose installation issues
kitty cleanup                          Restore agent config files after a crash
kitty bridge                           Start a standalone API server
kitty claude                           Launch Claude Code with default profile
kitty codex                            Launch Codex with default profile
kitty gemini                           Launch Gemini CLI with default profile
kitty kilo                             Launch Kilo Code with default profile
kitty <profile> <agent>                Launch an agent with a specific profile
kitty <profile> bridge                 Start bridge with a specific profile
kitty --no-validate <profile> <agent>  Skip API key validation
kitty --version                        Print version
kitty --help                           Print help

Technical Details

How it works

kitty sits between your coding agent and the upstream LLM provider:

Agent (Claude Code / Codex / Gemini / Kilo) → kitty bridge → upstream provider

When you run kitty claude:

  1. kitty reads your profile (provider, model, API key)
  2. Starts a local HTTP bridge on a random port
  3. Configures the agent to send requests to the bridge instead of its default endpoint
  4. The bridge translates each request to the provider's format and forwards it
  5. Responses are translated back to the agent's native format
  6. When the agent exits, kitty restores the agent's config files
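Step 4 is the heart of the bridge. A heavily simplified sketch of one direction of that translation, Anthropic Messages to OpenAI Chat Completions (kitty's real translator also handles tool calls, streaming, and structured content blocks; the function name is illustrative):

```python
def messages_to_chat_completions(req):
    """Translate an Anthropic Messages request body into an OpenAI
    Chat Completions request body. Simplified sketch only."""
    messages = []
    # Anthropic puts the system prompt in a top-level field; Chat
    # Completions models it as the first message.
    if "system" in req:
        messages.append({"role": "system", "content": req["system"]})
    messages.extend(
        {"role": m["role"], "content": m["content"]} for m in req["messages"]
    )
    return {
        "model": req["model"],
        "messages": messages,
        "max_tokens": req.get("max_tokens", 1024),
    }

anthropic_req = {
    "model": "some-model",  # placeholder model name
    "system": "Be concise.",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hi"}],
}
chat_req = messages_to_chat_completions(anthropic_req)
```

Step 5 is the mirror image: the provider's Chat Completions response is rewrapped into an Anthropic Messages response before it reaches the agent.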

Profiles

A profile binds a provider, model, and API key together. Stored in ~/.config/kitty/profiles.json.

kitty setup        # create a profile interactively
kitty profile      # manage existing profiles
kitty my-profile claude  # use a specific profile

Profile names must be 1-32 characters, lowercase letters, numbers, dashes, or underscores. Reserved words like setup, claude, codex, gemini, kilo, profile, bridge cannot be used as profile names.
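Those naming rules are easy to express as a regex plus a reserved-word check. A sketch (the function is illustrative, not part of kitty's public API):

```python
import re

RESERVED = {"setup", "claude", "codex", "gemini", "kilo", "profile", "bridge"}
NAME_RE = re.compile(r"[a-z0-9_-]{1,32}")

def is_valid_profile_name(name):
    # 1-32 chars of lowercase letters, digits, dashes, or underscores,
    # and not one of the reserved command words.
    return bool(NAME_RE.fullmatch(name)) and name not in RESERVED
```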

Pre-flight validation

Before launching, kitty validates your API key with a lightweight test request. If the key is invalid, you get a clear error immediately — not a cryptic failure inside the agent.

kitty --no-validate my-profile claude  # skip validation (e.g. air-gapped/offline environments)
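Conceptually, the check boils down to classifying the HTTP status of one small test request. A hypothetical sketch of that classification (not kitty's actual code or messages):

```python
def interpret_validation(status):
    """Classify the HTTP status of a pre-flight test request.
    Illustrative helper only; assumes conventional status semantics."""
    if status in (401, 403):
        return "invalid or revoked API key: run `kitty setup` again"
    if status == 429:
        return "key accepted but currently rate-limited"
    if 200 <= status < 300:
        return "ok"
    return f"provider returned HTTP {status}"
```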

Cleanup

kitty restores agent config files after the agent exits. Three layers of cleanup:

  1. Normal exit → finally block
  2. Crash / SIGTERM → atexit handler
  3. SIGKILL / kernel OOM → run kitty cleanup manually
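The first two layers are standard Python patterns: a finally block and an atexit handler that both call the same idempotent restore function. A sketch (names are illustrative, not kitty's internals); layer 3 exists precisely because SIGKILL gives the process no chance to run either of these:

```python
import atexit

restored = []

def restore_configs():
    # Idempotent: safe to run from both the finally block (layer 1)
    # and the atexit handler (layer 2) without double-restoring.
    if not restored:
        restored.append(True)
        # ...put the agent's original config files back here...

atexit.register(restore_configs)   # layer 2: runs on interpreter exit
try:
    pass  # launch the agent and wait for it to finish
finally:
    restore_configs()              # layer 1: normal exit path
```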

Project structure

src/kitty/
├── bridge/          # HTTP bridge + protocol translation
├── cli/             # Command-line interface
├── credentials/     # API key storage
├── launchers/       # Agent-specific adapters
├── profiles/        # Profile management
├── providers/       # Upstream provider adapters
├── tui/             # Terminal UI components
└── types.py         # Shared types

FAQ

"API Error: Unable to connect to API (ConnectionRefused)"

The agent is trying to connect to a bridge that isn't running. Usually caused by a stale config from a previous crashed session:

kitty cleanup

"API Error: 401" or "token expired or incorrect"

Your API key has expired or been revoked. Run setup again:

kitty setup

"Prompt exceeds max length" (Z.AI error 1261)

The conversation has grown beyond the model's context window. Use /clear in the agent to reset.

Can I use kitty with Cursor, Windsurf, or other IDEs?

Yes — start a bridge, then point your IDE's "OpenAI base URL" setting at http://localhost:<port>:

kitty bridge
# Then configure your IDE to use http://localhost:<port>/v1/chat/completions

Can I run a local model?

Yes. Install Ollama, pull a model, then create a profile with provider ollama:

kitty setup
# Provider: ollama
# Base URL: http://localhost:11434/v1
# Model: llama3

Development

pip install -e ".[dev]"
pytest
ruff check .
mypy src/kitty

License

MIT

Download files

Download the file for your platform.

Source Distribution

kitty_bridge-0.2.0.tar.gz (200.4 kB view details)


Built Distribution


kitty_bridge-0.2.0-py3-none-any.whl (125.0 kB view details)


File details

Details for the file kitty_bridge-0.2.0.tar.gz.

File metadata

  • Download URL: kitty_bridge-0.2.0.tar.gz
  • Upload date:
  • Size: 200.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for kitty_bridge-0.2.0.tar.gz
Algorithm Hash digest
SHA256 9af5a772e21ed26c754ece63c0773a9049ba7f5cef5bc4aa9b6c15a2a4243eb8
MD5 59f81e13bdd78b9fd9836e51c36af6cd
BLAKE2b-256 053fc8a6cd2dc8cdb9a99efe9a0207e1ba47fce3e41ae0748e5e4ffad9d60a3b


Provenance

The following attestation bundles were made for kitty_bridge-0.2.0.tar.gz:

Publisher: publish.yml on Shelpuk-AI-Technology-Consulting/kitty-bridge

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file kitty_bridge-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: kitty_bridge-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 125.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for kitty_bridge-0.2.0-py3-none-any.whl
Algorithm Hash digest
SHA256 1fbe417cc64c47529e7091d848b2b90fcc67be16298415da1438f17047e78559
MD5 468f0c23842e165ebee005ca1662e22b
BLAKE2b-256 caaa61204a7e5f9ac2c210686b45f48e2cdcd7c6cd33fd6724e1b51141674876


Provenance

The following attestation bundles were made for kitty_bridge-0.2.0-py3-none-any.whl:

Publisher: publish.yml on Shelpuk-AI-Technology-Consulting/kitty-bridge

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
