claudebridge


🌉 Bridge OpenAI tools to Claude Code SDK — use your subscription anywhere 🔌

⚠️ Legal Notice: This tool bridges OpenAI-compatible clients to Claude using the Claude Code SDK. The permissibility of this usage under Anthropic's Terms of Service is unclear. Use at your own peril. Please review the Legal Disclaimer section before use.

Why claudebridge?

You have a Claude subscription. You have tools that speak OpenAI's API. claudebridge connects them — no API keys, no extra costs.

|                               | claudebridge | LiteLLM | Direct API |
|-------------------------------|--------------|---------|------------|
| Uses your Claude subscription | Yes          | No      | No         |
| No API key needed             | Yes          | No      | No         |
| One command to start          | Yes          | ~Yes    | No         |
| OpenAI-compatible             | Yes          | Yes     | No         |

Features

  • Lightweight — Minimal dependencies, easy to understand
  • OpenAI-compatible — Drop-in replacement for /api/v1/chat/completions
  • Uses your subscription — No API keys needed, uses Claude Code OAuth
  • Streaming support — Real-time SSE responses matching OpenAI format
  • Connection pooling — Pre-spawned clients for reduced latency
  • Session logging — Full request/response logging for debugging
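Streaming responses use OpenAI's SSE framing: each event is a `data:` line carrying a JSON chunk, and the stream ends with `data: [DONE]`. A minimal sketch of parsing one such line (the payload below is illustrative, not captured from claudebridge):

```python
import json

def parse_sse_chunk(line: str):
    """Extract the text delta from one SSE `data:` line; return None otherwise."""
    if not line.startswith("data: "):
        return None  # comments, keep-alives, and blank lines carry no delta
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return None  # OpenAI-style stream terminator
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")

# Illustrative chunk in the OpenAI streaming format
raw_line = 'data: {"choices": [{"delta": {"content": "Hel"}, "index": 0}]}'
print(parse_sse_chunk(raw_line))
```

Clients like the OpenAI SDK handle this framing for you; the sketch is only useful if you consume the stream with a raw HTTP client.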

Quick Start

# Install from PyPI
uv tool install py-claudebridge

# Or install via Homebrew (macOS)
brew install tsilva/tap/claudebridge

# Run the server
claudebridge

# Verify installed version
claudebridge --version

Alternative: Install from source
git clone https://github.com/tsilva/claudebridge
cd claudebridge
uv pip install -e .

Local Development

When reinstalling from source, use --no-cache to ensure you get the latest code:

uv tool install . --force --no-cache

The server starts at http://localhost:8082

Usage

With curl

# Non-streaming
curl -X POST http://localhost:8082/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "sonnet", "messages": [{"role": "user", "content": "Hello!"}]}'

# Streaming
curl -X POST http://localhost:8082/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "sonnet", "messages": [{"role": "user", "content": "Hello!"}], "stream": true}'

With OpenAI Python Client

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8082/api/v1",
    api_key="not-needed"
)

response = client.chat.completions.create(
    model="sonnet",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

Streaming

stream = client.chat.completions.create(
    model="sonnet",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")

CLI Client

The CLI client can be used for ad-hoc testing:

# Direct prompt
python -m claudebridge.client "What is Python?"

# Pipe from stdin
echo "Hello" | python -m claudebridge.client

# Use different model
python -m claudebridge.client --model opus "Explain decorators"

# Non-streaming mode
python -m claudebridge.client --no-stream "Quick answer"

# Multiple parallel requests
python -m claudebridge.client -n 3 "Hello"

BridgeClient Library

Use BridgeClient programmatically for testing or integration:

from claudebridge.client import BridgeClient

# Sync usage
with BridgeClient() as client:
    if client.health_check():
        models = client.list_models()
        response = client.complete_sync("Hello!", stream=False)
        print(response)

# Async usage
import asyncio

async def main():
    async with BridgeClient() as client:
        response = await client.complete("Hello!")
        print(response)

        # Or stream chunks
        async for chunk in client.stream("Tell me a story"):
            print(chunk, end="")

asyncio.run(main())

Testing

# Install test dependencies
uv pip install -e ".[test]"

# Run the test suite (requires server running)
uv run pytest tests/test_client.py -v

Compatible Tools
  • Cursor — Use Claude through Cursor's OpenAI-compatible backend
  • Continue.dev — VS Code extension with OpenAI endpoint support
  • Open WebUI — Self-hosted ChatGPT-like interface
  • LangChain / LlamaIndex — Via OpenAI provider
  • Any OpenAI SDK client — Python, TypeScript, Go, etc.

Available Models

| Model ID | Description                 |
|----------|-----------------------------|
| opus     | Claude Opus (most capable)  |
| sonnet   | Claude Sonnet (balanced)    |
| haiku    | Claude Haiku (fastest)      |

Also accepts: claude-opus, claude-sonnet, claude-haiku, claude-3-sonnet, claude-3.5-sonnet, etc.
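Since many aliases map to each base model, a resolver can be as simple as substring matching. A hypothetical sketch of such a mapping (claudebridge's actual routing logic is not shown here and may differ):

```python
# Base model names in priority order; anything unrecognized falls back to "sonnet"
# (the fallback choice is an assumption for this sketch).
CANONICAL = ("opus", "sonnet", "haiku")

def normalize_model(model_id: str) -> str:
    """Map aliases like 'claude-3.5-sonnet' to a base model name."""
    lowered = model_id.lower()
    for name in CANONICAL:
        if name in lowered:
            return name
    return "sonnet"

print(normalize_model("claude-3.5-sonnet"))
```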

API Endpoints

| Endpoint                  | Method | Description                      |
|---------------------------|--------|----------------------------------|
| /api/v1/chat/completions  | POST   | Chat completions (OpenAI format) |
| /api/v1/models            | GET    | List available models            |
| /health                   | GET    | Health check                     |

Configuration

The bridge uses your existing Claude Code authentication:

claude login

Environment Variables

| Variable       | Default | Description                                            |
|----------------|---------|--------------------------------------------------------|
| PORT           | 8082    | Server port                                            |
| POOL_SIZE      | 3       | Number of workers (also settable via --workers/-w flag) |
| CLAUDE_TIMEOUT | 120     | Request timeout in seconds                             |
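These variables can be set inline when launching the server; the values below are arbitrary examples, not recommended settings:

```shell
# Run on port 9000 with 5 pooled workers and a 5-minute request timeout
PORT=9000 POOL_SIZE=5 CLAUDE_TIMEOUT=300 claudebridge
```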

Client Environment Variables

| Variable           | Default               | Description         |
|--------------------|-----------------------|---------------------|
| BRIDGE_URL         | http://localhost:8082 | API base URL        |
| OPENROUTER_API_KEY | -                     | API key (if needed) |
| OPENROUTER_MODEL   | default               | Default model       |

Architecture

claudebridge/
├── server.py         # FastAPI app, endpoints, Claude SDK integration
├── pool.py           # Client pool for connection reuse
├── models.py         # Pydantic models for OpenAI request/response format
├── client.py         # BridgeClient library + CLI
└── session_logger.py # Request/response logging

Requirements

  • Python 3.10+
  • Active Claude Code subscription
  • Claude Code CLI authenticated (claude login)

Legal Disclaimer

⚠️ Use at Your Own Peril

This tool (claudebridge) creates a bridge between OpenAI-compatible clients and Claude using the Claude Code SDK. The permissibility of this usage under Anthropic's Terms of Service is not clearly defined and is subject to interpretation.

The Ambiguity

Anthropic's terms contain provisions that may affect this usage:

  • Commercial Terms: Prohibit building competing products, reverse engineering, or reselling services
  • Consumer Terms: Restrict automated/non-human access except via approved APIs
  • Both: Prohibit developing competing products or training competing AI models

This tool:

  • Uses the Claude Code SDK (not the official Anthropic API)
  • Enables programmatic access to Claude through your existing subscription
  • Could be interpreted as "automated access" or a "competing product"

Our Interpretation

We believe that lightweight, personal, local usage should fall within acceptable use, since:

  • It requires an active Claude subscription
  • It uses the official Claude Code SDK
  • It's for personal/local development purposes only
  • It doesn't compete with or replace Anthropic's services

However, this interpretation is disputable and may not align with Anthropic's view.

Guidelines for Conservative Use

Even if you conclude this usage is legitimate, act conservatively:

  • For yourself only — Do not share access, create multi-user services, or allow others to use your instance
  • Stay local — Run only on your personal machine, not on servers or cloud infrastructure
  • Minimal usage — Use sparingly and only for genuine personal development needs
  • No automation — Do not build automated pipelines, bots, or services on top of this tool
  • No redistribution — Do not package or distribute this as part of other products
  • Stay within boundaries — Respect rate limits and avoid any behavior that could be seen as abuse

Your Responsibility

By using this tool, you acknowledge that:

  1. You have read and understand Anthropic's Terms of Service
  2. You accept that this usage may violate those terms
  3. You use this tool entirely at your own peril
  4. You are solely responsible for any consequences of use
  5. We assume no liability for your use of this tool

We strongly encourage you to:

  • Review Anthropic's terms yourself
  • Make your own determination about permissibility
  • Err on the side of caution
  • Contact Anthropic if you need clarification
  • Discontinue use if you have any concerns

This project is provided as-is for educational purposes only.

License

MIT
