Python SDK for TokenRouter - Intelligent LLM Routing API

TokenRouter Python SDK

Official Python SDK for TokenRouter — an intelligent LLM router that provides OpenAI‑compatible endpoints and a native routing endpoint.

This README focuses on the routing interfaces you’ll use today:

  • client.create(...) → Native routing endpoint (/route)
  • client.chat.completions.create(...) → OpenAI chat completions (/v1/chat/completions)
  • client.completions.create(...) → OpenAI legacy text completions (/v1/completions)

All calls are BYOK (bring your own keys): provide your TokenRouter API key, and configure your provider keys in TokenRouter.

Installation

pip install tokenrouter

Quick Start (Native Route)

from tokenrouter import TokenRouter

client = TokenRouter(
    api_key="tr_...",
    base_url="http://localhost:8000"  # or https://api.tokenrouter.io
)

response = client.create(
  model="auto",
  mode="balanced",
  model_preferences=["gpt-4o", "gpt-4o-mini"],
  messages=[
    {"role": "developer", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  # Optional (native route only): select key behavior
  # inline|stored|mixed|auto (default)
  key_mode="auto",
)

print(response.choices[0].message.content)
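If you prefer environment-driven configuration (see Environment below), the constructor arguments can be assembled from the environment. A minimal sketch — `make_client_kwargs` is a hypothetical helper, and the variable names match the Environment section:

```python
import os

def make_client_kwargs():
    """Build TokenRouter(...) keyword arguments from the environment,
    defaulting to the hosted API when TOKENROUTER_BASE_URL is unset."""
    return {
        "api_key": os.environ["TOKENROUTER_API_KEY"],
        "base_url": os.environ.get("TOKENROUTER_BASE_URL", "https://api.tokenrouter.io"),
    }

# client = TokenRouter(**make_client_kwargs())
```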

Endpoints

Native Route (/route)

The native route accepts an OpenAI‑like request shape and returns an OpenAI‑like response enriched with TokenRouter metadata: `cost_usd`, `latency_ms`, `routed_model`, `routed_provider`, `service_tier`, etc.

Non‑streaming

response = client.create(
  model="auto",
  mode="balanced",
  model_preferences=["gpt-4o", "gpt-4o-mini"],
  messages=[
    {"role": "developer", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
)
print(response.choices[0].message.content)
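The routing metadata can then be read off the response. A minimal sketch, assuming the fields are exposed as attributes alongside the OpenAI-style ones — `routing_summary` is a hypothetical helper, and `getattr` with a default keeps it safe if a field is absent:

```python
from types import SimpleNamespace

def routing_summary(response):
    """Collect TokenRouter routing metadata from a response object.

    getattr with a None default means the helper degrades gracefully
    when a field is absent on a given response.
    """
    fields = ("routed_model", "routed_provider", "cost_usd", "latency_ms", "service_tier")
    return {name: getattr(response, name, None) for name in fields}

# Stand-in response for illustration; a real one comes from client.create(...).
fake = SimpleNamespace(routed_model="gpt-4o-mini", routed_provider="openai",
                       cost_usd=0.00021, latency_ms=412)
print(routing_summary(fake))
```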

Streaming

for chunk in client.create(
  model="auto",
  stream=True,
  messages=[
    {"role": "developer", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Stream a short greeting."}
  ],
):
  delta = (chunk.choices[0].get("delta", {}) if chunk.choices else {})
  if delta.get("content"):
    print(delta["content"], end="")
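To recover the full text once the stream ends, you can fold the deltas as they arrive. A small sketch using the same chunk shape as the loop above; the `fake` chunks are stand-ins for real streamed chunks:

```python
from types import SimpleNamespace

def collect_stream(chunks):
    """Concatenate the content deltas of a streamed response into one string."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].get("delta", {}) if chunk.choices else {}
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)

# Stand-in chunks for illustration; real ones come from client.create(..., stream=True).
fake = [SimpleNamespace(choices=[{"delta": {"content": t}}]) for t in ("Hel", "lo", "!")]
print(collect_stream(fake))  # → Hello!
```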

Chat Completions (/v1/chat/completions)

OpenAI‑compatible chat completions.

Non‑streaming

response = client.chat.completions.create(
  model="auto",
  mode="balanced",
  model_preferences=["gpt-4o", "gpt-4o-mini"],
  messages=[
    {"role": "developer", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
)
print(response.choices[0].message.content)

Streaming

for chunk in client.chat.completions.create(
  model="auto",
  stream=True,
  messages=[
    {"role": "developer", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
):
  delta = (chunk.choices[0].get("delta", {}) if chunk.choices else {})
  if delta.get("content"):
    print(delta["content"], end="")

Legacy Completions (/v1/completions)

OpenAI legacy text completion format. The SDK returns the raw OpenAI‑style dict.

Non‑streaming

resp = client.completions.create(
  model="auto",
  prompt="Say this is a test",
  mode="balanced",
)
print(resp["choices"][0]["text"])  # text completion shape

Streaming

for chunk in client.completions.create(
  model="auto",
  prompt="Stream this as text",
  stream=True,
):
  if chunk.get("choices"):
    print(chunk["choices"][0].get("text", ""), end="")

Errors

from tokenrouter import AuthenticationError, RateLimitError, InvalidRequestError, APIConnectionError

try:
  response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello"}],
    model="auto"
  )
  print(response.choices[0].message.content)
except RateLimitError as e:
  print(f"Rate limited, retry after: {e.retry_after}s")
except AuthenticationError:
  print("Invalid API key")
except InvalidRequestError as e:
  print(f"Invalid request: {e}")
except APIConnectionError as e:
  print(f"Connection error: {e}")
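Since `RateLimitError` carries `retry_after` (as used above), a simple retry wrapper is easy to sketch. `with_retries` is a hypothetical helper, written generically over the exception type so it runs standalone:

```python
import time

def with_retries(call, rate_limit_exc, max_attempts=3, default_wait=1.0):
    """Invoke call(); on a rate-limit error, sleep for the server-suggested
    retry_after (falling back to default_wait) and try again."""
    for attempt in range(max_attempts):
        try:
            return call()
        except rate_limit_exc as e:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(getattr(e, "retry_after", None) or default_wait)

# Usage with the SDK might look like:
# with_retries(lambda: client.chat.completions.create(
#     model="auto", messages=[{"role": "user", "content": "Hello"}]), RateLimitError)
```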

Environment

export TOKENROUTER_API_KEY=tr_your-api-key
# Optional
export TOKENROUTER_BASE_URL=https://api.tokenrouter.io

# Optional provider keys (auto-detected for inline encryption on native /route only)
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GEMINI_API_KEY=...
export MISTRAL_API_KEY=...
export DEEPSEEK_API_KEY=...
export META_API_KEY=...

When `key_mode` is `inline`, `mixed`, or `auto` (native `/route` only), the SDK:

  • Auto-loads provider keys from your environment or local `.env` (dev/CI) using the names above
  • Encrypts keys client-side with the API's published public key (fetched from `/.well-known/tr-public-key`)
  • Sends the encrypted bundle in the `X-TR-Provider-Keys` header (never in the JSON body)
  • Never persists or logs provider secrets

Note: `key_mode` is not available on the OpenAI-compatible endpoints (`/v1/chat/completions` and `/v1/completions`).

Using OpenAI SDK against TokenRouter

from openai import OpenAI
client = OpenAI(api_key="tr_...", base_url="https://api.tokenrouter.io/v1")  # your TokenRouter API key
response = client.chat.completions.create(
  model="auto",
  messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
