Python SDK for TokenRouter - Intelligent LLM Routing API

TokenRouter Python SDK

An OpenAI Responses API-compatible client for TokenRouter, an intelligent LLM routing service.

Installation

pip install tokenrouter

Quick Start

import os
from tokenrouter import TokenRouter

client = TokenRouter(
    api_key=os.environ.get('TOKENROUTER_API_KEY'),  # Read from the environment by default; can be omitted
    base_url=os.environ.get('TOKENROUTER_BASE_URL'),  # Default: https://api.tokenrouter.io/api
)

response = client.responses.create(
    model="gpt-4.1",
    input="Tell me a three sentence bedtime story about a unicorn."
)

print(response.output_text)

OpenAI Compatibility

This SDK is designed to be a drop-in replacement for OpenAI's SDK when using the Responses API. Simply change your import and API key:

# Before (OpenAI)
from openai import OpenAI
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# After (TokenRouter)
from tokenrouter import TokenRouter
client = TokenRouter(api_key=os.environ.get("TOKENROUTER_API_KEY"))

Configuration

Environment Variables

  • TOKENROUTER_API_KEY - Your TokenRouter API key
  • TOKENROUTER_BASE_URL - API base URL (default: https://api.tokenrouter.io/api)
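The client picks these up automatically at construction time. For local development they can be set in the shell before running your script (a minimal sketch — substitute your own key):

```shell
# Credentials for the current shell session
export TOKENROUTER_API_KEY="tr_..."
# Optional: only needed when targeting a non-default deployment
export TOKENROUTER_BASE_URL="https://api.tokenrouter.io/api"
```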

Client Options

from tokenrouter import TokenRouter

client = TokenRouter(
    api_key='tr_...',  # Your API key
    base_url='https://api.tokenrouter.io/api',  # API base URL
    timeout=60.0,  # Request timeout in seconds (default: 60)
    max_retries=3,  # Max retry attempts (default: 3)
    headers={  # Additional headers
        'X-Custom-Header': 'value'
    }
)

API Reference

Create Response

response = client.responses.create(
    # Required
    input="Your prompt here",  # or list of input items

    # Optional
    model="gpt-4.1",  # Model to use
    instructions="System instructions",
    max_output_tokens=1000,
    temperature=0.7,
    top_p=0.9,
    stream=False,  # Set to True for streaming
    tools=[],  # Function calling tools
    tool_choice="auto",
    text={"format": {"type": "text"}},  # Response format
    # ... other OpenAI-compatible parameters
)

# Access the response text directly
print(response.output_text)

Streaming Responses

stream = client.responses.create(
    input="Write a poem",
    stream=True
)

for event in stream:
    if event.type == "response.delta" and event.delta and event.delta.output:
        for item in event.delta.output:
            if item.get("content"):
                for content in item["content"]:
                    if content.get("text"):
                        print(content["text"], end="", flush=True)

Function Calling

response = client.responses.create(
    input="What's the weather in San Francisco?",
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string"}
                    },
                    "required": ["location"]
                }
            }
        }
    ]
)

# Check for function calls in the response
for item in response.output:
    if item.type == "tool_call" and item.tool_calls:
        for tool_call in item.tool_calls:
            if tool_call.function:
                print(f"Function: {tool_call.function.get('name')}")
                print(f"Arguments: {tool_call.function.get('arguments')}")
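Once a function call has been extracted, its arguments arrive as a JSON string and must be parsed before invoking your local implementation. A minimal, SDK-independent sketch of that dispatch step (the registry and the get_weather implementation below are hypothetical placeholders, not part of the SDK):

```python
import json

def dispatch_tool_call(name, arguments_json, registry):
    """Parse the JSON-encoded arguments and invoke the matching local function."""
    args = json.loads(arguments_json)
    return registry[name](**args)

# Hypothetical local implementation registered under the tool's name
registry = {
    "get_weather": lambda location: f"Sunny in {location}",
}

result = dispatch_tool_call(
    "get_weather", '{"location": "San Francisco"}', registry
)
print(result)  # Sunny in San Francisco
```

The result would then be sent back to the model as a follow-up input so it can produce a final answer.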

Multi-turn Conversations

# First message
response1 = client.responses.create(
    input="My name is Alice",
    store=True  # Store for later retrieval
)

# Continue conversation
response2 = client.responses.create(
    input="What's my name?",
    previous_response_id=response1.id
)

Other Methods

# Get response by ID
response = client.responses.get("resp_123")

# Delete response
result = client.responses.delete("resp_123")

# Cancel background response
response = client.responses.cancel("resp_123")

# List input items
items = client.responses.list_input_items("resp_123")

Async Support

The SDK provides a fully async client for asynchronous applications:

import asyncio
from tokenrouter import AsyncTokenRouter

async def main():
    async with AsyncTokenRouter(api_key="tr_...") as client:
        response = await client.responses.create(
            input="Hello, world!"
        )
        print(response.output_text)

asyncio.run(main())

Async Streaming

async with AsyncTokenRouter(api_key="tr_...") as client:
    stream = await client.responses.create(
        input="Count to 5",
        stream=True
    )

    async for event in stream:
        if event.type == "response.delta" and event.delta:
            # Process streaming chunks
            pass

Response Format

The SDK adds a convenience property output_text to responses that aggregates all text output:

response = client.responses.create(input="Hello")

# Access aggregated text directly
print(response.output_text)

# Or access the full response structure
print(response.output)  # List of output items
print(response.usage)  # Token usage
print(response.model)  # Model used

Error Handling

from tokenrouter import (
    TokenRouterError,
    AuthenticationError,
    RateLimitError,
    InvalidRequestError
)

try:
    response = client.responses.create(input="Hello")
except AuthenticationError:
    print("Invalid API key")
except RateLimitError as e:
    print(f"Rate limit exceeded, retry after: {e.retry_after}")
except InvalidRequestError as e:
    print(f"Invalid request: {e.message}")
except TokenRouterError as e:
    print(f"Unexpected error: {e}")
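RateLimitError exposes retry_after, which can drive a simple backoff loop. A generic sketch of that pattern (the helper is illustrative, not part of the SDK, and a fake exception stands in for RateLimitError so the snippet runs standalone):

```python
import time

def with_retries(fn, retryable_exc, max_attempts=3):
    """Call fn, sleeping between attempts when a retryable error is raised."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable_exc as exc:
            if attempt == max_attempts - 1:
                raise  # out of attempts; propagate the error
            # Prefer the server-suggested delay, else exponential backoff
            time.sleep(getattr(exc, "retry_after", None) or 2 ** attempt)

# Stand-in for tokenrouter.RateLimitError so the sketch is self-contained
class FakeRateLimit(Exception):
    retry_after = 0.01

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeRateLimit()
    return "ok"

result = with_retries(flaky, FakeRateLimit)
print(result)  # ok
```

In real use you would pass `RateLimitError` from the tokenrouter package as the retryable exception and wrap the `client.responses.create(...)` call.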

Type Hints

The SDK provides comprehensive type hints for all models:

from tokenrouter import TokenRouter, Response, ResponseStreamEvent
from typing import Iterator

def process_response(response: Response) -> str:
    return response.output_text or ""

def handle_stream(stream: Iterator[ResponseStreamEvent]) -> None:
    for event in stream:
        # Process events with full type support
        pass

Examples

See the examples directory in the project repository for more detailed usage.

Requirements

  • Python 3.7+
  • httpx>=0.24.0
  • typing-extensions>=4.0.0

License

MIT
