🤖 OpRouter - OpenRouter SDK Client for Python
A Python library for chatting with AI models through OpenRouter. Simple to use, reliable, and feature-rich.
What is OpRouter?
OpRouter makes it easy to build AI-powered applications by providing:
- Simple API: Chat with AI in just a few lines of code
- Reliable: Automatic retries when things go wrong
- Smart: Manages conversations and tracks usage
- Interactive: Beautiful command-line chat interface
Perfect for developers who want to integrate AI chat into their projects without dealing with API complexities.
✨ Key Features
- Easy Integration: Start chatting with AI in 3 lines of code
- Automatic Retries: Handles rate limits and network issues automatically
- Conversation Memory: Save and resume conversations
- Multiple Models: Works with any OpenRouter-supported AI model
- Token Tracking: Monitor usage and costs
- Streaming Responses: Get responses as they're generated
- Interactive CLI: Ready-to-use chat interface
🚀 Quick Start
Installation
Option 1: Install from PyPI (recommended)
pip install oprouter
Option 2: Install from source (for development)
git clone https://github.com/DedInc/oprouter.git
cd oprouter
pip install -e .
Get Your API Key
- Sign up at OpenRouter
- Get your API key from the dashboard
- Set it as an environment variable:
export OPENROUTER_API_KEY="your_api_key_here"
Start Chatting
Option 1: Interactive CLI (after installation)
# If installed with pip
oprouter
# Or using Python module
python -m oprouter
Option 2: Development mode
python main.py
Option 3: Use in Your Code
import asyncio
from oprouter import OpenRouterClient

async def chat():
    async with OpenRouterClient() as client:
        response = await client.chat_completion([
            {"role": "user", "content": "Hello!"}
        ])
        print(response.data['choices'][0]['message']['content'])

asyncio.run(chat())
📚 Usage Examples
1. Simple Chat (3 lines!)
import asyncio
from oprouter import OpenRouterClient

async def simple_chat():
    async with OpenRouterClient() as client:
        response = await client.chat_completion([
            {"role": "user", "content": "Explain Python in one sentence"}
        ])
        print(response.data['choices'][0]['message']['content'])

asyncio.run(simple_chat())
2. Conversation with Memory
import asyncio
from oprouter import OpenRouterClient
from oprouter.storage import Conversation, MessageRole

async def conversation_example():
    async with OpenRouterClient() as client:
        # Create a conversation that remembers context
        conversation = Conversation(title="My Chat")

        # Add messages and get responses
        messages = ["Hi, I'm learning Python", "What should I learn first?"]
        for user_msg in messages:
            conversation.add_message(MessageRole.USER, user_msg)

            # Get conversation context for API
            context = conversation.get_context_window()
            response = await client.chat_completion(context)

            if response.success:
                ai_msg = response.data['choices'][0]['message']['content']
                conversation.add_message(MessageRole.ASSISTANT, ai_msg)
                print(f"You: {user_msg}")
                print(f"AI: {ai_msg}\n")

        # Save conversation
        conversation.save()
        print(f"Conversation saved with ID: {conversation.id}")

asyncio.run(conversation_example())
3. Streaming Responses
import asyncio
from oprouter import OpenRouterClient

async def streaming_example():
    async with OpenRouterClient() as client:
        messages = [{"role": "user", "content": "Write a short poem"}]
        print("AI: ", end="", flush=True)
        async for chunk in client.chat_completion_stream(messages):
            print(chunk, end="", flush=True)
        print()  # New line

asyncio.run(streaming_example())
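The same async-for pattern can also accumulate chunks into the full reply (useful when you need the complete text afterwards). A minimal sketch, using a stand-in generator in place of `client.chat_completion_stream()` so it runs without an API key:

```python
import asyncio

async def fake_stream():
    # Stand-in for client.chat_completion_stream(); yields text chunks.
    for piece in ["Roses ", "are ", "red."]:
        yield piece

async def collect(stream):
    # Print chunks as they arrive while accumulating them into one string.
    parts = []
    async for chunk in stream:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    print()
    return "".join(parts)

full = asyncio.run(collect(fake_stream()))
```

Swap `fake_stream()` for the real stream call and you keep the live-typing effect plus the final text.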
4. Different Models
import asyncio
from oprouter import OpenRouterClient

async def model_example():
    # Use a specific model
    async with OpenRouterClient(model="anthropic/claude-3-haiku") as client:
        response = await client.chat_completion([
            {"role": "user", "content": "Hello from Claude!"}
        ])
        print(response.data['choices'][0]['message']['content'])

asyncio.run(model_example())
5. Error Handling
import asyncio
from oprouter import OpenRouterClient

async def safe_chat():
    async with OpenRouterClient() as client:
        response = await client.chat_completion([
            {"role": "user", "content": "Hello!"}
        ])
        if response.success:
            print("AI:", response.data['choices'][0]['message']['content'])
        else:
            print("Error:", response.error)

asyncio.run(safe_chat())
📖 CLI Commands
When using the interactive CLI (oprouter or python main.py), you get a chat session with these commands:

| Command | Description |
|---|---|
| /help | Show help message |
| /new [title] | Start new conversation |
| /list | List all conversations |
| /load <id> | Load conversation by ID |
| /save | Save current conversation |
| /export | Export conversation to text |
| /clear | Clear current conversation |
| /title <title> | Set conversation title |
| /model <model> | Change AI model |
| /models | List available models |
| /stats | Show conversation statistics |
| /quit | Exit application |
⚙️ Configuration
Environment Variables
# Required
OPENROUTER_API_KEY=your_api_key_here
# Optional - customize behavior
DEFAULT_MODEL=anthropic/claude-3-haiku # Which AI model to use
MAX_REQUESTS_PER_MINUTE=60 # Rate limiting
MAX_CONCURRENT_REQUESTS=5 # Parallel requests
MAX_RETRIES=5 # Retry failed requests
# Logging & UI
LOG_LEVEL=INFO # Logging detail (DEBUG, INFO, WARNING, ERROR)
ENABLE_LOGGING=true # Enable/disable logging
USE_EMOJIS=true # Enable/disable emojis (auto-detected on Windows)
# Storage
STORAGE_TYPE=file # Storage type: 'file' or 'memory'
CONVERSATIONS_DIR=conversations # Directory for saved conversations (file storage)
AUTO_SAVE_CONVERSATIONS=true # Auto-save conversations
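These can also be set from Python before the client is created. A minimal sketch, assuming OpRouter reads these variables from the environment at startup (as the names above suggest); note that environment values are always strings:

```python
import os

# Set configuration before creating the client; values must be strings.
os.environ["OPENROUTER_API_KEY"] = "your_api_key_here"
os.environ["DEFAULT_MODEL"] = "anthropic/claude-3-haiku"
os.environ["MAX_RETRIES"] = "5"
os.environ["STORAGE_TYPE"] = "memory"
```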
Using Different Models
# In your code
client = OpenRouterClient(model="anthropic/claude-3-sonnet")
# Or via environment
export DEFAULT_MODEL="openai/gpt-4"
Popular models:
- anthropic/claude-sonnet-4 - Ultimate coding agent
- x-ai/grok-4-fast:free - Multimodal speed demon, 2M context
- openai/gpt-5-codex - Engineering powerhouse, structured outputs
- deepseek/deepseek-chat-v3.1:free - Hybrid reasoning beast, free mode
- deepseek/deepseek-v3.2-exp - Experimental sparse attention for long contexts
🔄 Reliability Features
OpRouter automatically handles common issues:
- Rate Limits: Waits and retries when you hit API limits
- Network Issues: Retries failed requests with smart delays
- Concurrent Requests: Manages multiple requests safely
- Error Recovery: Graceful handling of API errors
You don't need to worry about these - OpRouter handles them automatically!
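For the curious, "retries with smart delays" usually means exponential backoff with jitter. This is an illustrative sketch of the general pattern, not OpRouter's actual internals:

```python
import asyncio
import random

async def with_retries(call, max_retries=5, base_delay=1.0):
    # Retry an async callable, doubling the wait after each failure
    # and adding jitter so many clients don't retry in lockstep.
    for attempt in range(max_retries):
        try:
            return await call()
        except Exception:
            if attempt == max_retries - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt + random.random() * 0.1)

# Example: a call that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}

async def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = asyncio.run(with_retries(flaky, base_delay=0.01))
```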
🎛️ Advanced Configuration
Windows Compatibility
OpRouter automatically detects Windows Command Prompt and disables emojis for better compatibility. If you're using Windows Terminal or want to force emoji usage:
export USE_EMOJIS=true
Storage Types
File Storage (default): Conversations saved to disk, persist between sessions
export STORAGE_TYPE=file
Memory Storage: Conversations only exist during the session, faster but temporary
export STORAGE_TYPE=memory
Future Storage Types: The system is designed to support additional storage types like:
- database - Store in SQL/NoSQL databases
- cloud - Store in cloud services (AWS S3, Google Cloud, etc.)
- encrypted - Encrypted local storage
Logging Control
Enable logging (default):
export ENABLE_LOGGING=true
export LOG_LEVEL=INFO
Disable logging for cleaner output:
export ENABLE_LOGGING=false
📊 Monitoring Usage
Track Token Usage
# After a conversation
print(f"Tokens used: {conversation.metadata.total_tokens}")
print(f"Estimated cost: ${conversation.metadata.total_cost:.4f}")
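Cost scales linearly with token counts. A minimal sketch of the arithmetic, using hypothetical per-million-token prices (check your model's page on OpenRouter for real rates):

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  prompt_price_per_m=0.25, completion_price_per_m=1.25):
    # Hypothetical USD prices per 1M tokens; real prices vary by model.
    return (prompt_tokens * prompt_price_per_m
            + completion_tokens * completion_price_per_m) / 1_000_000

cost = estimate_cost(1200, 300)
print(f"Estimated cost: ${cost:.6f}")
```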
Check API Health
async with OpenRouterClient() as client:
    if await client.health_check():
        print("✅ API is working")
    else:
        print("❌ API issues")
🆘 Troubleshooting
"API key not found"
export OPENROUTER_API_KEY="your_key_here"
"Rate limit exceeded" OpRouter automatically handles this - just wait a moment.
"Connection failed" Check your internet connection and try again.
Emojis not showing on Windows?
# Disable emojis
export USE_EMOJIS=false
# Or use Windows Terminal for better emoji support
Want to disable logging?
export ENABLE_LOGGING=false
Want conversations in memory only?
export STORAGE_TYPE=memory
Need help?
Check the logs in oprouter.log or open an issue on GitHub.
📄 License
MIT License - feel free to use in your projects!
🙏 Credits
- OpenRouter for the AI API
- Built with Python, aiohttp, and Rich