JustLLMs
A production-ready Python library focused on intelligent LLM routing and multi-provider management.
Why JustLLMs?
Managing multiple LLM providers is complex. You need to handle different APIs, optimize costs, and ensure reliability. JustLLMs solves these challenges by providing a unified interface that automatically routes requests to the best provider based on your criteria—whether that's cost, speed, or quality.
Key Differentiator: advanced cluster-based routing that uses semantic embeddings to intelligently match each query to the optimal model, based on research from AvengersPro.
Installation
pip install justllms
Package size: minimal | Lines of code: ~7K | Dependencies: a small, production-focused set
Quick Start
from justllms import JustLLM
# Initialize with your API keys
client = JustLLM({
    "providers": {
        "openai": {"api_key": "your-openai-key"},
        "google": {"api_key": "your-google-key"},
        "anthropic": {"api_key": "your-anthropic-key"}
    }
})
# Simple completion - automatically routes to best provider
response = client.completion.create(
    messages=[{"role": "user", "content": "Explain quantum computing briefly"}]
)
print(response.content)
Core Features
Multi-Provider Support
Connect to all major LLM providers with a single, consistent interface:
- OpenAI (GPT-5, GPT-4, etc.)
- Google (Gemini 2.5, Gemini 1.5 models)
- Anthropic (Claude 4, Claude 3.5 models)
- Azure OpenAI (with deployment mapping)
- xAI Grok, DeepSeek, and more
# Switch between providers seamlessly
client = JustLLM({
    "providers": {
        "openai": {"api_key": "your-key"},
        "google": {"api_key": "your-key"},
        "anthropic": {"api_key": "your-key"}
    }
})
# Same interface, different providers automatically chosen
response1 = client.completion.create(
    messages=[{"role": "user", "content": "Explain AI"}],
    provider="openai"  # Force a specific provider
)
response2 = client.completion.create(
    messages=[{"role": "user", "content": "Explain AI"}]
    # Auto-routes to the best provider based on your strategy
)
Intelligent Routing
The game-changing feature that sets JustLLMs apart. Instead of manually choosing models, let our intelligent routing engine automatically select the optimal provider and model for each request based on your priorities.
Available Strategies
🆕 Cluster-Based Routing: AI-Powered Query Analysis
Our most advanced routing strategy uses machine learning to analyze query semantics and route each request to the optimal model based on similarity to training data. It achieves a +7% accuracy improvement and a 27% cost reduction compared to single-model approaches.
# Cluster-based routing (recommended for production)
client = JustLLM({
    "providers": {...},
    "routing": {"strategy": "cluster"}
})
Based on the AvengersPro framework from the paper "Beyond GPT-5: Making LLMs Cheaper and Better via Performance–Efficiency Optimized Routing".
Traditional Routing Strategies
# Cost-optimized: Always picks the cheapest option
client = JustLLM({
    "providers": {...},
    "routing": {"strategy": "cost"}
})
# Speed-optimized: Prioritizes fastest response times
client = JustLLM({
    "providers": {...},
    "routing": {"strategy": "latency"}
})
# Quality-optimized: Uses the best models for complex tasks
client = JustLLM({
    "providers": {...},
    "routing": {"strategy": "quality"}
})
# Task-based: Automatically detects query type and routes accordingly
client = JustLLM({
    "providers": {...},
    "routing": {"strategy": "task"}
})
How Cluster Routing Works
1. Query Analysis: your request is embedded using Qwen3-Embedding-0.6B
2. Cluster Matching: the embedding is matched to the most similar cluster from pre-trained data
3. Model Selection: the request is routed to the best-performing model for that cluster
4. Fallback: falls back to quality-based routing if needed
Result: Up to 60% cost reduction while improving accuracy, with automatic failover to backup providers.
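For intuition, here is a minimal sketch of the nearest-centroid idea behind steps 2–4. It is not the library's internal implementation: the centroid values and per-cluster model table are hypothetical placeholders (in JustLLMs, the embedding in step 1 comes from Qwen3-Embedding-0.6B).
# Illustrative nearest-centroid routing (sketch only; not JustLLMs internals)
import numpy as np
rng = np.random.default_rng(0)
# Hypothetical pre-trained cluster centroids and per-cluster best models
centroids = rng.normal(size=(8, 1024))
best_model = {0: "gpt-4", 1: "claude-3-5-sonnet", 2: "gemini-1.5-pro"}
def route(query_embedding: np.ndarray) -> str:
    # Step 2: cosine similarity between the query embedding and every centroid
    sims = centroids @ query_embedding / (
        np.linalg.norm(centroids, axis=1) * np.linalg.norm(query_embedding)
    )
    cluster = int(np.argmax(sims))
    # Steps 3-4: pick that cluster's winner, or fall back to a default model
    return best_model.get(cluster, "gpt-3.5-turbo")
print(route(rng.normal(size=1024)))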
Cost Estimation
Get cost estimates before making requests:
# Estimate costs for different strategies
cost_estimate = client.estimate_cost(
    messages=[{"role": "user", "content": "Explain AI"}],
    strategy="cluster"
)
print(f"Estimated cost: ${cost_estimate.total_cost:.4f}")
print(f"Selected model: {cost_estimate.selected_model}")
print(f"Provider: {cost_estimate.provider}")
Configuration Management
Flexible configuration with environment variable support:
# Environment-based config
import os
client = JustLLM({
    "providers": {
        "openai": {"api_key": os.getenv("OPENAI_API_KEY")},
        "azure_openai": {
            "api_key": os.getenv("AZURE_OPENAI_KEY"),
            "endpoint": os.getenv("AZURE_OPENAI_ENDPOINT"),
            "resource_name": os.getenv("AZURE_RESOURCE_NAME"),
            "api_version": "2024-12-01-preview"
        }
    }
})
# File-based config
import yaml
with open("config.yaml") as f:
config = yaml.safe_load(f)
client = JustLLM(config)
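A reference config.yaml isn't shown here, but since JustLLM(config) accepts the same dict used throughout, a file like the following should map onto it; the key names simply mirror the Python examples above:
# Illustrative config.yaml mirroring the dict examples in this README
providers:
  openai:
    api_key: "your-openai-key"   # inject real keys before loading
  anthropic:
    api_key: "your-anthropic-key"
routing:
  strategy: cluster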
🏆 Comparison with Alternatives
| Feature | JustLLMs | LangChain | LiteLLM | OpenAI SDK |
|---|---|---|---|---|
| Package Size | Minimal | ~50MB | ~5MB | ~1MB |
| Setup Complexity | Simple config | Complex chains | Medium | Simple |
| Multi-Provider | ✅ 6+ providers | ✅ Many integrations | ✅ 100+ providers | ❌ OpenAI only |
| Intelligent Routing | ✅ Cost/speed/quality/cluster | ❌ Manual only | ⚠️ Basic routing | ❌ None |
| Cost Optimization | ✅ Automatic routing | ❌ Manual optimization | ⚠️ Basic cost tracking | ❌ None |
| Production Ready | ✅ Out of the box | ⚠️ Requires setup | ✅ Minimal setup | ⚠️ Basic features |
Production Configuration
For production deployments:
import os

production_config = {
    "providers": {
        "azure_openai": {
            "api_key": os.getenv("AZURE_OPENAI_KEY"),
            "endpoint": os.getenv("AZURE_OPENAI_ENDPOINT"),
            "resource_name": "my-enterprise-resource",
            "deployment_mapping": {
                "gpt-4": "my-gpt4-deployment",
                "gpt-3.5-turbo": "my-gpt35-deployment"
            }
        },
        "anthropic": {"api_key": os.getenv("ANTHROPIC_KEY")},
        "google": {"api_key": os.getenv("GOOGLE_KEY")}
    },
    "routing": {
        "strategy": "cluster",  # Use intelligent cluster-based routing
        "fallback_provider": "azure_openai",
        "fallback_model": "gpt-3.5-turbo"
    }
}
client = JustLLM(production_config)
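With fallback_provider and fallback_model set, failed routing should degrade to the Azure deployment above. As an extra safety net, a generic application-level guard (a sketch; the library's specific exception types aren't documented here) can still catch a total failure:
# Defensive wrapper (sketch): JustLLMs handles provider fallback internally
# per the config above; this only catches the case where everything fails.
try:
    response = client.completion.create(
        messages=[{"role": "user", "content": "Summarize this incident report"}]
    )
    print(response.content)
except Exception as exc:  # narrow to the library's error classes if exposed
    print(f"All providers (including fallback) failed: {exc}")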
Key Differentiators
- Cluster-Based Routing: AI-powered query analysis for optimal model selection
- Production Simplicity: Minimal dependencies, focused feature set
- Cost Optimization: Automatic routing to reduce costs by up to 60%
- Unified Interface: Same API across all providers
- Reliability: Built-in fallback and error handling
License
MIT License - see LICENSE file for details.
File details
Details for the file justllms-2.0.0.tar.gz.
File metadata
- Download URL: justllms-2.0.0.tar.gz
- Upload date:
- Size: 105.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f16b4c8d78d1625874c8617cacf1ac6ea61fcbe43375a06f378ae991cee603b5 |
| MD5 | d3c7ec9929893ecd7b656ea1bf4f112f |
| BLAKE2b-256 | 5b5582a337ef4dfc591ba07624f52f9d3a5630cbfcabe879e89d7d67934a440f |
File details
Details for the file justllms-2.0.0-py3-none-any.whl.
File metadata
- Download URL: justllms-2.0.0-py3-none-any.whl
- Upload date:
- Size: 108.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9b07dd998d457e15a34d1871e28026d60a122224352a5f20e91a87e80f03a368 |
| MD5 | f9f25a3fb237a262f30f528a9d7a71a3 |
| BLAKE2b-256 | 37e78d04498e36fc83fb123cc376eaa35fb5afe778e011ac925e3d3dff2fcab8 |