Production-ready Python library for multi-provider LLM orchestration with intelligent routing
JustLLMs
A production-ready Python library that simplifies working with multiple Large Language Model providers through intelligent routing, comprehensive analytics, and enterprise-grade features.
Why JustLLMs?
Managing multiple LLM providers is complex. You need to handle different APIs, optimize costs, monitor usage, and ensure reliability. JustLLMs solves these challenges by providing a unified interface that automatically routes requests to the best provider based on your criteria—whether that's cost, speed, or quality.
Installation
# Basic installation
pip install justllms
# With PDF export capabilities
pip install justllms[pdf]
# All optional dependencies (PDF export, Redis caching, advanced analytics)
pip install justllms[all]
Package size: 1.1MB | Lines of code: ~11K | Dependencies: Minimal production requirements
Quick Start
from justllms import JustLLM

# Initialize with your API keys
client = JustLLM({
    "providers": {
        "openai": {"api_key": "your-openai-key"},
        "google": {"api_key": "your-google-key"},
        "anthropic": {"api_key": "your-anthropic-key"}
    }
})

# Simple completion - automatically routes to the best provider
response = client.completion.create(
    messages=[{"role": "user", "content": "Explain quantum computing briefly"}]
)
print(response.content)
Core Features
Multi-Provider Support
Connect to all major LLM providers with a single, consistent interface:
- OpenAI (GPT-5, GPT-4, etc.) - yes, you can use GPT-5 :)
- Google (Gemini 2.5, Gemini 1.5 models)
- Anthropic (Claude 3.5, Claude 3 models)
- Azure OpenAI (with deployment mapping)
- xAI Grok, DeepSeek, and more
# Switch between providers seamlessly
client = JustLLM({
    "providers": {
        "openai": {"api_key": "your-key"},
        "google": {"api_key": "your-key"},
        "anthropic": {"api_key": "your-key"}
    }
})

# Same interface, different providers automatically chosen
response1 = client.completion.create(
    messages=[{"role": "user", "content": "Explain AI"}],
    provider="openai"  # Force a specific provider
)

response2 = client.completion.create(
    messages=[{"role": "user", "content": "Explain AI"}]
    # Auto-routes to the best provider based on your strategy
)
Intelligent Routing
The game-changing feature that sets JustLLMs apart. Instead of manually choosing models, let our intelligent routing engine automatically select the optimal provider and model for each request based on your priorities.
How It Works
Our routing engine analyzes each request and considers:
- Cost efficiency - Real-time pricing across all providers
- Performance metrics - Historical latency and success rates
- Model capabilities - Task complexity and model strengths
- Provider health - Current availability and response times
# Cost-optimized: always picks the cheapest option
client = JustLLM({
    "providers": {...},
    "routing": {"strategy": "cost"}
})

# Speed-optimized: prioritizes the fastest response times,
# routing to providers with the lowest latency in your region
client = JustLLM({
    "providers": {...},
    "routing": {"strategy": "latency"}
})

# Quality-optimized: uses the best models for complex tasks
client = JustLLM({
    "providers": {...},
    "routing": {"strategy": "quality"}
})

# Advanced: custom routing with business rules
client = JustLLM({
    "providers": {...},
    "routing": {
        "strategy": "hybrid",
        "cost_weight": 0.4,
        "quality_weight": 0.6,
        "max_cost_per_request": 0.05,
        "fallback_provider": "openai"
    }
})
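To make the hybrid weights concrete, here is an illustrative sketch of how weighted scoring could rank candidates; the function and the example scores are assumptions, not the library's actual routing internals:

```python
# Illustrative only: how a hybrid router might rank candidate models.
def hybrid_score(cost_score: float, quality_score: float,
                 cost_weight: float = 0.4, quality_weight: float = 0.6) -> float:
    """Combine normalized scores (0 = worst, 1 = best) into one ranking value."""
    return cost_weight * cost_score + quality_weight * quality_score

candidates = {
    "cheap-model": hybrid_score(cost_score=0.9, quality_score=0.6),     # 0.72
    "premium-model": hybrid_score(cost_score=0.4, quality_score=0.95),  # 0.73
}
best = max(candidates, key=candidates.get)  # "premium-model" wins narrowly
```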
Result: 60% cost reduction on average while maintaining quality, with automatic failover to backup providers.
Real-time Streaming
Full streaming support with proper token handling across all providers:
stream = client.completion.create(
    messages=[{"role": "user", "content": "Write a short story"}],
    stream=True
)

for chunk in stream:
    print(chunk.content, end="", flush=True)
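If you also need the complete text once streaming finishes, accumulate the chunks as they arrive; this uses only the `chunk.content` attribute shown above:

```python
# Stream and keep the full text: same API, accumulating as chunks arrive
stream = client.completion.create(
    messages=[{"role": "user", "content": "Write a short story"}],
    stream=True
)
parts = []
for chunk in stream:
    print(chunk.content, end="", flush=True)
    parts.append(chunk.content)
full_text = "".join(parts)
```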
Conversation Management
Built-in conversation state management with context preservation:
# Create a conversation bound to an existing client
# (import path assumed to match JustLLM; see the docs)
from justllms import Conversation

conversation = Conversation(client=client)

# Set system message
conversation.add_system_message("You are a helpful math tutor. Keep answers concise.")

# Turn 1
response = conversation.send("What is 15 + 25?")

# Turn 2 - context is automatically preserved
response = conversation.send("Now divide that by 8")

# Get the conversation history
history = conversation.get_history()
Conversation Features (a brief usage sketch follows this list):
- Auto-save: Persist conversations automatically
- Context management: Smart context window handling
- Export/Import: JSON, Markdown, and TXT formats
- Analytics: Track usage, costs, and performance per conversation
- Search: Find conversations by content or metadata
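A minimal sketch of the auto-save and export features above; the method names (`save`, `export`) are assumptions for illustration, not confirmed JustLLMs API:

```python
# Hypothetical calls - method names are assumptions, check the API reference
conversation.save()                                   # persist manually (auto-save can do this for you)
as_markdown = conversation.export(format="markdown")  # or "json" / "txt"
```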
Smart Caching
Intelligent response caching that dramatically reduces costs and improves response times:
client = JustLLM({
    "providers": {...},
    "caching": {
        "enabled": True,
        "ttl": 3600,  # 1 hour
        "max_size": 1000
    }
})

# First call - cache miss
response1 = client.completion.create(
    messages=[{"role": "user", "content": "What is AI?"}]
)  # ~2 seconds, full cost

# Second call - cache hit
response2 = client.completion.create(
    messages=[{"role": "user", "content": "What is AI?"}]
)  # ~50ms, no cost
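Exact-match response caches like this are typically keyed on a deterministic hash of the request payload, so identical requests map to the same entry. The sketch below illustrates the general idea; it is not JustLLMs' internal implementation:

```python
import hashlib
import json

def cache_key(messages, model=None, temperature=None):
    """Illustrative only: derive a stable key from the request payload.
    Identical requests hash to the same key and therefore hit the cache."""
    payload = json.dumps(
        {"messages": messages, "model": model, "temperature": temperature},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```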
Enterprise Analytics
Comprehensive usage tracking and cost analysis that gives you complete visibility into your LLM operations. Unlike other solutions that require external tools, JustLLMs provides built-in analytics that finance and engineering teams actually need.
What You Get
- Cross-provider metrics: Compare performance across providers
- Cost tracking: Detailed cost analysis per model/provider
- Performance insights: Latency, throughput, success rates
- Export capabilities: CSV, PDF with charts
- Time series analysis: Usage patterns over time
- Top models/providers: Usage and cost rankings
# Generate detailed reports
report = client.analytics.generate_report()
print(f"Total requests: {report.cross_provider_metrics.total_requests}")
print(f"Total cost: ${report.cross_provider_metrics.total_cost:.2f}")
print(f"Fastest provider: {report.cross_provider_metrics.fastest_provider}")
print(f"Cost per request: ${report.cross_provider_metrics.avg_cost_per_request:.4f}")
# Get granular insights
print(f"Cache hit rate: {report.performance_metrics.cache_hit_rate:.1f}%")
print(f"Token efficiency: {report.optimization_suggestions.token_savings:.1f}%")
# Export reports for finance teams
from justllms.analytics.reports import CSVExporter, PDFExporter
csv_exporter = CSVExporter()
csv_exporter.export(report, "monthly_llm_costs.csv")
pdf_exporter = PDFExporter(include_charts=True)
pdf_exporter.export(report, "executive_summary.pdf")
Business Impact: Teams typically save 40-70% on LLM costs within the first month by identifying usage patterns and optimizing model selection.
Retrieval-Augmented Generation (RAG)
Enterprise-ready document search and knowledge retrieval that seamlessly integrates with your LLM workflows. Transform your documents into a searchable knowledge base that enhances LLM responses with contextual information.
RAG Features
- Multiple Vector Stores - Support for Pinecone (cloud) and ChromaDB (local)
- Built-in Embeddings - Uses vector store native embeddings (no external providers needed)
- PDF Document Processing - Automatic text extraction, chunking, and metadata extraction
- Intelligent Chunking - Configurable chunk size, overlap, and splitting strategies
- Semantic Search - Find relevant documents using natural language queries
- One-Step RAG Completion - Retrieve and generate responses in a single API call
from justllms import Client

# Configure with Pinecone (uses built-in llama-text-embed-v2)
pinecone_client = Client({
    "providers": {
        "google": {"api_key": "your-gemini-key"}
    },
    "retrieval": {
        "vector_store": {
            "type": "pinecone",
            "api_key": "your-pinecone-key",
            "environment": "your-pinecone-host",
            "index_name": "your-index-name"
        },
        "chunk_size": 1000,
        "chunk_overlap": 200,
        "default_k": 5
    }
})

# Configure with ChromaDB (uses built-in all-MiniLM-L6-v2)
chromadb_client = Client({
    "providers": {
        "google": {"api_key": "your-gemini-key"}
    },
    "retrieval": {
        "vector_store": {"type": "chroma"},
        "chunk_size": 1000,
        "chunk_overlap": 200,
        "default_k": 5
    }
})

# Create a knowledge collection (examples below use the Pinecone client)
success = pinecone_client.retrieval.create_collection("company_docs")

# Ingest PDF documents
result = pinecone_client.retrieval.ingest_documents([
    "strategic_plan.pdf",
    "technical_specs.pdf"
], "company_docs")

# RAG completion - retrieve relevant docs and generate a response
response = pinecone_client.completion.retrieve_and_complete(
    query="What are our key strategic recommendations?",
    collection="company_docs",
    model="gemini-2.5-flash",
    k=3,  # Retrieve the top 3 relevant documents
    include_metadata=True,
    temperature=0.7
)

print(f"Answer: {response.content}")
print(f"Retrieved {len(response.retrieved_documents)} relevant documents")
Knowledge Enhancement: Turn your documents into an AI-accessible knowledge base, reducing hallucinations and providing contextually accurate responses.
Business Rule Validation
Enterprise-grade content filtering and compliance built for regulated industries. Ensure your LLM applications meet security, privacy, and business requirements without custom development.
Compliance Features
- PII Detection - Automatically detect and handle social security numbers, credit cards, phone numbers
- Content Filtering - Block inappropriate content, profanity, or sensitive topics
- Custom Business Rules - Define your own validation logic with regex patterns or custom functions
- Audit Trail - Complete logging of all validation actions for compliance reporting
from justllms.validation import ValidationConfig, BusinessRule, RuleType, ValidationAction

client = JustLLM({
    "providers": {...},
    "validation": ValidationConfig(
        enabled=True,
        business_rules=[
            # Block sensitive data patterns
            BusinessRule(
                name="no_ssn",
                type=RuleType.PATTERNS,
                pattern=r"\b\d{3}-\d{2}-\d{4}\b",
                action=ValidationAction.BLOCK,
                message="SSN detected - request blocked for privacy"
            ),
            # Content filtering
            BusinessRule(
                name="professional_content",
                type=RuleType.CONTENT_FILTER,
                categories=["hate", "violence", "adult"],
                action=ValidationAction.SANITIZE
            ),
            # Custom business logic
            BusinessRule(
                name="company_policy",
                type=RuleType.CUSTOM,
                validator=lambda content: "competitor" not in content.lower(),
                action=ValidationAction.WARN
            )
        ],
        # Compliance presets
        compliance_mode="GDPR",  # or "HIPAA", "PCI_DSS"
        audit_logging=True
    )
})

# All requests are automatically validated
response = client.completion.create(
    messages=[{"role": "user", "content": "My SSN is 123-45-6789"}]
)
# This request would be blocked and logged for compliance
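The PII bullet above also mentions credit card numbers. Here is a hypothetical extra rule in the same style; the regex matches the common 16-digit layout and is illustrative only, not a Luhn check:

```python
# Hypothetical additional rule reusing the PATTERNS type shown above
credit_card_rule = BusinessRule(
    name="no_credit_card",
    type=RuleType.PATTERNS,
    pattern=r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b",
    action=ValidationAction.BLOCK,
    message="Credit card number detected - request blocked"
)
```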
Regulatory Compliance: Built-in support for major compliance frameworks saves months of custom security development.
Advanced Usage
Async Operations
Full async/await support for high-performance applications:
import asyncio

async def process_batch(prompts):
    # Fire all requests concurrently and collect the responses in order
    tasks = [
        client.completion.acreate(
            messages=[{"role": "user", "content": prompt}]
        )
        for prompt in prompts
    ]
    return await asyncio.gather(*tasks)

responses = asyncio.run(process_batch(["Explain AI", "Summarize RAG"]))
Error Handling & Reliability
Automatic retries with exponential backoff and fallback providers ensure high availability; with a backoff_factor of 2, each retry typically waits about twice as long as the previous attempt:
client = JustLLM({
    "providers": {...},
    "retry": {
        "max_attempts": 3,
        "backoff_factor": 2,
        "retry_on": ["timeout", "rate_limit", "server_error"]
    }
})

# Automatically retries on failures
try:
    response = client.completion.create(
        messages=[{"role": "user", "content": "Hello"}],
        provider="invalid-provider"  # Will fail and exhaust retries
    )
except Exception as e:
    print(f"All retries failed: {e}")
Configuration Management
Flexible configuration with environment variable support:
# Environment-based config
import os

client = JustLLM({
    "providers": {
        "openai": {"api_key": os.getenv("OPENAI_API_KEY")},
        "azure_openai": {
            "api_key": os.getenv("AZURE_OPENAI_KEY"),
            "resource_name": os.getenv("AZURE_RESOURCE_NAME"),
            "api_version": "2024-12-01-preview"
        }
    }
})

# File-based config
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)
client = JustLLM(config)
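For reference, a `config.yaml` for the snippet above could mirror the configuration dictionaries used throughout this page; the keys below are taken from earlier examples and trimmed to essentials:

```yaml
# Example config.yaml mirroring the config dicts shown above
providers:
  openai:
    api_key: "your-openai-key"
  google:
    api_key: "your-google-key"
routing:
  strategy: cost
caching:
  enabled: true
  ttl: 3600
```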
🏆 Comparison with Alternatives
| Feature | JustLLMs | LangChain | LiteLLM | OpenAI SDK | Haystack |
|---|---|---|---|---|---|
| Package Size | 1.1MB | ~50MB | ~5MB | ~1MB | ~20MB |
| Setup Complexity | Simple config | Complex chains | Medium | Simple | Complex |
| Multi-Provider | ✅ 6+ providers | ✅ Many integrations | ✅ 100+ providers | ❌ OpenAI only | ✅ Limited LLMs |
| Intelligent Routing | ✅ Cost/speed/quality | ❌ Manual only | ⚠️ Basic routing | ❌ None | ❌ Pipeline-based |
| Built-in Analytics | ✅ Enterprise-grade | ❌ External tools needed | ⚠️ Basic metrics | ❌ None | ⚠️ Pipeline metrics |
| Conversation Management | ✅ Full lifecycle | ⚠️ Memory components | ❌ None | ❌ Manual handling | ✅ Dialog systems |
| RAG Support | ✅ Built-in vector stores | ✅ Multiple integrations | ❌ None | ❌ None | ✅ Document retrieval |
| Business Rules | ✅ Content validation | ❌ Custom implementation | ❌ None | ❌ None | ⚠️ Custom filters |
| Cost Optimization | ✅ Automatic routing | ❌ Manual optimization | ⚠️ Basic cost tracking | ❌ None | ❌ None |
| Streaming Support | ✅ All providers | ✅ Provider-dependent | ✅ Most providers | ✅ OpenAI only | ⚠️ Limited |
| Production Ready | ✅ Out of the box | ⚠️ Requires setup | ✅ Minimal setup | ⚠️ Basic features | ✅ Complex setup |
| Caching | ✅ Multi-backend | ⚠️ Custom implementation | ✅ Basic caching | ❌ None | ✅ Document stores |
Enterprise Configuration
For production deployments with advanced features:
enterprise_config = {
    "providers": {
        "azure_openai": {
            "api_key": os.getenv("AZURE_OPENAI_KEY"),
            "resource_name": "my-enterprise-resource",
            "deployment_mapping": {
                "gpt-4": "my-gpt4-deployment",
                "gpt-3.5-turbo": "my-gpt35-deployment"
            }
        },
        "anthropic": {"api_key": os.getenv("ANTHROPIC_KEY")},
        "google": {"api_key": os.getenv("GOOGLE_KEY")}
    },
    "routing": {
        "strategy": "cost",
        "fallback_provider": "azure_openai",
        "fallback_model": "gpt-3.5-turbo"
    },
    "validation": {
        "enabled": True,
        "business_rules": [
            # PII detection, content filtering, compliance rules
        ]
    },
    "analytics": {
        "enabled": True,
        "track_usage": True,
        "track_performance": True
    },
    "caching": {
        "enabled": True,
        "backend": "redis",
        "ttl": 3600
    },
    "conversations": {
        "backend": "disk",
        "auto_save": True,
        "auto_title": True,
        "max_context_tokens": 8000
    }
}

client = JustLLM(enterprise_config)
Contributing
We welcome contributions! Whether it's adding new providers, improving routing strategies, or enhancing analytics capabilities.
# Development setup
git clone https://github.com/your-org/justllms.git
cd justllms
pip install -e ".[dev]"
pytest
License
MIT License - see LICENSE file for details.
Support
- Documentation: Comprehensive guides and API reference
- Examples: Ready-to-run code samples in the examples/ directory
- Issues: Report bugs and request features via GitHub Issues
- Discussions: Community support and ideas via GitHub Discussions
JustLLMs - Simple to start, powerful to scale, intelligent by design.
Download files
Download the file for your platform.
File details
Details for the file justllms-1.3.0.tar.gz.
File metadata
- Download URL: justllms-1.3.0.tar.gz
- Upload date:
- Size: 102.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.11
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 1c2546c394bcb347eb52c9f3f6898860ea721b26d699963316777d5c0188d83b |
| MD5 | b0000c286f2d08180765ceccad997711 |
| BLAKE2b-256 | d02ed44cbd4dd72213484e340462da96ada8fd145f79bf921978a3ac31f5794b |
File details
Details for the file justllms-1.3.0-py3-none-any.whl.
File metadata
- Download URL: justllms-1.3.0-py3-none-any.whl
- Upload date:
- Size: 118.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.11
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | d87d8ac702063418c287c0c930b5a28444122a64a87e6aee61f59f236dd6980c |
| MD5 | eb23b5f09ed46af5f5c1da21806cbebd |
| BLAKE2b-256 | 1e3ca510f13633fc256070304614b3f93f64b72c5f7626093953d80f2a98fd68 |