Claude Model Selector
Intelligent model selection for optimal cost-effectiveness with Anthropic's Claude AI
Automatically choose the most cost-effective Claude model (Opus, Sonnet, or Haiku) for each task based on intelligent complexity analysis. Save 70-95% on AI costs while maintaining quality.
Sponsored by AeonBridge Co.
🎯 Features
- Automatic Complexity Analysis - Analyzes task descriptions and scores complexity (0-100)
- Intelligent Model Selection - Chooses optimal model based on complexity
- Cost Optimization - Save 70-95% compared to using premium models for everything
- Context-Aware - Considers additional context for better accuracy
- Confidence Scoring - Provides confidence levels for recommendations
- Batch Processing - Analyze multiple tasks efficiently
- CLI & API - Both command-line and programmatic interfaces
- Customizable - Easily configure thresholds and rules
- Zero Dependencies - Pure Python, no external dependencies required
📊 Model Selection Strategy
| Model | Complexity Score | Speed | Cost (Input/Output per MTok) | Best For |
|---|---|---|---|---|
| Haiku | 0-30 | Fastest | $0.80 / $4.00 | Simple, quick tasks |
| Sonnet | 31-70 | Balanced | $3.00 / $15.00 | Standard, reliable tasks |
| Opus | 71-100 | Slowest | $15.00 / $75.00 | Complex, critical tasks |
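As a back-of-the-envelope illustration of how the per-MTok prices in the table translate into per-request cost (a sketch of the arithmetic only, not the package's internal code):

```python
# Prices in USD per million tokens, taken from the table above.
PRICES = {
    "haiku": (0.80, 4.00),
    "sonnet": (3.00, 15.00),
    "opus": (15.00, 75.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from per-MTok prices."""
    input_price, output_price = PRICES[model]
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# A 10k-input / 2k-output request costs almost 19x more on Opus than on Haiku.
print(f"haiku: ${estimate_cost('haiku', 10_000, 2_000):.4f}")  # $0.0160
print(f"opus:  ${estimate_cost('opus', 10_000, 2_000):.4f}")   # $0.3000
```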
🚀 Quick Start
Installation
# Clone the repository
git clone https://github.com/aeonbridge/claude-model-selector.git
cd claude-model-selector
# Install the package
pip install -e .
# Or install from PyPI (when published)
pip install claude-model-selector
Basic Usage
CLI
# Analyze a single task
claude-model-selector analyze "Design a scalable microservices architecture"
# Compare models with token estimate
claude-model-selector compare "Process 100 videos" --tokens 50000
# Batch analyze tasks
printf "Task 1\nTask 2\nTask 3\n" > tasks.txt
claude-model-selector batch tasks.txt
# Show model information
claude-model-selector info
Python API
from claude_model_selector import ClaudeModelSelector, quick_select
# Quick selection (one-liner)
model = quick_select("List all Python files")
# Returns: 'haiku'
# Detailed analysis
selector = ClaudeModelSelector()
analysis = selector.analyze_task("Design a scalable architecture")
print(f"Model: {analysis.recommended_model.upper()}")
print(f"Complexity: {analysis.complexity_score:.1f}/100")
print(f"Confidence: {analysis.confidence:.0%}")
print(f"Cost: ${analysis.estimated_cost:.6f}")
print(f"Reasoning: {analysis.reasoning}")
💰 Cost Savings Example
Scenario: Processing a batch of 10 mixed tasks
from claude_model_selector import ClaudeModelSelector
selector = ClaudeModelSelector()
tasks = [
    "List all files in directory",
    "Analyze code for security vulnerabilities",
    "Design comprehensive system architecture",
    "Convert JSON to CSV",
    "Plan migration strategy",
]

total_cost = 0
for task in tasks:
    analysis = selector.analyze_task(task)
    total_cost += analysis.estimated_cost
    print(f"{task}: {analysis.recommended_model.upper()}")

print(f"\nOptimized cost: ${total_cost:.6f}")
# 4.5x is this example's rough average markup for running every task on Opus
print(f"Using Opus for all: ${total_cost * 4.5:.6f}")
print(f"Savings: {((1 - total_cost / (total_cost * 4.5)) * 100):.1f}%")
Output:
List all files in directory: HAIKU
Analyze code for security vulnerabilities: SONNET
Design comprehensive system architecture: OPUS
Convert JSON to CSV: HAIKU
Plan migration strategy: OPUS
Optimized cost: $0.032
Using Opus for all: $0.144
Savings: 77.8%
📖 Documentation
How It Works
The selector uses a multi-factor algorithm to calculate complexity:
1. **Keyword Analysis**
   - Simple indicators (`list`, `extract`, `quick`) → lower complexity
   - Standard indicators (`analyze`, `implement`, `create`) → medium complexity
   - Complex indicators (`design`, `architect`, `plan`) → higher complexity
2. **Pattern Matching**
   - Planning tasks → +40 points
   - Complex coding → +35 points
   - Research/analysis → +30 points
   - Simple operations → -30 points
3. **Context Factors**
   - Task description length
   - Additional context provided
   - Multi-step indicators
   - Uncertainty markers
4. **Final Score (0-100) → Model Selection**
   - 0-30: Haiku (fast & cheap)
   - 31-70: Sonnet (balanced)
   - 71-100: Opus (powerful)
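The scoring pipeline above can be sketched roughly as follows. This is a simplified toy version of the described heuristics; the baseline, weights, and keyword sets here are illustrative assumptions, not the package's actual values:

```python
# Illustrative keyword sets (the real package's lists are larger).
SIMPLE = {"list", "extract", "quick"}
STANDARD = {"analyze", "implement", "create"}
COMPLEX = {"design", "architect", "plan"}

def score_task(task: str) -> float:
    """Toy complexity score on a 0-100 scale, following the steps above."""
    words = set(task.lower().split())
    score = 50.0  # neutral baseline (assumption)
    if words & COMPLEX:
        score += 40  # planning/design tasks
    elif words & STANDARD:
        score += 10  # standard tasks
    if words & SIMPLE:
        score -= 30  # simple operations
    score += min(len(task) / 20, 10)  # longer descriptions trend harder
    return max(0.0, min(100.0, score))

def pick_model(score: float) -> str:
    """Map the final score onto the thresholds above."""
    if score <= 30:
        return "haiku"
    if score <= 70:
        return "sonnet"
    return "opus"

print(pick_model(score_task("list files in a directory")))        # haiku
print(pick_model(score_task("design a scalable architecture plan")))  # opus
```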
CLI Commands
analyze - Analyze a Task
# Basic analysis
claude-model-selector analyze "Your task description"
# With additional context
claude-model-selector analyze "Optimize this code" --context-file code.py
# JSON output
claude-model-selector analyze "Task" --json
# Save results
claude-model-selector analyze "Task" --output analysis.json
# Verbose mode
claude-model-selector analyze "Task" --verbose
compare - Compare Models
# Compare all models for a task
claude-model-selector compare "Implement authentication"
# With custom token estimate
claude-model-selector compare "Large batch job" --tokens 100000
batch - Batch Processing
# Analyze tasks from file
claude-model-selector batch tasks.txt
# With verbose output
claude-model-selector batch tasks.txt --verbose
# Save results
claude-model-selector batch tasks.txt --output results.json
info - Model Information
# Show all models
claude-model-selector info
# Specific model
claude-model-selector info --model opus
Python API Reference
quick_select(task: str) -> str
Fast model selection without full analysis.
from claude_model_selector import quick_select
model = quick_select("Design scalable architecture")
# Returns: 'opus'
ClaudeModelSelector
Main selector class for detailed analysis.
from claude_model_selector import ClaudeModelSelector
selector = ClaudeModelSelector()
# Analyze task
analysis = selector.analyze_task(
    task="Your task description",
    context="Optional additional context"
)
# Access results
print(analysis.recommended_model)  # 'haiku', 'sonnet', or 'opus'
print(analysis.complexity_score)   # 0-100
print(analysis.confidence)         # 0-1
print(analysis.estimated_cost)     # USD
print(analysis.reasoning)          # Explanation
# Compare models
comparisons = selector.compare_models(
    task="Your task",
    estimated_tokens=50000
)
# Get model info
info = selector.get_model_info('opus')
⚙️ Configuration
Customize behavior by creating config.json:
{
  "thresholds": {
    "haiku_max": 30,
    "sonnet_max": 70
  },
  "default_model": "sonnet",
  "cost_optimization": true,
  "custom_rules": {
    "force_opus_keywords": ["critical", "production", "security"],
    "force_haiku_keywords": ["trivial", "simple", "quick"]
  }
}
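Assuming the `custom_rules` keywords override the score-based choice (a plausible reading of the config above, not confirmed package behavior), the override logic might look like:

```python
def apply_rules(task: str, config: dict, scored_model: str) -> str:
    """Apply force-keyword overrides before falling back to the scored model."""
    text = task.lower()
    rules = config.get("custom_rules", {})
    if any(kw in text for kw in rules.get("force_opus_keywords", [])):
        return "opus"
    if any(kw in text for kw in rules.get("force_haiku_keywords", [])):
        return "haiku"
    return scored_model

# Same rule values as the config.json example above.
config = {
    "custom_rules": {
        "force_opus_keywords": ["critical", "production", "security"],
        "force_haiku_keywords": ["trivial", "simple", "quick"],
    }
}
print(apply_rules("Fix a critical production bug", config, "sonnet"))  # opus
print(apply_rules("A simple rename", config, "sonnet"))                # haiku
```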
Load custom configuration:
from pathlib import Path
from claude_model_selector import ClaudeModelSelector
selector = ClaudeModelSelector(config_path=Path('config.json'))
🎓 Examples
Example 1: Integration with API Calls
import anthropic
from claude_model_selector import quick_select
def smart_claude_call(task, content):
    """Call Claude with optimal model selection"""
    model_name = quick_select(task)
    # Map to actual model IDs
    model_map = {
        'haiku': 'claude-3-haiku-20240307',
        'sonnet': 'claude-3-5-sonnet-20241022',
        'opus': 'claude-3-opus-20240229'
    }
    client = anthropic.Anthropic(api_key="your-key")
    response = client.messages.create(
        model=model_map[model_name],
        max_tokens=1024,
        messages=[{"role": "user", "content": content}]
    )
    return response

# Use it
result = smart_claude_call(
    task="Analyze this code for bugs",
    content="def foo(): return bar"
)
Example 2: Cost Tracking
from claude_model_selector import ClaudeModelSelector
selector = ClaudeModelSelector()
tasks = ["Task 1", "Task 2", "Task 3"]
total_cost = 0
for task in tasks:
    analysis = selector.analyze_task(task)
    total_cost += analysis.estimated_cost

print(f"Estimated total cost: ${total_cost:.6f}")
Example 3: Confidence-Based Decisions
from claude_model_selector import ClaudeModelSelector
selector = ClaudeModelSelector()
analysis = selector.analyze_task("Ambiguous task")
if analysis.confidence < 0.7:
    print(f"⚠️ Low confidence ({analysis.confidence:.0%})")
    print("Consider: a more specific task description")
    print(f"Reasoning: {analysis.reasoning}")
else:
    print(f"✓ Recommended: {analysis.recommended_model.upper()}")
🧪 Testing
# Run tests
python -m pytest tests/
# With coverage
python -m pytest tests/ --cov=claude_model_selector
# Run examples
python examples/basic_usage.py
python examples/batch_processing.py
🤝 Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
Development Setup
# Clone the repository
git clone https://github.com/aeonbridge/claude-model-selector.git
cd claude-model-selector
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install in development mode
pip install -e ".[dev]"
# Run tests
pytest
# Format code
black src/ tests/
📝 License
MIT License - see LICENSE file for details.
Copyright (c) 2025 AeonBridge Co.
🙏 Acknowledgments
- Built with ❤️ by AeonBridge Co.
- Inspired by the need for cost-effective AI usage
- Thanks to Anthropic for creating Claude
📞 Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: support@aeonbridge.com
🗺️ Roadmap
- PyPI package publication
- Integration examples for popular frameworks
- Web UI for visual analysis
- Advanced ML-based complexity prediction
- Support for other AI model providers
- Cost tracking and analytics dashboard
- Team collaboration features
- CI/CD integration templates
⭐ Star History
If you find this project useful, please consider giving it a star on GitHub!
Made with ❤️ by AeonBridge Co.
File details
Details for the file claude_model_selector-1.0.0.tar.gz.
File metadata
- Download URL: claude_model_selector-1.0.0.tar.gz
- Size: 35.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `bb1b25aec92e41e441a3db7f8268efbd26c33cbadba39000826e558b4b0676ca` |
| MD5 | `bff0c4d88f8d30751cb9ebc917b7b551` |
| BLAKE2b-256 | `744941d2b7c57e7d2a558b9aaf0e0b732ef570335a302b53568443db4f54dc8c` |
Provenance
The following attestation bundles were made for claude_model_selector-1.0.0.tar.gz:
Publisher: publish.yml on aeonbridge/ab-claude-model-selector
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: claude_model_selector-1.0.0.tar.gz
- Subject digest: bb1b25aec92e41e441a3db7f8268efbd26c33cbadba39000826e558b4b0676ca
- Sigstore transparency entry: 748001544
- Permalink: aeonbridge/ab-claude-model-selector@fd2bb49c76ad7069b94afbc47e8c1ce75bf27d2b
- Branch / Tag: refs/tags/v1.0.0
- Owner: https://github.com/aeonbridge
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@fd2bb49c76ad7069b94afbc47e8c1ce75bf27d2b
- Trigger Event: release
File details
Details for the file claude_model_selector-1.0.0-py3-none-any.whl.
File metadata
- Download URL: claude_model_selector-1.0.0-py3-none-any.whl
- Size: 16.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `b084c42a02b36ad7a887d12b76bd8a13eb31dac10f67fed121fbf01b67b64815` |
| MD5 | `b92e22610778b230993a1092f2c3289c` |
| BLAKE2b-256 | `14f79bce75767ac5cbf2a07a822aacb903390531327b179c5a0d5113de494d52` |
Provenance
The following attestation bundles were made for claude_model_selector-1.0.0-py3-none-any.whl:
Publisher: publish.yml on aeonbridge/ab-claude-model-selector
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: claude_model_selector-1.0.0-py3-none-any.whl
- Subject digest: b084c42a02b36ad7a887d12b76bd8a13eb31dac10f67fed121fbf01b67b64815
- Sigstore transparency entry: 748001546
- Permalink: aeonbridge/ab-claude-model-selector@fd2bb49c76ad7069b94afbc47e8c1ce75bf27d2b
- Branch / Tag: refs/tags/v1.0.0
- Owner: https://github.com/aeonbridge
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@fd2bb49c76ad7069b94afbc47e8c1ce75bf27d2b
- Trigger Event: release