# Nous LLM

Intelligent No Frills LLM Router - A unified Python interface for multiple Large Language Model providers.

## Overview
Nous LLM provides a clean, unified interface for working with multiple Large Language Model providers, including OpenAI, Anthropic Claude, Google Gemini, xAI Grok, and OpenRouter. It is built with modern Python practices, full type safety, and production-ready features.
## Key Features

- **Unified Interface**: Single API for multiple LLM providers
- **Async Support**: Both synchronous and asynchronous interfaces
- **Type Safety**: Full typing with Pydantic v2 validation
- **Provider Flexibility**: Easy switching between providers and models
- **Serverless Ready**: Optimized for AWS Lambda and Google Cloud Run
- **Error Handling**: Comprehensive error taxonomy with provider context
- **Extensible**: Plugin architecture for custom providers
## Supported Providers

| Provider | Models | Status |
|---|---|---|
| OpenAI | GPT-5, GPT-4o, GPT-4, GPT-3.5-turbo, o1, o2 | ✅ |
| Anthropic | Claude Opus 4.1, Claude 3.5 Sonnet, Claude 3 Haiku | ✅ |
| Google Gemini | Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash Lite | ✅ |
| xAI | Grok 4, Grok 4 Heavy, Grok Beta | ✅ |
| OpenRouter | Llama 4 Maverick, Llama 3.3 70B, 100+ models via proxy | ✅ |
## Security & Development Requirements

### GPG Signing Required

**ALL commits to this repository MUST be GPG-signed.** This is automatically enforced by a pre-commit hook.

#### Why GPG Signing?

- **Authentication**: Every commit is cryptographically verified
- **Integrity**: Commits cannot be tampered with after signing
- **Non-repudiation**: Contributors cannot deny authorship of signed commits
- **Supply Chain Security**: Protection against commit-spoofing attacks
### Quick Setup for Contributors

**New to the project?**

```bash
# Automated setup - installs hook and guides through GPG configuration
./scripts/setup-gpg-hook.sh
```

**Already have GPG configured?**

```bash
# Enable GPG signing for this repository
git config commit.gpgsign true
git config user.signingkey YOUR_KEY_ID
```
### Important Notes

- Unsigned commits will be automatically rejected
- The pre-commit hook validates your GPG setup before every commit
- You must add your GPG public key to your GitHub account
- The hook cannot be bypassed with `--no-verify`

### Need Help?

- **Full Setup Guide**: GPG Signing Documentation
- **Troubleshooting**: Run `./scripts/setup-gpg-hook.sh` for diagnostics
- **Quick Test**: Try making a commit - the hook will guide you if anything's wrong
## Installation

### Quick Install

```bash
# Using pip
pip install nous-llm

# Using uv (recommended)
uv add nous-llm
```

### Installation Options

```bash
# Install with specific provider support
pip install nous-llm[openai]     # OpenAI only
pip install nous-llm[anthropic]  # Anthropic only
pip install nous-llm[all]        # All providers

# Development installation
pip install nous-llm[dev]        # Includes testing tools
```
### Environment Setup

Set your API keys as environment variables:

```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="AIza..."
export XAI_API_KEY="xai-..."
export OPENROUTER_API_KEY="sk-or-..."
```

Or create a `.env` file:

```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=AIza...
XAI_API_KEY=xai-...
OPENROUTER_API_KEY=sk-or-...
```
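If you would rather not add a dependency such as python-dotenv, a minimal `.env` loader can be sketched in a few lines. The parsing rules here (skip comments and blank lines, never override variables already set in the real environment) are an assumption for illustration, not part of nous-llm itself:

```python
import os


def load_dotenv_minimal(path: str = ".env") -> dict[str, str]:
    """Parse KEY=VALUE lines from a .env file without overriding
    variables already set in the process environment."""
    loaded: dict[str, str] = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip blanks, comments, malformed lines
                key, _, value = line.partition("=")
                key, value = key.strip(), value.strip().strip('"')
                if key not in os.environ:  # real env vars take precedence
                    os.environ[key] = value
                    loaded[key] = value
    except FileNotFoundError:
        pass  # a missing .env file is simply ignored
    return loaded
```

Giving real environment variables precedence over the file keeps deployment overrides (e.g. Lambda environment configuration) working without editing the file.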
## Usage Examples

### 1. Basic Synchronous Usage

```python
from nous_llm import generate, ProviderConfig, Prompt

# Configure your provider
config = ProviderConfig(
    provider="openai",
    model="gpt-4o",
    api_key="your-api-key",  # or set OPENAI_API_KEY env var
)

# Create a prompt
prompt = Prompt(
    instructions="You are a helpful assistant.",
    input="What is the capital of France?",
)

# Generate a response
response = generate(config, prompt)
print(response.text)  # "Paris is the capital of France."
```
### 2. Asynchronous Usage

```python
import asyncio

from nous_llm import agenerate, ProviderConfig, Prompt


async def main():
    config = ProviderConfig(
        provider="anthropic",
        model="claude-3-5-sonnet-20241022",
    )
    prompt = Prompt(
        instructions="You are a creative writing assistant.",
        input="Write a haiku about coding.",
    )
    response = await agenerate(config, prompt)
    print(response.text)


asyncio.run(main())
```
### 3. Client-Based Approach (Recommended for Multiple Calls)

```python
from nous_llm import LLMClient, ProviderConfig, Prompt

# Create a reusable client
client = LLMClient(ProviderConfig(
    provider="gemini",
    model="gemini-1.5-pro",
))

# Generate multiple responses efficiently
prompts = [
    Prompt(instructions="You are helpful.", input="What is AI?"),
    Prompt(instructions="You are creative.", input="Write a poem."),
]

for prompt in prompts:
    response = client.generate(prompt)
    print(f"{response.provider}: {response.text}")
```
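When the prompts are independent, the async interface pairs naturally with `asyncio.gather` to issue them concurrently instead of one at a time. The sketch below uses a stand-in coroutine in place of a real async LLM call so it runs without API keys; only the fan-out pattern is the point:

```python
import asyncio


async def fake_agenerate(prompt: str) -> str:
    """Stand-in for an async LLM call; sleeps briefly to mimic
    network latency. A real version would call the provider."""
    await asyncio.sleep(0.01)
    return f"response to: {prompt}"


async def generate_all(prompts: list[str]) -> list[str]:
    # Fan out all requests concurrently; gather preserves input order.
    return await asyncio.gather(*(fake_agenerate(p) for p in prompts))


results = asyncio.run(generate_all(["What is AI?", "Write a poem."]))
print(results)
```

Because `gather` preserves argument order, each result lines up with its prompt even though the calls complete in any order.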
## Advanced Features

### 4. Provider-Specific Parameters

```python
from nous_llm import generate, ProviderConfig, Prompt, GenParams

# OpenAI with reasoning mode
config = ProviderConfig(provider="openai", model="o1-preview")
params = GenParams(
    max_tokens=1000,
    temperature=0.7,
    extra={"reasoning": True},  # OpenAI-specific
)

# Anthropic with thinking tokens
config = ProviderConfig(provider="anthropic", model="claude-3-5-sonnet-20241022")
params = GenParams(
    extra={"thinking": True},  # Anthropic-specific
)

response = generate(config, prompt, params)
```
### 5. Custom Base URLs & Proxies

```python
# Use OpenRouter as a proxy for OpenAI models
config = ProviderConfig(
    provider="openrouter",
    model="openai/gpt-4o",
    base_url="https://openrouter.ai/api/v1",
    api_key="your-openrouter-key",
)
```
### 6. Error Handling

```python
from nous_llm import generate, AuthError, RateLimitError, ProviderError

try:
    response = generate(config, prompt)
except AuthError as e:
    print(f"Authentication failed: {e}")
except RateLimitError as e:
    print(f"Rate limit exceeded: {e}")
except ProviderError as e:
    print(f"Provider error: {e}")
```
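Rate-limit errors are usually transient, so a thin retry wrapper with exponential backoff is a common pattern around `generate`. The sketch below defines its own `RateLimitError` so it is self-contained; in real use you would catch the exception exported by nous_llm instead:

```python
import time


class RateLimitError(Exception):
    """Stand-in for the library's RateLimitError, so the sketch runs alone."""


def with_retries(fn, max_attempts: int = 3, base_delay: float = 0.01):
    """Call fn(), retrying on RateLimitError with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...


# Example: a flaky call that succeeds on its third attempt.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("slow down")
    return "ok"


print(with_retries(flaky))  # "ok" after two retries
```

In production you would typically use a small `base_delay` measured in seconds and add jitter so concurrent clients do not retry in lockstep.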
## Production Integration

### FastAPI Web Service

```python
from fastapi import FastAPI, HTTPException

from nous_llm import agenerate, ProviderConfig, Prompt, AuthError

app = FastAPI(title="Nous LLM API")


@app.post("/generate")
async def generate_text(request: dict):
    try:
        config = ProviderConfig(**request["config"])
        prompt = Prompt(**request["prompt"])
        response = await agenerate(config, prompt)
        return {
            "text": response.text,
            "usage": response.usage,
            "provider": response.provider,
        }
    except AuthError as e:
        raise HTTPException(status_code=401, detail=str(e))
```
### AWS Lambda Function

```python
import json

from nous_llm import LLMClient, ProviderConfig, Prompt

# Global client for connection reuse across invocations
client = LLMClient(ProviderConfig(
    provider="openai",
    model="gpt-4o-mini",
))


def lambda_handler(event, context):
    try:
        prompt = Prompt(
            instructions=event["instructions"],
            input=event["input"],
        )
        response = client.generate(prompt)
        return {
            "statusCode": 200,
            "body": json.dumps({
                "text": response.text,
                "usage": response.usage.model_dump() if response.usage else None,
            }),
        }
    except Exception as e:
        return {
            "statusCode": 500,
            "body": json.dumps({"error": str(e)}),
        }
```
## Development

### Project Setup

```bash
# Clone the repository
git clone https://github.com/amod-ml/nous-llm.git
cd nous-llm

# Install with development dependencies
uv sync --group dev

# Install pre-commit hooks (includes GPG validation)
./scripts/setup-gpg-hook.sh
```

### Testing & Quality

```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=nous_llm

# Format and lint code
uv run ruff format
uv run ruff check

# Type checking
uv run mypy src/nous_llm
```
### Adding a New Provider

1. Create an adapter in `src/nous_llm/adapters/`
2. Implement the `AdapterProtocol`
3. Register it in `src/nous_llm/core/adapters.py`
4. Add model patterns to `src/nous_llm/core/registry.py`
5. Add comprehensive tests in `tests/`
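The adapter mechanism above presumably relies on structural typing. As a rough illustration only, the snippet below shows how a `typing.Protocol` named `AdapterProtocol` could work; the method name and signature here are hypothetical, and the real protocol lives in `src/nous_llm/core`:

```python
from typing import Protocol


class AdapterProtocol(Protocol):
    """Hypothetical shape of the adapter interface; the real
    signature in nous_llm may differ."""

    def generate(self, instructions: str, input: str) -> str: ...


class EchoAdapter:
    """Toy adapter that satisfies the protocol by echoing its input."""

    def generate(self, instructions: str, input: str) -> str:
        return f"[{instructions}] {input}"


def run(adapter: AdapterProtocol, instructions: str, input: str) -> str:
    # Structural typing: any object with a matching generate() method
    # is accepted, with no explicit inheritance required.
    return adapter.generate(instructions, input)


print(run(EchoAdapter(), "be brief", "hello"))
```

Because `Protocol` uses structural rather than nominal typing, a new provider adapter never needs to import or subclass anything from the core package; matching the method shape is enough for the type checker.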
## Examples & Resources

### Complete Examples

- `examples/basic_usage.py` - Core functionality demos
- `examples/fastapi_service.py` - REST API service
- `examples/lambda_example.py` - AWS Lambda function

### Documentation & Support

- Full Documentation
- Issue Tracker
- Discussions
## Contributing

We welcome contributions!

### Requirements

- Python 3.12+
- All commits must be GPG-signed
- Code must pass all tests and linting
- Follow established patterns and conventions

## License

This project is licensed under the Mozilla Public License 2.0 - see the LICENSE file for details.

**GPG signing ensures the authenticity and integrity of all code contributions.**