
Nous LLM

Intelligent No Frills LLM Router - A unified Python interface for multiple Large Language Model providers

Python 3.12+ | License: MPL 2.0 | Code style: Ruff

Overview

Nous LLM provides a clean, unified interface for working with multiple Large Language Model providers including OpenAI, Anthropic Claude, Google Gemini, xAI Grok, and OpenRouter. Built with modern Python practices, full type safety, and production-ready features.

Key Features

  • 🔄 Unified Interface: Single API for multiple LLM providers
  • ⚡ Async Support: Both synchronous and asynchronous interfaces
  • 🛡️ Type Safety: Full typing with Pydantic v2 validation
  • 🔀 Provider Flexibility: Easy switching between providers and models
  • ☁️ Serverless Ready: Optimized for AWS Lambda and Google Cloud Run
  • 🚨 Error Handling: Comprehensive error taxonomy with provider context
  • 🔌 Extensible: Plugin architecture for custom providers

Supported Providers

Provider Models Status
OpenAI GPT-5, GPT-5-mini, GPT-5-nano, GPT-4o, GPT-4, GPT-3.5-turbo, o1, o3, o3-mini, o4-mini ✅
Anthropic Claude Opus 4.1, Claude 3.5 Sonnet, Claude 3 Haiku ✅
Google Gemini Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash Lite ✅
xAI Grok 4, Grok 4 Heavy, Grok Beta ✅
OpenRouter Llama 4 Maverick, Llama 3.3 70B, 100+ models via proxy ✅

🔒 Security & Development Requirements

GPG Signing Required

ALL commits to this repository MUST be GPG-signed. This is automatically enforced by a pre-commit hook.

Why GPG Signing?

  • ๐Ÿ” Authentication: Every commit is cryptographically verified
  • ๐Ÿ›ก๏ธ Integrity: Commits cannot be tampered with after signing
  • ๐Ÿ“ Non-repudiation: Contributors cannot deny authorship of signed commits
  • ๐Ÿ”— Supply Chain Security: Protection against commit spoofing attacks

Quick Setup for Contributors

New to the project?

# Automated setup - installs hook and guides through GPG configuration
./scripts/setup-gpg-hook.sh

Already have GPG configured?

# Enable GPG signing for this repository
git config commit.gpgsign true
git config user.signingkey YOUR_KEY_ID

Important Notes

  • โŒ Unsigned commits will be automatically rejected
  • โœ… The pre-commit hook validates your GPG setup before every commit
  • ๐Ÿ“‹ You must add your GPG public key to your GitHub account
  • ๐Ÿšซ The hook cannot be bypassed with --no-verify

Need Help?

  • 📖 Full Setup Guide: GPG Signing Documentation
  • 🔧 Troubleshooting: Run ./scripts/setup-gpg-hook.sh for diagnostics
  • 🧪 Quick Test: Try making a commit; the hook will guide you if anything's wrong


Installation

Quick Install

# Using pip
pip install nous-llm

# Using uv (recommended)
uv add nous-llm

Installation Options

# Install with specific provider support
pip install nous-llm[openai]      # OpenAI only
pip install nous-llm[anthropic]   # Anthropic only
pip install nous-llm[all]         # All providers

# Development installation
pip install nous-llm[dev]         # Includes testing tools

Environment Setup

Set your API keys as environment variables:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="AIza..."
export XAI_API_KEY="xai-..."
export OPENROUTER_API_KEY="sk-or-..."

Or create a .env file:

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=AIza...
XAI_API_KEY=xai-...
OPENROUTER_API_KEY=sk-or-...
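If you want to load the .env file from Python rather than exporting variables in your shell, a small stdlib-only loader is enough for the simple KEY=VALUE format shown above. The load_env_file helper below is illustrative and not part of nous-llm; the python-dotenv package offers the same idea with full quoting and multiline support:

```python
import os


def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines and '#' comments only.

    Illustrative sketch -- python-dotenv handles edge cases (quoting,
    export prefixes, multiline values) that this does not attempt.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Never overwrite variables already set in the real environment.
            os.environ.setdefault(key.strip(), value.strip())
```

Real environment variables take precedence over .env entries here, which matches the common convention for deployment overrides.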

Usage Examples

1. Basic Synchronous Usage

from nous_llm import generate, ProviderConfig, Prompt

# Configure your provider
config = ProviderConfig(
    provider="openai",
    model="gpt-4o",
    api_key="your-api-key"  # or set OPENAI_API_KEY env var
)

# Create a prompt
prompt = Prompt(
    instructions="You are a helpful assistant.",
    input="What is the capital of France?"
)

# Generate response
response = generate(config, prompt)
print(response.text)  # "Paris is the capital of France."

2. Asynchronous Usage

import asyncio
from nous_llm import agenerate, ProviderConfig, Prompt

async def main():
    config = ProviderConfig(
        provider="anthropic",
        model="claude-3-5-sonnet-20241022"
    )
    
    prompt = Prompt(
        instructions="You are a creative writing assistant.",
        input="Write a haiku about coding."
    )
    
    response = await agenerate(config, prompt)
    print(response.text)

asyncio.run(main())

3. Client-Based Approach (Recommended for Multiple Calls)

from nous_llm import LLMClient, ProviderConfig, Prompt

# Create a reusable client
client = LLMClient(ProviderConfig(
    provider="gemini",
    model="gemini-2.5-pro"
))

# Generate multiple responses efficiently
prompts = [
    Prompt(instructions="You are helpful.", input="What is AI?"),
    Prompt(instructions="You are creative.", input="Write a poem."),
]

for prompt in prompts:
    response = client.generate(prompt)
    print(f"{response.provider}: {response.text}")

Advanced Features

4. Provider-Specific Parameters

from nous_llm import generate, ProviderConfig, Prompt, GenParams

# OpenAI GPT-5 with reasoning mode
config = ProviderConfig(provider="openai", model="gpt-5")
params = GenParams(
    max_tokens=1000,
    temperature=0.7,
    extra={"reasoning": True}  # OpenAI-specific
)

# OpenAI O-series reasoning model
config = ProviderConfig(provider="openai", model="o3-mini")
params = GenParams(
    max_tokens=1000,
    temperature=0.7,
)

# Anthropic with thinking tokens
config = ProviderConfig(provider="anthropic", model="claude-3-5-sonnet-20241022")
params = GenParams(
    extra={"thinking": True}  # Anthropic-specific
)

response = generate(config, prompt, params)

Note for Developers: OpenAI has changed parameter naming for newer models. GPT-5 series and O-series models (o1, o3, o4-mini) use max_completion_tokens instead of max_tokens. The library automatically handles this transition with intelligent parameter mapping and fallback mechanisms, so you can continue using the standard max_tokens parameter in GenParams - it will be automatically converted to the correct parameter for each model.
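The mapping described in the note above can be pictured with a short sketch. The map_token_param helper and the model-name prefixes below are illustrative guesses at how such a dispatch could work, not the library's actual internals:

```python
def map_token_param(model: str, max_tokens: int) -> dict:
    """Sketch of per-model parameter mapping (not nous-llm's real code).

    GPT-5 series and o-series reasoning models expect
    max_completion_tokens; older chat models expect max_tokens.
    """
    # Prefixes are an assumption for illustration; a real registry
    # would match model patterns more carefully.
    reasoning_prefixes = ("gpt-5", "o1", "o3", "o4")
    if model.startswith(reasoning_prefixes):
        return {"max_completion_tokens": max_tokens}
    return {"max_tokens": max_tokens}
```

With this shape, callers keep passing max_tokens everywhere and only the adapter layer knows which wire parameter each model family wants.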

5. Custom Base URLs & Proxies

# Use OpenRouter as a proxy for OpenAI models
config = ProviderConfig(
    provider="openrouter",
    model="openai/gpt-4o",
    base_url="https://openrouter.ai/api/v1",
    api_key="your-openrouter-key"
)

6. Error Handling

from nous_llm import generate, AuthError, RateLimitError, ProviderError

try:
    response = generate(config, prompt)
except AuthError as e:
    print(f"Authentication failed: {e}")
except RateLimitError as e:
    print(f"Rate limit exceeded: {e}")
except ProviderError as e:
    print(f"Provider error: {e}")
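Rate-limit errors are usually transient, so production code often wraps the call in a retry with exponential backoff. The sketch below is self-contained for illustration: the RateLimitError class here is a local stand-in for nous_llm.RateLimitError, and generate_with_retry is a hypothetical helper, not part of the library:

```python
import time


class RateLimitError(Exception):
    """Local stand-in for nous_llm.RateLimitError so this sketch runs alone."""


def generate_with_retry(generate_fn, *, retries: int = 3, base_delay: float = 1.0):
    """Call generate_fn, retrying on rate limits with exponential backoff."""
    for attempt in range(retries):
        try:
            return generate_fn()
        except RateLimitError:
            if attempt == retries - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

In real code you would pass something like `lambda: generate(config, prompt)` and import the exception from nous_llm instead of defining it locally.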

Production Integration

FastAPI Web Service

from fastapi import FastAPI, HTTPException
from nous_llm import agenerate, ProviderConfig, Prompt, AuthError

app = FastAPI(title="Nous LLM API")

@app.post("/generate")
async def generate_text(request: dict):
    try:
        config = ProviderConfig(**request["config"])
        prompt = Prompt(**request["prompt"])
        
        response = await agenerate(config, prompt)
        return {
            "text": response.text, 
            "usage": response.usage,
            "provider": response.provider
        }
    except AuthError as e:
        raise HTTPException(status_code=401, detail=str(e))

AWS Lambda Function

import json
from nous_llm import LLMClient, ProviderConfig, Prompt

# Global client for connection reuse across invocations
client = LLMClient(ProviderConfig(
    provider="openai",
    model="gpt-4o-mini"
))

def lambda_handler(event, context):
    try:
        prompt = Prompt(
            instructions=event["instructions"],
            input=event["input"]
        )
        
        response = client.generate(prompt)
        
        return {
            "statusCode": 200,
            "body": json.dumps({
                "text": response.text,
                "usage": response.usage.model_dump() if response.usage else None
            })
        }
    except Exception as e:
        return {
            "statusCode": 500,
            "body": json.dumps({"error": str(e)})
        }

Development

Project Setup

# Clone the repository
git clone https://github.com/amod-ml/nous-llm.git
cd nous-llm

# Install with development dependencies
uv sync --group dev

# Install pre-commit hooks (includes GPG validation)
./scripts/setup-gpg-hook.sh

Testing & Quality

# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=nous_llm

# Format and lint code
uv run ruff format
uv run ruff check

# Type checking
uv run mypy src/nous_llm

Adding a New Provider

  1. Create adapter in src/nous_llm/adapters/
  2. Implement the AdapterProtocol
  3. Register in src/nous_llm/core/adapters.py
  4. Add model patterns to src/nous_llm/core/registry.py
  5. Add comprehensive tests in tests/
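A minimal adapter might look like the following. The generate signature shown here is an assumption for illustration (check the real AdapterProtocol definition in src/nous_llm/core/adapters.py before implementing), and EchoAdapter is purely a toy:

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class AdapterProtocol(Protocol):
    """Assumed shape of the adapter interface -- verify against the
    actual definition in src/nous_llm/core/adapters.py."""

    def generate(self, config: Any, prompt: Any, params: Any) -> Any: ...


class EchoAdapter:
    """Toy adapter that satisfies the protocol by echoing the prompt."""

    def generate(self, config: Any, prompt: Any, params: Any) -> str:
        return f"echo: {prompt}"


# Structural typing: EchoAdapter matches the protocol without inheriting it.
adapter: AdapterProtocol = EchoAdapter()
```

Because Protocol uses structural typing, your adapter only needs the right methods; registration in core/adapters.py and model patterns in core/registry.py then make it reachable from ProviderConfig.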

Examples & Resources

Complete Examples

  • ๐Ÿ“ examples/basic_usage.py - Core functionality demos
  • ๐Ÿ“ examples/fastapi_service.py - REST API service
  • ๐Ÿ“ examples/lambda_example.py - AWS Lambda function

Documentation & Support

Contributing

We welcome contributions!

Requirements

  • ✅ Python 3.12+
  • 🔐 All commits must be GPG-signed
  • 🧪 Code must pass all tests and linting
  • 📋 Follow established patterns and conventions

License

This project is licensed under the Mozilla Public License 2.0 - see the LICENSE file for details.


🔒 GPG signing ensures the authenticity and integrity of all code contributions.
