
SmartLLM

A unified async Python wrapper for multiple LLM providers with a consistent interface.


Features

  • Unified Interface - Single API for multiple LLM providers (OpenAI, AWS Bedrock)
  • Async/Await - Built on asyncio for high-performance concurrent requests
  • Smart Caching - Two-level cache (local + DynamoDB) to reduce costs and latency
  • Auto Retry - Exponential backoff retry logic for transient failures
  • Structured Output - Native Pydantic model support for type-safe responses
  • Streaming - Real-time streaming responses for better UX
  • Rate Limiting - Built-in concurrency control per model
  • Decorator Logging - Automatic function logging via Logorator
  • OpenAI Responses API - Full support for OpenAI's primary Responses API, including reasoning models

Installation

pip install smartllm

Optional Dependencies

Install only the providers you need:

# For OpenAI
pip install smartllm[openai]

# For AWS Bedrock
pip install smartllm[bedrock]

# For all providers
pip install smartllm[all]

DynamoDB Caching (optional)

To enable shared two-level caching across machines:

async with LLMClient(provider="openai", dynamo_table_name="my-llm-cache") as client:
    ...

Requires AWS credentials with DynamoDB access. The table is auto-created if it doesn't exist. Local file cache is always used as the first layer.
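
The changelog also lists a cache_ttl_days setting (default: 365 days) for expiring DynamoDB entries. A minimal sketch, assuming cache_ttl_days is passed to LLMClient alongside dynamo_table_name:

from smartllm import LLMClient, TextRequest

# cache_ttl_days is taken from the changelog; treating it as an LLMClient
# keyword argument here is an assumption, not a confirmed signature.
async with LLMClient(
    provider="openai",
    dynamo_table_name="my-llm-cache",
    cache_ttl_days=30,  # expire DynamoDB entries after 30 days (default: 365)
) as client:
    # temperature=0 makes the response cacheable (see Caching below)
    response = await client.generate_text(
        TextRequest(prompt="What is the capital of France?", temperature=0)
    )
    print(response.text)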

Quick Start

Basic Usage

import asyncio
from smartllm import LLMClient, TextRequest

async def main():
    # API key and default model are read from environment variables
    async with LLMClient(provider="openai") as client:
        response = await client.generate_text(
            TextRequest(prompt="What is the capital of France?")
        )
        print(response.text)

asyncio.run(main())

Multi-turn Conversations

from smartllm import LLMClient, MessageRequest, Message

async with LLMClient(provider="openai") as client:
    messages = [
        Message(role="user", content="My name is Alice."),
        Message(role="assistant", content="Nice to meet you, Alice!"),
        Message(role="user", content="What's my name?"),
    ]
    
    response = await client.send_message(
        MessageRequest(messages=messages)
    )
    print(response.text)  # "Your name is Alice."

Streaming Responses

from smartllm import LLMClient, TextRequest

async with LLMClient(provider="openai") as client:
    request = TextRequest(
        prompt="Write a short poem about Python.",
        stream=True
    )
    
    async for chunk in client.generate_text_stream(request):
        print(chunk.text, end="", flush=True)

Structured Output with Pydantic

from pydantic import BaseModel
from smartllm import LLMClient, TextRequest

class Person(BaseModel):
    name: str
    age: int
    occupation: str

async with LLMClient(provider="openai") as client:
    response = await client.generate_text(
        TextRequest(
            prompt="Generate a person profile for a software engineer named John, age 30.",
            response_format=Person
        )
    )
    
    person = response.structured_data
    print(f"{person.name} is a {person.age} year old {person.occupation}")

Configuration

Environment Variables

OpenAI:

export OPENAI_API_KEY="your-api-key"
export OPENAI_MODEL="gpt-4o-mini"  # Optional

AWS Bedrock:

export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="us-east-1"
export BEDROCK_MODEL="anthropic.claude-3-sonnet-20240229-v1:0"  # Optional

Programmatic Configuration

from smartllm import LLMClient, LLMConfig

config = LLMConfig(
    provider="openai",
    api_key="your-api-key",
    default_model="gpt-4o",
    temperature=0.7,
    max_tokens=2048,
    max_retries=3,
)

async with LLMClient(config) as client:
    # Use client...
    pass

Customizing Defaults

from smartllm import defaults

# Modify global defaults
defaults.DEFAULT_TEMPERATURE = 0.7
defaults.DEFAULT_MAX_TOKENS = 4096
defaults.DEFAULT_MAX_RETRIES = 5

OpenAI API Types

SmartLLM supports both OpenAI APIs via the api_type parameter:

  • "responses" (default) - OpenAI's primary Response API, recommended for all modern models
  • "chat_completions" - Legacy Chat Completions API, supported indefinitely
# Response API (default)
response = await client.generate_text(
    TextRequest(prompt="Hello", api_type="responses")
)

# Chat Completions API (legacy)
response = await client.generate_text(
    TextRequest(prompt="Hello", api_type="chat_completions")
)

Reasoning Models

For models that support reasoning (e.g. GPT-5.x), use reasoning_effort to control how much the model reasons before responding. Reasoning tokens are returned in response.metadata:

response = await client.generate_text(
    TextRequest(
        prompt="Solve: what is the 100th Fibonacci number?",
        reasoning_effort="high",  # "low", "medium", or "high"
    )
)

print(response.text)
print(f"Reasoning tokens used: {response.metadata.get('reasoning_tokens', 0)}")

Note: reasoning models do not support custom sampling temperature. Passing a temperature other than the default of 1 raises a ValueError.
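
A sketch of that failure mode, using the ValueError behavior described above (the prompt is arbitrary):

from smartllm import LLMClient, TextRequest

async with LLMClient(provider="openai") as client:
    try:
        await client.generate_text(
            TextRequest(
                prompt="Solve: what is the 100th Fibonacci number?",
                reasoning_effort="high",
                temperature=0.5,  # not supported by reasoning models
            )
        )
    except ValueError as e:
        print(f"Configuration error: {e}")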

Reasoning with Structured Output

from pydantic import BaseModel
from smartllm import LLMClient, TextRequest

class Solution(BaseModel):
    answer: float
    unit: str
    explanation: str

async with LLMClient(provider="openai") as client:
    response = await client.generate_text(
        TextRequest(
            prompt="A train leaves city A at 60mph toward city B (300 miles away). Another leaves B at 90mph. When do they meet?",
            response_format=Solution,
            reasoning_effort="medium",
        )
    )

    solution = response.structured_data
    print(f"{solution.answer} {solution.unit}: {solution.explanation}")
    print(f"Reasoning tokens: {response.metadata.get('reasoning_tokens', 0)}")

Advanced Features

Caching

Responses are automatically cached when temperature=0:

# First call - hits API
response1 = await client.generate_text(
    TextRequest(prompt="What is 2+2?", temperature=0)
)

# Second call - uses cache (instant, free)
response2 = await client.generate_text(
    TextRequest(prompt="What is 2+2?", temperature=0)
)

# Clear cache for specific request
response3 = await client.generate_text(
    TextRequest(prompt="What is 2+2?", temperature=0, clear_cache=True)
)
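
To skip the cache for a single request without evicting the stored entry, disable use_cache (listed in the Request Parameters table below); a sketch of the contrast with clear_cache:

# Bypass the cache for this request only (the stored entry is left intact)
response4 = await client.generate_text(
    TextRequest(prompt="What is 2+2?", temperature=0, use_cache=False)
)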

Concurrent Requests

import asyncio
from smartllm import LLMClient, TextRequest

async with LLMClient(provider="openai") as client:
    prompts = ["Question 1", "Question 2", "Question 3"]
    
    tasks = [
        client.generate_text(TextRequest(prompt=p))
        for p in prompts
    ]
    
    responses = await asyncio.gather(*tasks)

Rate Limiting

# Limit concurrent requests
client = LLMClient(provider="openai", max_concurrent=5)
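
A sketch combining max_concurrent with asyncio.gather: all tasks are created up front, and the client's built-in limiter keeps at most five requests in flight:

import asyncio
from smartllm import LLMClient, TextRequest

async with LLMClient(provider="openai", max_concurrent=5) as client:
    tasks = [
        client.generate_text(TextRequest(prompt=f"Question {i}"))
        for i in range(20)
    ]
    # 20 tasks queued; at most 5 hit the API at any one time
    responses = await asyncio.gather(*tasks)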

Provider-Specific Clients

For advanced use cases, access provider-specific clients:

from smartllm.openai import OpenAILLMClient, OpenAIConfig
from smartllm.bedrock import BedrockLLMClient, BedrockConfig

# OpenAI-specific features
openai_config = OpenAIConfig(api_key="...", organization="...")
async with OpenAILLMClient(openai_config) as client:
    models = await client.list_available_models()

# Bedrock-specific features
bedrock_config = BedrockConfig(aws_region="us-east-1")
async with BedrockLLMClient(bedrock_config) as client:
    models = await client.list_available_model_ids()

Supported Providers

  • OpenAI - GPT models via OpenAI API
  • AWS Bedrock - Claude, Llama, Mistral, and Titan models

API Reference

Core Classes

  • LLMClient - Unified client for all providers
  • LLMConfig - Unified configuration
  • TextRequest - Single prompt request
  • MessageRequest - Multi-turn conversation request
  • TextResponse - LLM response with metadata
  • Message - Conversation message
  • StreamChunk - Streaming response chunk
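
The response fields used throughout this README live on TextResponse; a quick recap, limited to the fields the examples above demonstrate:

response = await client.generate_text(TextRequest(prompt="Hello"))

print(response.text)             # generated text
print(response.structured_data)  # parsed Pydantic model when response_format is set
print(response.metadata)         # provider metadata, e.g. reasoning_tokens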

Request Parameters

Parameter         Type       Description                                           Default
----------------  ---------  ----------------------------------------------------  --------------
prompt            str        Input text prompt                                     Required
model             str        Model ID to use                                       Config default
temperature       float      Sampling temperature (0-1)                            0
max_tokens        int        Maximum output tokens                                 2048
top_p             float      Nucleus sampling                                      1.0
system_prompt     str        System context                                        None
stream            bool       Enable streaming                                      False
response_format   BaseModel  Pydantic model for structured output                  None
use_cache         bool       Enable caching                                        True
clear_cache       bool       Clear cache before request                            False
api_type          str        OpenAI API type ("responses" or "chat_completions")   "responses"
reasoning_effort  str        Reasoning effort ("low", "medium", "high")            None
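
Putting several of these together in one request (all parameters taken from the table above):

request = TextRequest(
    prompt="Summarize the plot of Hamlet in two sentences.",
    model="gpt-4o-mini",
    system_prompt="You are a concise literary assistant.",
    temperature=0,   # deterministic, and therefore cacheable
    max_tokens=512,
    top_p=1.0,
    use_cache=True,
)
response = await client.generate_text(request)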

Error Handling

from smartllm import LLMClient, TextRequest

async with LLMClient(provider="openai") as client:
    try:
        response = await client.generate_text(
            TextRequest(prompt="Hello")
        )
    except ValueError as e:
        print(f"Configuration error: {e}")
    except Exception as e:
        print(f"API error: {e}")

Development

Setup

git clone https://github.com/Redundando/smartllm.git
cd smartllm
pip install -e .[all,dev]

Running Tests

# Unit tests
pytest tests/unit/ -v

# Integration tests (select model interactively)
pytest tests/integration/

# Integration tests with a specific model
pytest tests/integration/ --model gpt-4o

# Integration tests with a reasoning model
pytest tests/integration/ --model gpt-5.2

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Changelog

Version 0.1.5

  • Replaced custom logging with Logorator decorator-based logging
  • Added two-level cache: local JSON files + optional DynamoDB via Dynamorator
  • DynamoDB cache configurable via dynamo_table_name and cache_ttl_days (default: 365 days)
  • Cache write-back: DynamoDB hits are written to local cache automatically
  • Prompt stored in cache metadata
  • Recursive Pydantic schema cleaning for OpenAI structured output compatibility
  • logorator and dynamorator added as core dependencies in pyproject.toml

Version 0.1.4

  • Fixed logger name from aws_llm_wrapper to smartllm
  • Removed redundant response_format=json_object when using tool-based structured output
  • Cache read failures now log a warning instead of silently returning None
  • Added reasoning_effort warning when used with Bedrock models
  • Test suite now supports model selection via --model CLI option or interactive prompt
  • Integration tests support both OpenAI and AWS Bedrock models
  • Bedrock streaming chunk parsing fixed for Claude models

Version 0.1.0

  • Initial public release
  • Unified interface for multiple providers
  • OpenAI support (GPT models)
  • AWS Bedrock support (Claude, Llama, Mistral, Titan)
  • Async/await architecture
  • Smart caching with temperature=0
  • Auto retry with exponential backoff
  • Structured output with Pydantic models
  • Streaming responses
  • Rate limiting and concurrency control
  • OpenAI Responses API support (primary interface)
  • Reasoning model support with reasoning_effort parameter
  • Comprehensive test suite
