# LLMHandler

**Unified LLM Interface with Typed & Unstructured Responses**

LLMHandler is a Python package that provides a single, consistent interface for interacting with multiple large language model (LLM) providers, such as OpenAI, Anthropic, Gemini, DeepSeek, VertexAI, and OpenRouter. Whether you need validated, structured responses (using Pydantic models) or unstructured free-form text, LLMHandler makes it simple to integrate LLM capabilities into your projects.
## Table of Contents
- Overview
- Features
- Installation
- Configuration
- Quick Start
- API Reference
- Testing
- Development & Contribution
- License
- Contact
## Overview

LLMHandler unifies access to various LLM providers by letting you specify a model with a provider prefix (e.g. `openai:gpt-4o-mini`). When you supply a Pydantic model, the package automatically appends JSON schema instructions to the prompt, then validates and parses the response against that model. Alternatively, you can request unstructured free-form text. Additional features include batch processing and optional rate limiting.
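As an illustration of the model-identifier convention (not LLMHandler's actual internals), a provider-prefixed model string splits cleanly into its two parts:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a provider-prefixed identifier into (provider, model).

    Illustrative sketch only: LLMHandler's internal parsing may differ.
    """
    provider, sep, model = model_id.partition(":")
    if not sep or not provider or not model:
        raise ValueError(f"expected '<provider>:<model>', got {model_id!r}")
    return provider, model

print(split_model_id("openai:gpt-4o-mini"))  # ('openai', 'gpt-4o-mini')
```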
## Features

- **Multi-Provider Support:** Easily switch between providers (OpenAI, Anthropic, Gemini, DeepSeek, VertexAI, OpenRouter) using a simple model identifier.
- **Structured & Unstructured Responses:** Validate responses against Pydantic models (e.g. `SimpleResponse`, `PersonResponse`) or receive raw text.
- **Batch Processing:** Process multiple prompts in batch mode, with results written to JSONL files.
- **Rate Limiting:** Optionally control the number of requests per minute.
- **Easy Configuration:** Load API keys and other settings automatically from a `.env` file.
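To make the batch-output format concrete, here is a minimal stdlib-only sketch of writing results as JSONL (one JSON object per line). The record layout shown is an assumption for illustration; LLMHandler's actual JSONL schema may differ:

```python
import json
import tempfile
from pathlib import Path

def write_jsonl(records: list[dict], path: Path) -> None:
    """Write one JSON object per line (the JSONL convention)."""
    with path.open("w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

# Hypothetical record layout -- not LLMHandler's actual output schema.
results = [
    {"prompt": "Slogan for a coffee brand", "response": "Brewed for you."},
    {"prompt": "Fun fact about dolphins", "response": "They sleep with one eye open."},
]
out_path = Path(tempfile.mkdtemp()) / "batch_results.jsonl"
write_jsonl(results, out_path)
roundtrip = [json.loads(line) for line in out_path.read_text().splitlines()]
print(len(roundtrip))  # 2
```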
## Installation

### Requirements

- Python 3.13 or later

### Using PDM

Install dependencies with PDM:

```bash
pdm install
```

### Using pip (once published to PyPI)

```bash
pip install llmhandler
```
## Configuration

Create a `.env` file in your project's root to store your API keys:

```bash
# .env
OPENROUTER_API_KEY=your_openrouter_api_key
DEEPSEEK_API_KEY=your_deepseek_api_key
OPENAI_API_KEY=your_openai_api_key
GEMINI_API_KEY=your_gemini_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
```

LLMHandler automatically loads these values at runtime.
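Loading like this is typically done with a dotenv-style parser. As a self-contained illustration of the mechanism (not LLMHandler's actual code), a minimal `.env` reader looks like:

```python
import os

def load_env(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines, skipping blanks and # comments.

    Minimal sketch of dotenv-style loading; real loaders also handle
    quoting, `export` prefixes, and variable interpolation.
    """
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

env = load_env("# .env\nOPENAI_API_KEY=sk-example\n")
os.environ.update(env)  # make keys visible to the process
```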
## Quick Start

Below is a simple example demonstrating both structured and unstructured usage:

```python
import asyncio

from llmhandler.api_handler import UnifiedLLMHandler
from llmhandler._internal_models import SimpleResponse, PersonResponse


async def main():
    # Initialize the handler with your API key (or let it load from .env)
    handler = UnifiedLLMHandler(openai_api_key="your_openai_api_key")

    # Structured response (typed output)
    structured = await handler.process(
        prompts="Generate a catchy marketing slogan for a coffee brand.",
        model="openai:gpt-4o-mini",
        response_type=SimpleResponse,
    )
    print("Structured result:", structured)

    # Unstructured response (raw text output)
    unstructured = await handler.process(
        prompts="Tell me a fun fact about dolphins.",
        model="openai:gpt-4o-mini",
    )
    print("Unstructured result:", unstructured)

    # Multiple prompts with structured responses
    multiple = await handler.process(
        prompts=[
            "Describe a 28-year-old engineer named Alice with 3 key skills.",
            "Describe a 45-year-old pastry chef named Bob with 2 key skills.",
        ],
        model="openai:gpt-4o-mini",
        response_type=PersonResponse,
    )
    print("Multiple structured results:", multiple)


asyncio.run(main())
```
For additional examples, see `examples/inference_test.py`.
## API Reference

### `UnifiedLLMHandler`

The primary class for processing prompts.

#### Constructor Parameters

- **API keys:** `openai_api_key`, `openrouter_api_key`, `deepseek_api_key`, `anthropic_api_key`, `gemini_api_key`. These can be provided as arguments or loaded automatically from the `.env` file.
- **`requests_per_minute`** (optional): Set a rate limit for outgoing requests.
- **`batch_output_dir`** (optional): Directory for saving batch output files (default: `"batch_output"`).
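A `requests_per_minute` limit implies spacing requests out over time. Here is a minimal asyncio sketch of that idea (an illustration of interval-based throttling, not LLMHandler's implementation):

```python
import asyncio
import time

class RateLimiter:
    """Enforce a minimum interval between requests (requests_per_minute)."""

    def __init__(self, requests_per_minute: int):
        self._interval = 60.0 / requests_per_minute
        self._lock = asyncio.Lock()
        self._next_slot = 0.0  # monotonic time of the next free slot

    async def acquire(self) -> None:
        async with self._lock:
            now = time.monotonic()
            slot = max(now, self._next_slot)
            self._next_slot = slot + self._interval
        # Sleep outside the lock so other waiters can queue their slots.
        await asyncio.sleep(max(0.0, slot - now))

async def demo() -> float:
    limiter = RateLimiter(requests_per_minute=6000)  # ~10 ms between calls
    start = time.monotonic()
    for _ in range(3):
        await limiter.acquire()
    return time.monotonic() - start

elapsed = asyncio.run(demo())
print(f"3 calls took {elapsed:.3f}s")
```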
### `process()`

**Parameters:**

- `prompts`: A single prompt (string) or a list of prompt strings.
- `model`: Model identifier with a provider prefix (e.g. `"openai:gpt-4o-mini"`).
- `response_type` (optional): A Pydantic model class for structured responses (e.g. `SimpleResponse`, `PersonResponse`). Omit or set to `None` for unstructured text.
- `system_message` (optional): Additional instructions for the system prompt.
- `batch_mode` (optional): Set to `True` to process multiple prompts in batch mode (supported only for structured responses using OpenAI models).
- `retries` (optional): Number of retry attempts for failed requests.

**Returns:**

A `UnifiedResponse` object (when using a typed response), or a raw string (or list of strings) for unstructured output.
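To make the `response_type` contract concrete, here is a stdlib-only sketch of how a typed result might be parsed from a model's JSON output. Dataclasses stand in for Pydantic models, and all names here are illustrative, not LLMHandler's actual internals:

```python
import json
from dataclasses import dataclass, fields

@dataclass
class SimpleResponse:  # stand-in for a Pydantic model
    content: str

def parse_response(raw: str, response_type=None):
    """Return raw text as-is, or validate JSON against a dataclass schema."""
    if response_type is None:
        return raw
    data = json.loads(raw)
    allowed = {f.name for f in fields(response_type)}
    unexpected = set(data) - allowed
    if unexpected:
        raise ValueError(f"unexpected fields: {unexpected}")
    return response_type(**data)

print(parse_response("hello"))  # 'hello'
print(parse_response('{"content": "Brewed for you."}', SimpleResponse))
```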
## Testing

A comprehensive test suite is included. To run it:

```bash
pytest
```
## Development & Contribution

Contributions are welcome! To set up the development environment:

1. **Clone the repository:**

   ```bash
   git clone https://github.com/yourusername/LLMHandler.git
   cd LLMHandler
   ```

2. **Install dependencies:**

   ```bash
   pdm install
   ```

3. **Run the tests:**

   ```bash
   pytest
   ```

4. **Submit a pull request:** Make improvements or bug fixes and submit a PR.
## License

This project is licensed under the MIT License.

## Contact

For questions, feedback, or contributions, please reach out to:

Bryan Nsoh
Email: bryan.anye.5@gmail.com

Happy coding with LLMHandler!
## File details

### `llm_handler_validator-0.1.0.tar.gz` (source distribution)

- Size: 12.8 kB
- Uploaded via: pdm/2.22.3 CPython/3.13.1 Windows/11
- Uploaded using Trusted Publishing: No

| Algorithm | Hash digest |
|---|---|
| SHA256 | `62053f39408dba9bf590d4a640c3536eb4750eef025b15b663ace7ecf29b1a14` |
| MD5 | `273cfa1ace9ccc2a54c05b52baadeea4` |
| BLAKE2b-256 | `23e42031aaead556aeeaf48e6fb83a1c5b4ae6c07acacefb7565c56a31574581` |
### `llm_handler_validator-0.1.0-py3-none-any.whl` (built distribution, Python 3)

- Size: 9.1 kB
- Uploaded via: pdm/2.22.3 CPython/3.13.1 Windows/11
- Uploaded using Trusted Publishing: No

| Algorithm | Hash digest |
|---|---|
| SHA256 | `2015720602a63e36de8bbce120338b87415fba3ca94dab987410eb3236935565` |
| MD5 | `ed41034045302bde380ba9316202d68f` |
| BLAKE2b-256 | `df24160ae8f5102905c4843716e2590612e36a2aa7f65dd785881d97da286658` |