

This project has been archived by its maintainers. No new releases are expected.

Project description

llm-factory

A flexible Python factory for working with multiple Large Language Model (LLM) providers (OpenAI, Anthropic, Gemini, Llama) using a unified interface, with robust configuration and extensibility.


Features

  • ✅ Unified interface for multiple LLM providers (OpenAI, Anthropic, Gemini, Llama)
  • ✅ Easy provider switching via configuration
  • ✅ Pydantic-based response validation
  • ✅ Environment variable-based secure configuration
  • ✅ Extensible for new providers
  • ✅ Supports model, temperature, max tokens, and retries per provider

Installation

pip install python-llm-factory

Configuration

The package uses environment variables for authentication and configuration. You can set these in a .env file or your environment:

# Required environment variables for each provider
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
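
If you keep these variables in a `.env` file, they must be loaded into the process environment before the package reads them; the python-dotenv package does this with `load_dotenv()`. As an illustration of what that amounts to, here is a minimal stdlib-only sketch (the `load_env_file` helper is hypothetical, not part of this package):

```python
import os
import tempfile


def load_env_file(path: str) -> None:
    """Parse simple KEY=VALUE lines into os.environ; blanks and # comments are skipped."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Never overwrite variables already set in the real environment.
            os.environ.setdefault(key.strip(), value.strip())


# Demo with a throwaway file so the snippet runs anywhere.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("# provider keys\nOPENAI_API_KEY=sk-demo\n")
    demo_path = fh.name

load_env_file(demo_path)
print("OPENAI_API_KEY" in os.environ)  # True
```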

Examples

Basic Usage: Creating a Completion

from pydantic import BaseModel, Field

from python_llm_factory import LLMFactory, Settings


class CompletionModel(BaseModel):
    """Schema the completion is parsed and validated against."""

    response: str = Field(description="Your response to the user.")
    reasoning: str = Field(description="Explain your reasoning for the response.")


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "If it takes 2 hours to dry 1 shirt out in the sun, how long will it take to dry 5 shirts?"},
]

# Pick the provider and model via the settings hierarchy.
llm = LLMFactory(
    settings=Settings().gemini.gemini_2_5_flash,
)

# The provider's raw output is validated into a CompletionModel instance.
completion = llm.completions_create(
    response_model=CompletionModel,
    messages=messages,
)
print(f"Response: {completion.response}\n")
print(f"Reasoning: {completion.reasoning}")
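
Because `response_model` is an ordinary Pydantic model, the validation applied to provider output can be exercised on its own. A sketch (`CompletionModel` repeated from above so the snippet is self-contained; Pydantic v2 API):

```python
from pydantic import BaseModel, Field, ValidationError


class CompletionModel(BaseModel):
    response: str = Field(description="Your response to the user.")
    reasoning: str = Field(description="Explain your reasoning for the response.")


# Well-formed output parses into a typed object.
ok = CompletionModel.model_validate(
    {"response": "About 2 hours.", "reasoning": "Shirts dry in parallel."}
)
print(ok.response)  # About 2 hours.

# Output missing a required field is rejected before it reaches your code.
try:
    CompletionModel.model_validate({"response": "About 2 hours."})
except ValidationError as exc:
    print(f"rejected with {exc.error_count()} validation error(s)")
```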

🤝 Contributing

If you have a helpful tool, pattern, or improvement to suggest:

  • Fork the repo
  • Create a new branch
  • Submit a pull request

I welcome additions that promote clean, productive, and maintainable development.

🙏 Thanks

Thanks for exploring this repository!
Happy coding!



Download files

Download the file for your platform.

Source Distribution

python_llm_factory-0.0.4.tar.gz (8.0 kB)


Built Distribution


python_llm_factory-0.0.4-py3-none-any.whl (10.3 kB)


File details

Details for the file python_llm_factory-0.0.4.tar.gz.

File metadata

  • Download URL: python_llm_factory-0.0.4.tar.gz
  • Upload date:
  • Size: 8.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for python_llm_factory-0.0.4.tar.gz
  • SHA256: 87ea5da46576f2f81930b44e6c22c5fd79b31b6824b67510a49bdb89415f178f
  • MD5: be50243db2575b4678e82aba93af6a33
  • BLAKE2b-256: c09ca33b4f7c161c1f4341488817cb60181d80297e44bbc5c40b4da3d1dd0548
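
After downloading, the published digests can be checked locally; a minimal sketch using Python's hashlib (the `sha256_of_file` helper is illustrative, not part of the package):

```python
import hashlib

# SHA256 digest published for the sdist of this release.
EXPECTED_SHA256 = "87ea5da46576f2f81930b44e6c22c5fd79b31b6824b67510a49bdb89415f178f"


def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Stream the file through SHA-256 so large archives never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Usage, after downloading the sdist:
# assert sha256_of_file("python_llm_factory-0.0.4.tar.gz") == EXPECTED_SHA256
```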


File details

Details for the file python_llm_factory-0.0.4-py3-none-any.whl.


File hashes

Hashes for python_llm_factory-0.0.4-py3-none-any.whl
  • SHA256: df84ec598bed420db1f785121129c3a93744b4fec5004de7bdb84d721ea6deb9
  • MD5: aab3e7b38cbeaef3b3447df4d5725377
  • BLAKE2b-256: a677359378a6f91752b53813cd6bcff93fc5992a9b0eb50bff361d7ed48d41dc

