A flexible Python factory for working with multiple Large Language Model (LLM) providers (OpenAI, Anthropic, Gemini, Llama) using a unified interface, with robust configuration and extensibility.

This project has been archived. Its maintainers expect no new releases.

Project description

llm-factory


Features

  • ✅ Unified interface for multiple LLM providers (OpenAI, Anthropic, Gemini, Llama)
  • ✅ Easy provider switching via configuration
  • ✅ Pydantic-based response validation
  • ✅ Environment variable-based secure configuration
  • ✅ Extensible for new providers
  • ✅ Supports model, temperature, max tokens, and retries per provider
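The unified interface and "extensible for new providers" claims boil down to a provider registry behind a single entry point. A minimal sketch of that pattern (names such as `_REGISTRY` and `make_gemini_client` are illustrative, not the package's actual internals):

```python
from enum import Enum
from typing import Callable, Dict


class LLMProvider(str, Enum):
    OPENAI = "openai"
    ANTHROPIC = "anthropic"
    GEMINI = "gemini"
    LLAMA = "llama"


# Hypothetical registry: each provider maps to a client constructor.
_REGISTRY: Dict[LLMProvider, Callable[[], str]] = {}


def register(provider: LLMProvider):
    """Decorator that registers a client factory, so new providers plug in
    without touching the dispatch logic."""
    def wrap(fn):
        _REGISTRY[provider] = fn
        return fn
    return wrap


@register(LLMProvider.GEMINI)
def make_gemini_client() -> str:
    # A real implementation would build and return an SDK client here.
    return "gemini-client"


def create_client(provider: LLMProvider) -> str:
    """Look up the registered factory for a provider, or fail clearly."""
    try:
        return _REGISTRY[provider]()
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider}") from None


print(create_client(LLMProvider.GEMINI))  # -> gemini-client
```

Adding a provider then means registering one more constructor, which is what makes configuration-driven provider switching a one-line change.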

Installation

pip install python-llm-factory

From a source checkout, you can instead install the dependencies with pip install -r requirements.txt.

Configuration

The package uses environment variables for authentication and configuration. You can set these in a .env file or your environment:

# Required environment variables for each provider
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
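The factory reads these variables at runtime, so a missing key surfaces as a provider error. One way to fail fast instead, as a standard-library-only sketch (`require_key` is a hypothetical helper, not part of this package):

```python
import os


def require_key(name: str) -> str:
    """Return an API key from the environment, raising immediately if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value


# For illustration only -- in practice the key comes from your .env or shell.
os.environ["GEMINI_API_KEY"] = "example-key"
print(require_key("GEMINI_API_KEY"))  # -> example-key
```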

Examples

Basic Usage: Creating a Completion

from pydantic import BaseModel, Field
from python_llm_factory.llm_factory import LLMProvider, LLMFactory


class CompletionModel(BaseModel):
    response: str = Field(description="Your response to the user.")
    reasoning: str = Field(description="Explain your reasoning for the response.")


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user",
     "content": "If it takes 2 hours to dry 1 shirt out in the sun, how long will it take to dry 5 shirts?"},
]

llm = LLMFactory(provider=LLMProvider.GEMINI)
completion = llm.create_completion(
    response_model=CompletionModel,
    messages=messages,
)
print(f"Response: {completion.response}\n")
print(f"Reasoning: {completion.reasoning}")
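The feature list also mentions per-provider retries. Conceptually that amounts to wrapping each API call in a backoff loop; a stdlib-only sketch (the package's actual retry configuration may differ):

```python
import time


def with_retries(fn, attempts: int = 3, base_delay: float = 0.01):
    """Call fn, retrying on any exception with exponential backoff.

    Re-raises the last exception once the attempt budget is exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))


# A deliberately flaky call that fails twice, then succeeds.
calls = {"n": 0}


def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"


print(with_retries(flaky))  # -> ok
```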

🤝 Contributing

If you have a helpful tool, pattern, or improvement to suggest:

  • Fork the repo
  • Create a new branch
  • Submit a pull request

I welcome additions that promote clean, productive, and maintainable development.

🙏 Thanks

Thanks for exploring this repository!
Happy coding!

Project details


Download files

Download the file for your platform.

Source Distribution

python_llm_factory-0.0.1.tar.gz (5.0 kB)


Built Distribution


python_llm_factory-0.0.1-py3-none-any.whl (5.6 kB)


File details

Details for the file python_llm_factory-0.0.1.tar.gz.

File metadata

  • Download URL: python_llm_factory-0.0.1.tar.gz
  • Upload date:
  • Size: 5.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for python_llm_factory-0.0.1.tar.gz:

  • SHA256: 0ead3d636611ae2d454e1b9edc71ce384f96f62bd3f270e56f3bf5b220ee3bca
  • MD5: 6154668af46117d4cf5c680b576f4c07
  • BLAKE2b-256: 9839fb45c2a9f23f49a902ffa258e3d1f332ac59b346b994c816a64984b49fd1


File details

Details for the file python_llm_factory-0.0.1-py3-none-any.whl.

File hashes

Hashes for python_llm_factory-0.0.1-py3-none-any.whl:

  • SHA256: b683f010faf80e79dc75b38a53621229b8e2ce9145c26774ec6a4e7434cd5cd0
  • MD5: 4e00e0bf011516887047b4719b3cc587
  • BLAKE2b-256: 53d86e8cdebb8dc56fc762eef77fa7b6cf50a0163af950f245fa5471aab46932

