

This project has been archived by its maintainers. No new releases are expected.

Project description

llm-factory

A flexible Python factory for working with multiple Large Language Model (LLM) providers (OpenAI, Anthropic, Gemini, Llama) using a unified interface, with robust configuration and extensibility.


Features

  • ✅ Unified interface for multiple LLM providers (OpenAI, Anthropic, Gemini, Llama)
  • ✅ Easy provider switching via configuration
  • ✅ Pydantic-based response validation
  • ✅ Secure configuration via environment variables
  • ✅ Extensible for new providers
  • ✅ Supports model, temperature, max tokens, and retries per provider
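
The bullets above describe a factory pattern: every provider sits behind one shared interface, and configuration selects the concrete client. A minimal, stdlib-only sketch of that idea (the names here are illustrative, not the package's actual internals):

```python
# Illustrative factory sketch -- hypothetical classes, not python-llm-factory's code.

class FakeOpenAI:
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"


class FakeAnthropic:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"


PROVIDERS = {"openai": FakeOpenAI, "anthropic": FakeAnthropic}


def make_client(provider: str):
    """Return a client exposing the same .complete() interface for any provider."""
    try:
        return PROVIDERS[provider]()
    except KeyError:
        raise ValueError(f"Unknown provider: {provider}") from None


client = make_client("openai")
print(client.complete("hello"))  # → [openai] hello
```

Because every client exposes the same method, the calling code never changes when the provider does; that is the property the real package builds on.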

Installation

pip install python-llm-factory

Configuration

The package uses environment variables for authentication and configuration. You can set these in a .env file or your environment:

# Required environment variables for each provider
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
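
Since the factory reads these keys from the process environment, a startup check can fail fast when one is missing. A small stdlib sketch of such a check (the helper name is mine, not part of the package):

```python
import os

# Keys the providers listed above expect; trim to the providers you actually use.
REQUIRED_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY"]


def missing_keys(required=REQUIRED_KEYS):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]


# Normally set in your shell or a .env file; set here only for demonstration.
os.environ["GEMINI_API_KEY"] = "example-key"
print(missing_keys(["GEMINI_API_KEY"]))  # → []
```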

Examples

Basic Usage: Creating a Completion

from pydantic import BaseModel, Field
from python_llm_factory import LLMFactory, Settings


# Structured response schema: the factory validates the model's output
# against this Pydantic model.
class CompletionModel(BaseModel):
    response: str = Field(description="Your response to the user.")
    reasoning: str = Field(description="Explain your reasoning for the response.")


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "If it takes 2 hours to dry 1 shirt out in the sun, how long will it take to dry 5 shirts?"},
]

# The provider and model are chosen via configuration; the call site
# below stays the same regardless of provider.
llm = LLMFactory(
    settings=Settings().gemini.gemini_2_5_flash,
)
completion = llm.completions_create(
    response_model=CompletionModel,
    messages=messages,
)
print(f"Response: {completion.response}\n")
print(f"Reasoning: {completion.reasoning}")
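
The Features list also mentions configurable retries per provider. The general pattern behind that, retrying a flaky call with exponential backoff, can be sketched in plain Python (illustrative only; not the package's actual retry implementation):

```python
import time


def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(); on failure, retry with exponential backoff, re-raising the last error."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))


# A call that fails twice, then succeeds -- stands in for a transient API error.
calls = {"count": 0}


def flaky_completion():
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient provider error")
    return "ok"


print(with_retries(flaky_completion))  # → ok
```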

🤝 Contributing

If you have a helpful tool, pattern, or improvement to suggest:

  • Fork the repo
  • Create a new branch
  • Submit a pull request

I welcome additions that promote clean, productive, and maintainable development.

🙏 Thanks

Thanks for exploring this repository!
Happy coding!
