
A flexible Python factory for working with multiple Large Language Model (LLM) providers (OpenAI, Anthropic, Gemini, Llama) using a unified interface, with robust configuration and extensibility.

This project has been archived by its maintainers. No new releases are expected.

Project description

llm-factory



Features

  • ✅ Unified interface for multiple LLM providers (OpenAI, Anthropic, Gemini, Llama)
  • ✅ Easy provider switching via configuration
  • ✅ Pydantic-based response validation
  • ✅ Environment variable-based secure configuration
  • ✅ Extensible for new providers
  • ✅ Supports model, temperature, max tokens, and retries per provider
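
The factory pattern behind these features can be sketched conceptually. The snippet below is not this library's implementation — the `Provider` enum, `ProviderConfig` fields, and model names are illustrative assumptions — but it shows how an enum-keyed registry makes provider switching and extension a one-line change:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict


class Provider(Enum):
    OPENAI = "openai"
    ANTHROPIC = "anthropic"
    GEMINI = "gemini"


@dataclass
class ProviderConfig:
    # Per-provider settings: model, temperature, max tokens, and retries
    model: str
    temperature: float = 0.7
    max_tokens: int = 1024
    max_retries: int = 3


# Registry mapping each provider to its defaults; adding a new provider
# is just one more entry here — nothing else changes for callers.
_REGISTRY: Dict[Provider, ProviderConfig] = {
    Provider.OPENAI: ProviderConfig(model="gpt-4o"),
    Provider.ANTHROPIC: ProviderConfig(model="claude-3-5-sonnet"),
    Provider.GEMINI: ProviderConfig(model="gemini-1.5-flash"),
}


def get_config(provider: Provider) -> ProviderConfig:
    """Look up the per-provider settings behind the unified interface."""
    return _REGISTRY[provider]
```

Callers only ever see the enum and the lookup, which is what makes switching providers a configuration change rather than a code change.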

Installation

pip install python-llm-factory

Configuration

The package uses environment variables for authentication and configuration. You can set these in a .env file or your environment:

# Required environment variables for each provider
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
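
As a minimal sketch of how such configuration is typically consumed (the `require_api_key` helper is hypothetical, not part of this package; for `.env` files, python-dotenv's `load_dotenv()` populates `os.environ` the same way):

```python
import os


def require_api_key(name: str) -> str:
    """Fetch a provider API key from the environment, failing loudly if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value


# Simulate a configured environment for demonstration purposes
os.environ["GEMINI_API_KEY"] = "your_gemini_api_key"
key = require_api_key("GEMINI_API_KEY")
```

Failing at startup with a named variable is easier to debug than an authentication error deep inside a provider SDK.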

Examples

Basic Usage: Creating a Completion

from pydantic import BaseModel, Field
from python_llm_factory.llm_factory import LLMProvider, LLMFactory


class CompletionModel(BaseModel):
    response: str = Field(description="Your response to the user.")
    reasoning: str = Field(description="Explain your reasoning for the response.")


# Conversation history in the standard chat-message format
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": "If it takes 2 hours to dry 1 shirt out in the sun, how long will it take to dry 5 shirts?",
    },
]

# Pick a provider; switching is just a different enum value
llm = LLMFactory(provider=LLMProvider.GEMINI)

# The provider's output is validated against CompletionModel before being returned
completion = llm.create_completion(
    response_model=CompletionModel,
    messages=messages,
)
print(f"Response: {completion.response}\n")
print(f"Reasoning: {completion.reasoning}")
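
To illustrate what the Pydantic-based response validation buys you, the sketch below uses only `pydantic` itself, with a hand-built payload standing in for a provider response (the payload contents are invented for the example):

```python
from pydantic import BaseModel, Field, ValidationError


class CompletionModel(BaseModel):
    response: str = Field(description="Your response to the user.")
    reasoning: str = Field(description="Explain your reasoning for the response.")


# A well-formed payload, as a provider might return it
payload = {"response": "Still about 2 hours.", "reasoning": "Shirts dry in parallel."}
completion = CompletionModel(**payload)

# A malformed payload (missing `reasoning`) fails validation immediately,
# instead of silently handing your code a broken object.
try:
    CompletionModel(**{"response": "Only half the answer."})
    validated = True
except ValidationError:
    validated = False
```

The point is that downstream code can rely on `completion.response` and `completion.reasoning` existing and being strings, regardless of which provider produced them.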

🤝 Contributing

If you have a helpful tool, pattern, or improvement to suggest:

  • Fork the repo
  • Create a new branch
  • Submit a pull request

I welcome additions that promote clean, productive, and maintainable development.

🙏 Thanks

Thanks for exploring this repository!
Happy coding!

Project details


Download files

Download the file for your platform.

Source Distribution

python_llm_factory-0.0.3.tar.gz (4.9 kB)


Built Distribution


python_llm_factory-0.0.3-py3-none-any.whl (5.6 kB)


File details

Details for the file python_llm_factory-0.0.3.tar.gz.

File metadata

  • File name: python_llm_factory-0.0.3.tar.gz
  • Size: 4.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for python_llm_factory-0.0.3.tar.gz

  • SHA256: 789e69c4aaac9ef2b7ee2668095907e841207df15225a192cbd32ee5cfac0fe7
  • MD5: 209d3102082bc641bd3f48756d9c6edb
  • BLAKE2b-256: 8f3b091b7ab944da787e19db5050ae362d74a5490ff21faad72bb79022d1a7b5


File details

Details for the file python_llm_factory-0.0.3-py3-none-any.whl.

File hashes

Hashes for python_llm_factory-0.0.3-py3-none-any.whl

  • SHA256: 56bb2342e737ca4a87cac81260b3fa832a72677dd2a4f71f7637b0549ce034ec
  • MD5: c60e2bbe66cf9ce7b5e7db8b813d3a5a
  • BLAKE2b-256: fbf7f391577fc3c884ed118afa229e545ad188afd0a0ad3245218bbccafb6b23

