A simple, unified LLM API interface for OpenAI, Gemini, Mistral, Groq, and more.

🔌 pyllmlib

pyllmlib is a lightweight, provider-agnostic Python package that gives you one simple interface to work with multiple Large Language Model (LLM) APIs.

Stop juggling different SDKs and client libraries: whether it's OpenAI GPT, Google Gemini, Mistral AI, or Groq, you write your code once and switch providers in seconds.

🎯 Ideal for developers who want to:

  • Experiment with different LLMs quickly
  • Build multi-provider AI applications
  • Avoid vendor lock-in with a consistent API

✨ Features

  • 🔌 Unified API – a single generate() function for all providers
  • 🌐 Multi-Provider Support – OpenAI, Gemini, Mistral, Groq (more coming soon)
  • 🧠 Consistent Message Format – the same request style across providers
  • 🔐 Flexible Config – use env vars, inline setup, or config files
  • 📦 Minimal Dependencies – only needs requests
  • 🔄 Quick Provider Switching – change models with one line
  • 🛡️ Automatic Token Handling – prevents overflows and context errors
  • 📜 Role-Based Conversations – system, user, and assistant messages
  • 🔧 Extensible – add your own providers with minimal code
  • 🚀 No Vendor Lock-In – swap providers without rewriting logic
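
The role-based conversation feature above refers to the de-facto message shape most LLM APIs share: a list of dicts with "role" and "content" keys. The helper below is a hypothetical sketch of how such a list is assembled; it is not pyllmlib's internal code.

```python
# Hypothetical sketch of the role-based message format mentioned above.
# This is NOT pyllmlib internals, just the de-facto shape most LLM APIs
# (OpenAI, Mistral, Groq, ...) expect.
def build_messages(system, history, user_input):
    """Assemble a provider-agnostic list of role/content messages."""
    messages = [{"role": "system", "content": system}]
    messages.extend(history)  # earlier {"role": ..., "content": ...} turns
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages("You are concise.", [], "What is an LLM?")
# msgs[0] is the system message, msgs[-1] is the new user turn
```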

📦 Installation

From PyPI (recommended):

pip install pyllmlib

From GitHub:

# Latest release
pip install git+https://github.com/yazirofi/pyllmlib.git

# Development version
git clone https://github.com/yazirofi/pyllmlib.git
cd pyllmlib
pip install -e .

Requirements:

  • Python 3.7+
  • requests (installed automatically)

🚀 Quick Start

from pyllmlib import config, generate

# Configure your preferred LLM
config(
    provider="openai",
    api_key="your-openai-api-key",
    model="gpt-4"
)

# Generate text
response = generate("Explain quantum computing in simple terms")
print(response)

✅ The same code works with any provider – just change the config.


⚙️ Configuration

1. Direct in Code

from pyllmlib import config

# OpenAI
config(provider="openai", api_key="sk-...", model="gpt-4")

# Google Gemini
config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

# Mistral
config(provider="mistral", api_key="...", model="mistral-large-latest")

# Groq
config(provider="groq", api_key="gsk_...", model="mixtral-8x7b-32768")

2. Environment Variables

LLM_PROVIDER=openai
LLM_API_KEY=sk-your-openai-key
LLM_MODEL=gpt-4
LLM_BASE_URL=https://api.openai.com/v1  # Optional

from pyllmlib import config, generate

config()  # Loads from env
print(generate("What is LLM?"))

💬 Usage Examples

Text Generation

from pyllmlib import config, generate

config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

print(generate("What is the capital of France?"))

prompt = """
Write a Python function to calculate factorial with error handling and docstring.
"""
print(generate(prompt))

Interactive Chat

from pyllmlib import config, chat, reset_chat

config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

while True:
    q = input("Ask: ")
    if not q:  # empty input ends the session
        break
    print(chat(q))

reset_chat()

Styling the Generated Output

from pyllmlib import config, chat, reset_chat, style, generate

config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

# Use style() instead of print() to render the formatted response
style(generate("Write a wish in one line"))

while True:
    q = input("Ask: ")
    if not q:
        break
    style(chat(q))  # style() replaces print() here too

reset_chat()

🌐 Supported Providers

✅ Currently Supported

  • OpenAI – gpt-4, gpt-4-turbo, gpt-3.5-turbo
  • Google Gemini – gemini-2.5-flash, gemini-1.5-flash
  • Mistral AI – mistral-large-latest, mistral-small-latest
  • Groq – mixtral-8x7b-32768, llama2-70b-4096, gemma-7b-it
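
Switching between the providers above really is a one-key change. A minimal sketch, where the lookup table and helper are illustrative glue, not part of pyllmlib's API (the provider and model names come from the list above):

```python
# Illustrative provider table; pass the resulting kwargs straight to
# pyllmlib's config(), e.g. config(**config_kwargs("gemini", "AIza...")).
PROVIDERS = {
    "openai":  "gpt-4",
    "gemini":  "gemini-2.5-flash",
    "mistral": "mistral-large-latest",
    "groq":    "mixtral-8x7b-32768",
}

def config_kwargs(provider, api_key):
    """Build the keyword arguments for config() for a given provider."""
    return {"provider": provider, "api_key": api_key, "model": PROVIDERS[provider]}
```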

🔜 Coming Soon

  • Anthropic Claude
  • Cohere
  • Ollama & LM Studio (local hosting)
  • Hugging Face models

๐Ÿ› Troubleshooting

  • Auth errors → check that the API key is valid and matches the configured provider
  • Model not found → verify the model name is spelled correctly and is available for that provider
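
Transient network or rate-limit failures can also surface as errors. pyllmlib's specific exception classes are not documented here, so this generic retry wrapper (an assumption, not library code) catches broadly:

```python
import time

# Generic retry helper you could wrap around generate(), e.g.
# call_with_retry(lambda: generate("Hello")). The broad `except Exception`
# is deliberate because pyllmlib's exception types are not documented here.
def call_with_retry(fn, attempts=3, delay=1.0):
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as err:
            last_err = err
            if i < attempts - 1:
                time.sleep(delay)  # back off before the next attempt
    raise last_err
```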

📊 Best Practices

from pyllmlib import config, generate

# ✅ Configure once and reuse for multiple prompts
config(provider="openai", api_key="sk-...", model="gpt-4")
for p in prompts:
    print(generate(p))

# ❌ Don't reconfigure on every request

💡 Cost Optimization: Use gpt-3.5-turbo for simple tasks and gpt-4 for complex ones.
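
One way to act on that tip is to route prompts by a rough complexity proxy. The helper and threshold below are purely illustrative:

```python
# Toy router for the cost tip above; prompt length is a crude proxy for
# complexity, and the 200-character cutoff is arbitrary - tune it for
# your workload.
def pick_model(prompt, threshold=200):
    return "gpt-3.5-turbo" if len(prompt) <= threshold else "gpt-4"

# pick_model("Summarize this tweet")  -> "gpt-3.5-turbo"
```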


📚 API Reference

  • config(**kwargs) → set provider, API key, and model
  • generate(prompt, **kwargs) → single text output
  • generate_stream(prompt, **kwargs) → streaming output
  • chat(message) → conversational interface with history
  • reset_chat() → clear conversation history
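
generate_stream() presumably yields text chunks incrementally; the exact chunking is not documented here, so the consumer below is a generic pattern that works with any iterable of strings:

```python
# Typical streaming-consumption pattern; usable with any iterator of text
# chunks, such as the one generate_stream() presumably returns, e.g.
# full_text = consume_stream(generate_stream("Tell me a story")).
def consume_stream(chunks):
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)  # show output as it arrives
        parts.append(chunk)
    print()
    return "".join(parts)  # the complete response, for later use
```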

📄 License

Licensed under the MIT License – see LICENSE.


👨‍💻 Author

Shay Yazirofi


⭐ If you find pyllmlib useful, please star the repo on GitHub!

📖 More docs & tutorials: Wiki

