A simple, unified LLM API interface for OpenAI, Gemini, Mistral, Groq, and more.
# pyllmlib
pyllmlib is a lightweight, provider-agnostic Python package that gives you one simple interface to multiple Large Language Model (LLM) APIs.
Stop juggling different SDKs and client libraries: whether it's OpenAI GPT, Google Gemini, Mistral AI, or Groq, you write your code once and switch providers in seconds.
Ideal for developers who want to:
- Experiment with different LLMs quickly
- Build multi-provider AI applications
- Avoid vendor lock-in with a consistent API
## Features

- Unified API – a single `generate()` function for all providers
- Multi-Provider Support – OpenAI, Gemini, Mistral, Groq (with more coming soon)
- Consistent Message Format – same request style across providers
- Flexible Config – use env vars, inline setup, or config files
- Minimal Dependencies – only needs `requests`
- Quick Provider Switching – change models with one line
- Automatic Token Handling – prevents overflows and context errors
- Role-Based Conversations – system, user, and assistant messages
- Extensible – add your own providers with minimal code
- No Vendor Lock-In – swap providers without rewriting logic (see the sketch below)
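
To make the no-lock-in point concrete, here is a minimal sketch that runs the same prompt against two providers using only the `config()` and `generate()` calls documented below; the keys and model names are placeholders you would replace with your own.

```python
from pyllmlib import config, generate

# Placeholder credentials and models; substitute your own values.
providers = [
    {"provider": "openai", "api_key": "sk-...", "model": "gpt-4"},
    {"provider": "gemini", "api_key": "AIza...", "model": "gemini-2.5-flash"},
]

prompt = "Explain quantum computing in simple terms"

for settings in providers:
    config(**settings)  # switching providers is one call
    print(settings["provider"], "->", generate(prompt))
```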
## Installation

From PyPI (recommended):

```bash
pip install pyllmlib
```

From GitHub:

```bash
# Latest release
pip install git+https://github.com/yazirofi/pyllmlib.git

# Development version
git clone https://github.com/yazirofi/pyllmlib.git
cd pyllmlib
pip install -e .
```
Requirements:

- Python 3.7+
- `requests` (installed automatically)
## Quick Start

```python
from pyllmlib import config, generate

# Configure your preferred LLM
config(
    provider="openai",
    api_key="your-openai-api-key",
    model="gpt-4",
)

# Generate text
response = generate("Explain quantum computing in simple terms")
print(response)
```
Same code works with any provider – just change the config.
## Configuration

### 1. Direct in Code
```python
from pyllmlib import config

# OpenAI
config(provider="openai", api_key="sk-...", model="gpt-4")

# Google Gemini
config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

# Mistral
config(provider="mistral", api_key="...", model="mistral-large-latest")

# Groq
config(provider="groq", api_key="gsk_...", model="mixtral-8x7b-32768")
```
### 2. Environment Variables

```bash
LLM_PROVIDER=openai
LLM_API_KEY=sk-your-openai-key
LLM_MODEL=gpt-4
LLM_BASE_URL=https://api.openai.com/v1  # Optional
```

```python
from pyllmlib import config, generate

config()  # Loads from env
print(generate("What is LLM?"))
```
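
If you would rather keep everything in Python (for example in a test harness), a minimal sketch is to set the same variables through `os.environ` before calling `config()`; the values below are placeholders.

```python
import os
from pyllmlib import config, generate

# Same variable names as the shell example above; values are placeholders.
os.environ["LLM_PROVIDER"] = "gemini"
os.environ["LLM_API_KEY"] = "AIza..."
os.environ["LLM_MODEL"] = "gemini-2.5-flash"

config()  # picks the settings up from the environment
print(generate("What is LLM?"))
```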
## Usage Examples

### Text Generation

```python
from pyllmlib import config, generate

config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

print(generate("What is the capital of France?"))

prompt = """
Write a Python function to calculate factorial with error handling and a docstring.
"""
print(generate(prompt))
```
### Interactive Chat

```python
from pyllmlib import config, chat, reset_chat

config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

while q := input("Ask: "):
    print(chat(q))

reset_chat()
```
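
Because `chat()` keeps the conversation history until `reset_chat()` is called, follow-up questions can refer back to earlier turns. A scripted (non-interactive) sketch of that behavior:

```python
from pyllmlib import config, chat, reset_chat

config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

print(chat("My name is Ada. What is the capital of France?"))
print(chat("What is my name?"))  # relies on the stored conversation history

reset_chat()  # clear the history before starting a new conversation
```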
### Styling the Generated Output

```python
from pyllmlib import config, chat, reset_chat, style, generate

config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

# Use style() instead of print() to render the response
style(generate("Write the wish in one line"))

while q := input("Ask: "):
    style(chat(q))  # again, style() instead of print()

reset_chat()
```
## Supported Providers

### Currently Supported

- OpenAI – `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`
- Google Gemini – `gemini-2.5-flash`, `gemini-1.5-flash`
- Mistral AI – `mistral-large-latest`, `mistral-small-latest`
- Groq – `mixtral-8x7b-32768`, `llama2-70b-4096`, `gemma-7b-it`
### Coming Soon
- Anthropic Claude
- Cohere
- Ollama & LM Studio (local hosting)
- Hugging Face models
## Troubleshooting

- Auth Errors – check that your API key is valid and uses the right format for the provider (e.g. `sk-...` for OpenAI, `gsk_...` for Groq)
- Model Not Found – verify the model name is correct for the configured provider
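
The README does not document which exception types pyllmlib raises, so while debugging you can wrap calls in a broad handler and inspect the message; this is a sketch under that assumption, and you should narrow the `except` clause once you know the actual exception classes.

```python
from pyllmlib import config, generate

config(provider="openai", api_key="sk-...", model="gpt-4")

try:
    print(generate("ping"))
except Exception as exc:  # exact exception types are undocumented; narrow this in real code
    # A 401/403 in the message usually means a bad API key;
    # a 404 usually means a wrong model name.
    print(f"Request failed: {exc}")
```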
## Best Practices

```python
from pyllmlib import config, generate

prompts = ["First prompt", "Second prompt"]

# Do: configure once and reuse it for multiple prompts
config(provider="openai", api_key="sk-...", model="gpt-4")
for p in prompts:
    print(generate(p))

# Don't: reconfigure on every request
```
Cost Optimization: use `gpt-3.5-turbo` for simple tasks and `gpt-4` for complex ones.
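
One way to apply that tip with the documented `config()` call is a small routing helper; the simple/complex split below is a toy assumption for illustration, not a pyllmlib feature.

```python
from pyllmlib import config, generate

def ask(prompt: str, complex_task: bool = False) -> str:
    # Toy routing rule (an assumption, not part of pyllmlib):
    # cheap model for simple prompts, stronger model for hard ones.
    model = "gpt-4" if complex_task else "gpt-3.5-turbo"
    config(provider="openai", api_key="sk-...", model=model)
    return generate(prompt)

print(ask("What is 2 + 2?"))
print(ask("Draft a data-migration plan for a legacy CRM", complex_task=True))
```

If you call this in a tight loop, group prompts by model so you reconfigure once per batch, in line with the best practice above.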
## API Reference

- `config(**kwargs)` – set provider, API key, and model
- `generate(prompt, **kwargs)` – single text output
- `generate_stream(prompt, **kwargs)` – streaming output
- `chat(message)` – conversational interface
- `reset_chat()` – clear conversation history
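
The reference lists `generate_stream()`, but the README shows no example. Assuming it is a generator that yields text chunks as they arrive (an assumption worth checking against the source), usage would look roughly like this:

```python
from pyllmlib import config, generate_stream

config(provider="openai", api_key="sk-...", model="gpt-4")

# Assumption: generate_stream() yields text chunks incrementally.
for chunk in generate_stream("Tell me a short story about a lighthouse"):
    print(chunk, end="", flush=True)
print()
```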
## License

Licensed under the MIT License; see LICENSE.
## Author
Shay Yazirofi
- GitHub: @yazirofi
- Email: yazirofi@gmail.com
If you find pyllmlib useful, please star the repo on GitHub! More docs and tutorials: Wiki