# llmify
A lightweight, type-safe Python library for LLM chat completions. Inspired by LangChain's message API but simpler and less opinionated.
## Features
- 🎯 Simple, intuitive API for OpenAI and Azure OpenAI
- 📝 Type-safe structured outputs with Pydantic
- 🛠️ Built-in tool calling support
- 🌊 Async streaming
- 🖼️ Image analysis support
- ⚡ Minimal dependencies, maximum flexibility
## Installation

```bash
pip install py-llmify
```
## Quick Start

```python
import asyncio
from llmify import ChatOpenAI, UserMessage, SystemMessage

async def main():
    llm = ChatOpenAI(model="gpt-4o")
    response = await llm.invoke([
        SystemMessage("You are a helpful assistant"),
        UserMessage("What is 2+2?")
    ])
    print(response)  # "2+2 equals 4"

asyncio.run(main())
```
## Core Features

### Message Types
llmify provides LangChain-style message types for clean conversation management:
```python
from llmify import SystemMessage, UserMessage, AssistantMessage, ImageMessage

messages = [
    SystemMessage("You are a Python expert"),
    UserMessage("How do I read a file?"),
    AssistantMessage("You can use open() with a context manager"),
    UserMessage("Show me an example")
]
```
### Structured Outputs
Get type-safe, validated responses using Pydantic models:
```python
import asyncio
from pydantic import BaseModel
from llmify import ChatOpenAI, UserMessage

class Person(BaseModel):
    name: str
    age: int
    occupation: str

async def main():
    llm = ChatOpenAI(model="gpt-4o")
    structured_llm = llm.with_structured_output(Person)
    person = await structured_llm.invoke([
        UserMessage("Extract: John is 32 and works as a data scientist")
    ])
    print(f"{person.name}, {person.age}, {person.occupation}")
    # Output: John, 32, data scientist

asyncio.run(main())
```
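Under the hood, structured output amounts to parsing the model's JSON reply and validating each field's type before you use it. A stdlib-only sketch of that validation step (a dataclass stands in for the Pydantic model purely for illustration; this is not llmify's implementation):

```python
import json
from dataclasses import dataclass, fields

# Stdlib-only sketch of the validation Pydantic performs: parse the
# model's JSON reply and check each field's type before returning it.
@dataclass
class Person:
    name: str
    age: int
    occupation: str

def parse_person(raw: str) -> Person:
    data = json.loads(raw)
    person = Person(**data)
    for f in fields(Person):
        if not isinstance(getattr(person, f.name), f.type):
            raise TypeError(f"{f.name} must be {f.type.__name__}")
    return person

reply = '{"name": "John", "age": 32, "occupation": "data scientist"}'
person = parse_person(reply)
```

Pydantic adds coercion, nested models, and schema generation on top of this basic check, which is why the library leans on it rather than reimplementing validation.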
### Tool Calling

Define tools using simple Python functions with the `@tool` decorator:
```python
import asyncio
from llmify import ChatOpenAI, UserMessage, ToolResultMessage, tool

@tool
def get_weather(location: str, unit: str = "celsius") -> str:
    """Get current weather for a location"""
    return f"Weather in {location}: 22°{unit[0].upper()}, Sunny"

@tool
def search_web(query: str, max_results: int = 5) -> str:
    """Search the web"""
    return f"Found {max_results} results for '{query}'"

async def main():
    llm = ChatOpenAI(model="gpt-4o")
    tools = [get_weather, search_web]

    # Initial request
    messages = [UserMessage("What's the weather in Paris?")]
    response = await llm.invoke(messages, tools=tools)

    # Handle tool calls
    if response.has_tool_calls:
        messages.append(response.to_message())
        for tool_call in response.tool_calls:
            # Execute the tool
            result = tool_call.execute()
            # Add result to conversation
            messages.append(ToolResultMessage(
                tool_call_id=tool_call.id,
                content=result
            ))
        # Get final response
        final = await llm.invoke(messages, tools=tools)
        print(final.content)

asyncio.run(main())
```
**Key Points:**
- Type hints are automatically converted to JSON schema
- Tools are just decorated Python functions
- Built-in tool execution with `.execute()`
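The hints-to-schema conversion can be illustrated with the standard library alone. The sketch below shows one plausible way a `@tool`-style decorator could derive an OpenAI-style function schema from a signature; `function_to_schema` is a hypothetical helper, not llmify's actual implementation:

```python
import inspect
from typing import get_type_hints

# Minimal mapping from Python annotations to JSON-schema type names
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def function_to_schema(fn):
    """Derive a JSON-schema-style tool description from type hints."""
    hints = get_type_hints(fn)
    sig = inspect.signature(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": PY_TO_JSON.get(hints.get(name, str), "string")}
        # Parameters without defaults become required fields
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def get_weather(location: str, unit: str = "celsius") -> str:
    """Get current weather for a location"""
    ...

schema = function_to_schema(get_weather)
# schema["parameters"]["required"] contains only "location",
# because "unit" has a default value
```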
### Streaming

Stream responses token-by-token as they're generated:

```python
import asyncio
from llmify import ChatOpenAI, UserMessage

async def main():
    llm = ChatOpenAI(model="gpt-4o")
    async for chunk in llm.stream([
        UserMessage("Write a haiku about Python")
    ]):
        print(chunk, end="", flush=True)

asyncio.run(main())
```
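A common pattern is to display chunks live while also accumulating them for later use. A runnable sketch of that pattern, with a stand-in async generator (`fake_stream` replaces `llm.stream(...)` so the example works offline):

```python
import asyncio

async def fake_stream():
    # Stand-in for llm.stream(...): yields text chunks one at a time
    for token in ["Code ", "flows ", "like ", "water"]:
        yield token

async def main() -> str:
    parts = []
    async for chunk in fake_stream():
        print(chunk, end="", flush=True)  # live display
        parts.append(chunk)               # keep for the full response
    return "".join(parts)

full = asyncio.run(main())
```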
### Image Analysis

Analyze images using vision models:

```python
import asyncio
import base64
from llmify import ChatOpenAI, ImageMessage

async def main():
    llm = ChatOpenAI(model="gpt-4o")

    # Load and encode image
    with open("photo.jpg", "rb") as f:
        image_data = base64.b64encode(f.read()).decode("utf-8")

    response = await llm.invoke([
        ImageMessage(
            base64_data=image_data,
            media_type="image/jpeg",
            text="What's in this image?"
        )
    ])
    print(response)

asyncio.run(main())
```
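For reference, base64 is a lossless text encoding of the raw bytes, so the image payload round-trips exactly, at the cost of roughly a one-third size increase:

```python
import base64

# Round-trip check: the decoded payload matches the original bytes.
# The bytes below mimic the start of a JPEG file header.
raw = bytes([0xFF, 0xD8, 0xFF, 0xE0]) * 100
encoded = base64.b64encode(raw).decode("utf-8")
assert base64.b64decode(encoded) == raw

# base64 output is about 4/3 the size of the input
ratio = len(encoded) / len(raw)
```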
## Configuration

### Environment Variables

```bash
# OpenAI
export OPENAI_API_KEY="sk-..."

# Azure OpenAI
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"
```
### Model Parameters

Set defaults when initializing, or override them per request:

```python
# Set defaults
llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0.7,
    max_tokens=1000
)

# Override per request
response = await llm.invoke(
    messages=[UserMessage("Hi")],
    temperature=0.2,  # More deterministic
    max_tokens=500
)
```
**Supported Parameters:**
- `temperature`: creativity (0-2)
- `max_tokens`: maximum response length
- `top_p`: nucleus sampling
- `frequency_penalty`: reduce repetition
- `presence_penalty`: encourage diversity
- `stop`: stop sequences
- `seed`: deterministic outputs
## Providers

### OpenAI

```python
from llmify import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    api_key="sk-..."  # Optional if using env var
)
```
### Azure OpenAI

```python
from llmify import ChatAzureOpenAI

llm = ChatAzureOpenAI(
    model="gpt-4o",
    api_key="...",  # Optional if using env var
    azure_endpoint="https://<your-resource>.openai.azure.com/"  # Optional if using env var
)
```
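The "optional if using env var" behavior follows the usual resolution order: an explicit argument wins, otherwise the client falls back to the environment. A hypothetical sketch of that lookup (`resolve_api_key` is illustrative, not part of llmify's public API):

```python
import os

def resolve_api_key(explicit=None, env_var="OPENAI_API_KEY"):
    """Explicit argument wins; otherwise fall back to the environment."""
    key = explicit or os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} or pass api_key explicitly")
    return key

os.environ["OPENAI_API_KEY"] = "sk-from-env"
from_env = resolve_api_key()              # falls back to the environment
from_arg = resolve_api_key("sk-explicit") # explicit argument wins
```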
## Design Philosophy

### LangChain-Inspired, but Simpler

- Familiar message API (`SystemMessage`, `UserMessage`)
- Same interface across providers
- Less opinionated, more flexible
### Lightweight & Focused
- Thin wrapper around official SDKs
- Minimal dependencies
- No unnecessary abstractions
### Type-Safe & Modern
- Full type hints for IDE support
- Pydantic for validation
- Async-first design
## License
MIT