
A minimal, fast, and type-safe Python library for LLM chat completions with OpenAI and Azure OpenAI support

Project description

llmify

A lightweight, type-safe Python library for LLM chat completions. Inspired by LangChain's message API but simpler and less opinionated.

Features:

  • 🎯 Simple, intuitive API for OpenAI and Azure OpenAI
  • 📝 Type-safe structured outputs with Pydantic
  • 🛠️ Built-in tool calling support
  • 🌊 Async streaming
  • 🖼️ Image analysis support
  • ⚡ Minimal dependencies, maximum flexibility

Installation

pip install py-llmify

Quick Start

import asyncio
from llmify import ChatOpenAI, UserMessage, SystemMessage

async def main():
    llm = ChatOpenAI(model="gpt-4o")

    response = await llm.invoke([
        SystemMessage("You are a helpful assistant"),
        UserMessage("What is 2+2?")
    ])

    print(response)  # "2+2 equals 4"

asyncio.run(main())

Core Features

Message Types

llmify provides LangChain-style message types for clean conversation management:

from llmify import SystemMessage, UserMessage, AssistantMessage, ImageMessage

messages = [
    SystemMessage("You are a Python expert"),
    UserMessage("How do I read a file?"),
    AssistantMessage("You can use open() with a context manager"),
    UserMessage("Show me an example")
]
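Under the hood, message objects like these typically serialize to the role/content dicts the Chat Completions API expects. A minimal sketch of that mapping (the class names mirror llmify's, but this implementation is illustrative, not the library's actual code):

```python
from dataclasses import dataclass

# Illustrative stand-ins for llmify's message types: each message
# carries an API role plus its text content.
@dataclass
class Message:
    role: str
    content: str

def SystemMessage(content):    return Message("system", content)
def UserMessage(content):      return Message("user", content)
def AssistantMessage(content): return Message("assistant", content)

def to_openai_payload(messages):
    """Serialize messages into the role/content dicts the API expects."""
    return [{"role": m.role, "content": m.content} for m in messages]

payload = to_openai_payload([
    SystemMessage("You are a Python expert"),
    UserMessage("How do I read a file?"),
])
print(payload[0])  # {'role': 'system', 'content': 'You are a Python expert'}
```

Keeping messages as plain role/content pairs is what lets the same list work across providers.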

Structured Outputs

Get type-safe, validated responses using Pydantic models:

from pydantic import BaseModel
from llmify import ChatOpenAI, UserMessage

class Person(BaseModel):
    name: str
    age: int
    occupation: str

async def main():
    llm = ChatOpenAI(model="gpt-4o")

    structured_llm = llm.with_structured_output(Person)
    person = await structured_llm.invoke([
        UserMessage("Extract: John is 32 and works as a data scientist")
    ])

    print(f"{person.name}, {person.age}, {person.occupation}")
    # Output: John, 32, data scientist

asyncio.run(main())
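A structured-output wrapper like this generally works by sending the model's JSON schema to the API and validating the JSON the model returns. A stdlib-only sketch of the validation half, using a dataclass in place of Pydantic (which does this far more rigorously):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class Person:
    name: str
    age: int
    occupation: str

def parse_structured(raw_json: str, model):
    """Validate an LLM's JSON reply against a dataclass's field names and types."""
    data = json.loads(raw_json)
    for f in fields(model):
        if f.name not in data:
            raise ValueError(f"missing field: {f.name}")
        if not isinstance(data[f.name], f.type):
            raise TypeError(f"{f.name} should be {f.type.__name__}")
    return model(**data)

# Simulated model reply that conforms to the schema
reply = '{"name": "John", "age": 32, "occupation": "data scientist"}'
person = parse_structured(reply, Person)
print(person.name, person.age)  # John 32
```

If the reply drops a field or returns the wrong type, validation raises instead of silently passing bad data downstream.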

Tool Calling

Define tools using simple Python functions with the @tool decorator:

from llmify import ChatOpenAI, UserMessage, ToolResultMessage, tool

@tool
def get_weather(location: str, unit: str = "celsius") -> str:
    """Get current weather for a location"""
    return f"Weather in {location}: 22°{unit[0].upper()}, Sunny"

@tool
def search_web(query: str, max_results: int = 5) -> str:
    """Search the web"""
    return f"Found {max_results} results for '{query}'"

async def main():
    llm = ChatOpenAI(model="gpt-4o")
    tools = [get_weather, search_web]

    # Initial request
    messages = [UserMessage("What's the weather in Paris?")]
    response = await llm.invoke(messages, tools=tools)

    # Handle tool calls
    if response.has_tool_calls:
        messages.append(response.to_message())

        for tool_call in response.tool_calls:
            # Execute the tool
            result = tool_call.execute()

            # Add result to conversation
            messages.append(ToolResultMessage(
                tool_call_id=tool_call.id,
                content=result
            ))

        # Get final response
        final = await llm.invoke(messages, tools=tools)
        print(final.content)

asyncio.run(main())

Key Points:

  • Type hints are automatically converted to JSON schema
  • Tools are just decorated Python functions
  • Built-in tool execution with .execute()
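The hint-to-schema conversion can be sketched with `inspect` alone. llmify's actual implementation may differ; the type map below covers only a few primitives and is purely illustrative:

```python
import inspect

# Minimal Python-type -> JSON-schema-type map (illustrative, primitives only)
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def function_to_schema(fn):
    """Build an OpenAI-style tool schema from a function's signature and docstring."""
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> the model must supply it
    return {
        "name": fn.__name__,
        "description": fn.__doc__ or "",
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def get_weather(location: str, unit: str = "celsius") -> str:
    """Get current weather for a location"""
    ...

schema = function_to_schema(get_weather)
print(schema["parameters"]["required"])  # ['location']
```

Note how the default on `unit` makes it optional in the schema, while `location` becomes a required parameter.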

Streaming

Stream responses token-by-token as they're generated:

async def main():
    llm = ChatOpenAI(model="gpt-4o")

    async for chunk in llm.stream([
        UserMessage("Write a haiku about Python")
    ]):
        print(chunk, end="", flush=True)

asyncio.run(main())
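The streaming interface boils down to consuming an async generator. A provider-free sketch of the same pattern, where `fake_stream` is a hypothetical stand-in for `llm.stream()`:

```python
import asyncio

async def fake_stream(text):
    """Stand-in for llm.stream(): yield tokens one at a time, like an API would."""
    for token in text.split():
        await asyncio.sleep(0)  # yield control, as a network read would
        yield token + " "

async def main():
    chunks = []
    async for chunk in fake_stream("Code flows like water"):
        chunks.append(chunk)  # print(chunk, end="", flush=True) in real use
    return "".join(chunks)

result = asyncio.run(main())
print(result)
```

Because each chunk arrives as soon as it is generated, the first token can be shown to the user long before the full response completes.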

Image Analysis

Analyze images using vision models:

import base64
from llmify import ChatOpenAI, ImageMessage

async def main():
    llm = ChatOpenAI(model="gpt-4o")

    # Load and encode image
    with open("photo.jpg", "rb") as f:
        image_data = base64.b64encode(f.read()).decode('utf-8')

    response = await llm.invoke([
        ImageMessage(
            base64_data=image_data,
            media_type="image/jpeg",
            text="What's in this image?"
        )
    ])

    print(response)

asyncio.run(main())
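On the wire, an image message typically becomes a multimodal content array containing a base64 data URL. A sketch of that encoding, assuming the OpenAI content-part format (the exact fields llmify emits are an assumption):

```python
import base64

def image_content(image_bytes: bytes, media_type: str, text: str):
    """Build an OpenAI-style multimodal content array with a base64 data URL."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return [
        {"type": "text", "text": text},
        {"type": "image_url",
         "image_url": {"url": f"data:{media_type};base64,{b64}"}},
    ]

# Three bytes of a JPEG magic number stand in for a real image file
content = image_content(b"\xff\xd8\xff", "image/jpeg", "What's in this image?")
print(content[1]["image_url"]["url"][:30])
```

Packing the image inline as a data URL avoids hosting the file anywhere, at the cost of a roughly 4/3 size increase from base64.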

Configuration

Environment Variables

# OpenAI
export OPENAI_API_KEY="sk-..."

# Azure OpenAI
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"

Model Parameters

Set defaults at initialization, or override them per request:

# Set defaults
llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0.7,
    max_tokens=1000
)

# Override per request
response = await llm.invoke(
    messages=[UserMessage("Hi")],
    temperature=0.2,  # More deterministic
    max_tokens=500
)

Supported Parameters:

  • temperature - Sampling temperature (0-2); lower is more deterministic
  • max_tokens - Maximum response length in tokens
  • top_p - Nucleus sampling threshold
  • frequency_penalty - Penalize repeated tokens
  • presence_penalty - Encourage new topics
  • stop - Stop sequences
  • seed - Best-effort deterministic outputs
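Per-request overrides amount to merging call-time kwargs over the constructor defaults. A dict-merge sketch of that precedence (the class here is a hypothetical stand-in, not llmify's internals):

```python
class FakeLLM:
    """Illustrative: per-call kwargs take precedence over constructor defaults."""
    def __init__(self, **defaults):
        self.defaults = defaults

    def effective_params(self, **overrides):
        # Later keys win, so overrides shadow defaults of the same name
        return {**self.defaults, **overrides}

llm = FakeLLM(model="gpt-4o", temperature=0.7, max_tokens=1000)
params = llm.effective_params(temperature=0.2, max_tokens=500)
print(params)  # {'model': 'gpt-4o', 'temperature': 0.2, 'max_tokens': 500}
```

Parameters not mentioned in the call (here `model`) fall through to their defaults unchanged.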

Providers

OpenAI

from llmify import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    api_key="sk-..."  # Optional if using env var
)

Azure OpenAI

from llmify import ChatAzureOpenAI

llm = ChatAzureOpenAI(
    model="gpt-4o",
    api_key="...",  # Optional if using env var
    azure_endpoint="https://<your-resource>.openai.azure.com/"  # Optional if using env var
)

Design Philosophy

LangChain-Inspired, but Simpler

  • Familiar message API (SystemMessage, UserMessage)
  • Same interface across providers
  • Less opinionated, more flexible

Lightweight & Focused

  • Thin wrapper around official SDKs
  • Minimal dependencies
  • No unnecessary abstractions

Type-Safe & Modern

  • Full type hints for IDE support
  • Pydantic for validation
  • Async-first design

License

MIT

Project details


Download files

Download the file for your platform.

Source Distribution

py_llmify-0.1.4.tar.gz (12.1 kB)

Uploaded Source

Built Distribution


py_llmify-0.1.4-py3-none-any.whl (12.6 kB)

Uploaded Python 3

File details

Details for the file py_llmify-0.1.4.tar.gz.

File metadata

  • Download URL: py_llmify-0.1.4.tar.gz
  • Size: 12.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.9.2

File hashes

Hashes for py_llmify-0.1.4.tar.gz
Algorithm Hash digest
SHA256 9c5cdbb259fb7c3dec3180b486b3930a1eedf9491af44fdc0019653522c11c6e
MD5 c6c55e271720fa23ae6205df0bec2cb3
BLAKE2b-256 fa3d456435ffee8ba373deb0966cd17c91933865802ea6919cf77d314371b4e0


File details

Details for the file py_llmify-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: py_llmify-0.1.4-py3-none-any.whl
  • Size: 12.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.9.2

File hashes

Hashes for py_llmify-0.1.4-py3-none-any.whl
Algorithm Hash digest
SHA256 d6185acc90fa903da5b1e98bb929897fc4853ae3adeaf6d69cc0c4218fee49e4
MD5 39b722d6303a13df307eabf79921830d
BLAKE2b-256 d04f0143bc5d355e6c73613594df38fcd0044389c9b70d22bb8bcce56ccf1565

