
A minimal, fast, and type-safe Python library for LLM chat completions with OpenAI and Azure OpenAI support

Project description

llmify

A lightweight, type-safe Python library for LLM chat completions. Inspired by LangChain's message API but simpler and less opinionated.

Features:

  • 🎯 Simple, intuitive API for OpenAI and Azure OpenAI
  • 📝 Type-safe structured outputs with Pydantic
  • 🛠️ Built-in tool calling support
  • 🌊 Async streaming
  • 🖼️ Image analysis support
  • ⚡ Minimal dependencies, maximum flexibility

Installation

pip install py-llmify

Quick Start

import asyncio
from llmify import ChatOpenAI, UserMessage, SystemMessage

async def main():
    llm = ChatOpenAI(model="gpt-4o")

    response = await llm.invoke([
        SystemMessage("You are a helpful assistant"),
        UserMessage("What is 2+2?")
    ])

    print(response)  # "2+2 equals 4"

asyncio.run(main())

Core Features

Message Types

llmify provides LangChain-style message types for clean conversation management:

from llmify import SystemMessage, UserMessage, AssistantMessage, ImageMessage

messages = [
    SystemMessage("You are a Python expert"),
    UserMessage("How do I read a file?"),
    AssistantMessage("You can use open() with a context manager"),
    UserMessage("Show me an example")
]
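For reference, message objects like these conventionally map onto the role/content dict format that the OpenAI chat API expects. A minimal stdlib sketch of that mapping (hypothetical — llmify's actual internals may differ):

```python
from dataclasses import dataclass

@dataclass
class Message:
    """Simplified stand-in for a llmify-style message object."""
    role: str
    content: str

def to_openai_format(messages):
    """Convert message objects to the role/content dicts the chat API expects."""
    return [{"role": m.role, "content": m.content} for m in messages]

conversation = [
    Message("system", "You are a Python expert"),
    Message("user", "How do I read a file?"),
    Message("assistant", "You can use open() with a context manager"),
]

payload = to_openai_format(conversation)
print(payload[0])  # {'role': 'system', 'content': 'You are a Python expert'}
```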

Structured Outputs

Get type-safe, validated responses using Pydantic models:

import asyncio
from pydantic import BaseModel
from llmify import ChatOpenAI, UserMessage

class Person(BaseModel):
    name: str
    age: int
    occupation: str

async def main():
    llm = ChatOpenAI(model="gpt-4o")

    structured_llm = llm.with_structured_output(Person)
    person = await structured_llm.invoke([
        UserMessage("Extract: John is 32 and works as a data scientist")
    ])

    print(f"{person.name}, {person.age}, {person.occupation}")
    # Output: John, 32, data scientist

asyncio.run(main())
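Under the hood, structured output typically means asking the model to emit JSON matching the schema, then parsing and validating it before handing back a typed object. A stdlib-only sketch of that validation step (`parse_person` is a hypothetical helper, not part of llmify):

```python
import json

# Hypothetical raw model output: JSON matching the Person schema.
raw = '{"name": "John", "age": 32, "occupation": "data scientist"}'

def parse_person(text: str) -> dict:
    """Parse the model's JSON response and minimally validate field types."""
    data = json.loads(text)
    if not isinstance(data.get("name"), str):
        raise ValueError("name must be a string")
    if not isinstance(data.get("age"), int):
        raise ValueError("age must be an integer")
    return data

person = parse_person(raw)
print(person["name"], person["age"])  # John 32
```

In practice Pydantic handles this parsing, validation, and error reporting for you, which is why llmify exposes it via `with_structured_output`.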

Tool Calling

Define tools using simple Python functions with the @tool decorator:

import asyncio
from llmify import ChatOpenAI, UserMessage, ToolResultMessage, tool

@tool
def get_weather(location: str, unit: str = "celsius") -> str:
    """Get current weather for a location"""
    return f"Weather in {location}: 22°{unit[0].upper()}, Sunny"

@tool
def search_web(query: str, max_results: int = 5) -> str:
    """Search the web"""
    return f"Found {max_results} results for '{query}'"

async def main():
    llm = ChatOpenAI(model="gpt-4o")
    tools = [get_weather, search_web]

    # Initial request
    messages = [UserMessage("What's the weather in Paris?")]
    response = await llm.invoke(messages, tools=tools)

    # Handle tool calls
    if response.has_tool_calls:
        messages.append(response.to_message())

        for tool_call in response.tool_calls:
            # Execute the tool
            result = tool_call.execute()

            # Add result to conversation
            messages.append(ToolResultMessage(
                tool_call_id=tool_call.id,
                content=result
            ))

        # Get final response
        final = await llm.invoke(messages, tools=tools)
        print(final.content)

asyncio.run(main())

Key Points:

  • Type hints are automatically converted to JSON schema
  • Tools are just decorated Python functions
  • Built-in tool execution with .execute()
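To illustrate the first point, here is a rough sketch of how a decorator can derive a JSON-schema-like description from a function's type hints. This uses only the standard library; `function_to_schema` is a hypothetical helper, not llmify's actual implementation:

```python
import inspect

def function_to_schema(fn) -> dict:
    """Derive a JSON-schema-like dict from a function signature,
    roughly what a @tool decorator might do internally."""
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    properties, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        properties[name] = {"type": type_map.get(param.annotation, "string")}
        # Parameters without a default value are required.
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": fn.__name__,
        "description": fn.__doc__ or "",
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def get_weather(location: str, unit: str = "celsius") -> str:
    """Get current weather for a location"""
    return f"Weather in {location}"

schema = function_to_schema(get_weather)
print(schema["parameters"]["required"])  # ['location']
```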

Streaming

Stream responses token-by-token as they're generated:

import asyncio
from llmify import ChatOpenAI, UserMessage

async def main():
    llm = ChatOpenAI(model="gpt-4o")

    async for chunk in llm.stream([
        UserMessage("Write a haiku about Python")
    ]):
        print(chunk, end="", flush=True)

asyncio.run(main())
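The `async for` loop above works because `.stream()` follows the standard async-generator pattern: chunks are yielded as they arrive rather than buffered into one response. A self-contained sketch of that pattern (`fake_stream` is a hypothetical stand-in, not a llmify API):

```python
import asyncio

async def fake_stream(text: str):
    """Yield text chunks one at a time, simulating a streaming LLM response."""
    for word in text.split():
        await asyncio.sleep(0)  # stand-in for network latency
        yield word + " "

async def main():
    chunks = []
    async for chunk in fake_stream("Silent snake glides by"):
        chunks.append(chunk)  # consume each chunk as it arrives
    return "".join(chunks)

result = asyncio.run(main())
print(result.strip())
```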

Image Analysis

Analyze images using vision models:

import asyncio
import base64
from llmify import ChatOpenAI, ImageMessage

async def main():
    llm = ChatOpenAI(model="gpt-4o")

    # Load and encode image
    with open("photo.jpg", "rb") as f:
        image_data = base64.b64encode(f.read()).decode('utf-8')

    response = await llm.invoke([
        ImageMessage(
            base64_data=image_data,
            media_type="image/jpeg",
            text="What's in this image?"
        )
    ])

    print(response)

asyncio.run(main())

Configuration

Environment Variables

# OpenAI
export OPENAI_API_KEY="sk-..."

# Azure OpenAI
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"

Model Parameters

Set defaults when initializing or override per request:

# Set defaults
llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0.7,
    max_tokens=1000
)

# Override per request
response = await llm.invoke(
    messages=[UserMessage("Hi")],
    temperature=0.2,  # More deterministic
    max_tokens=500
)

Supported Parameters:

  • temperature - Sampling randomness (0-2); lower is more deterministic
  • max_tokens - Maximum response length
  • top_p - Nucleus sampling
  • frequency_penalty - Reduce repetition
  • presence_penalty - Encourage diversity
  • stop - Stop sequences
  • seed - Deterministic outputs
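The defaults-plus-override behavior shown above is essentially a dict merge where per-request values win. A minimal sketch of that merge logic (`merge_params` is hypothetical; llmify's actual implementation may differ):

```python
# Defaults set at construction time, e.g. ChatOpenAI(temperature=0.7, ...)
DEFAULTS = {"temperature": 0.7, "max_tokens": 1000}

def merge_params(defaults: dict, overrides: dict) -> dict:
    """Per-request values override defaults; unset keys fall through."""
    return {**defaults, **overrides}

params = merge_params(DEFAULTS, {"temperature": 0.2, "max_tokens": 500})
print(params)  # {'temperature': 0.2, 'max_tokens': 500}
```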

Providers

OpenAI

from llmify import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    api_key="sk-..."  # Optional if using env var
)

Azure OpenAI

from llmify import ChatAzureOpenAI

llm = ChatAzureOpenAI(
    model="gpt-4o",
    api_key="...",  # Optional if using env var
    azure_endpoint="https://<your-resource>.openai.azure.com/"  # Optional if using env var
)

Design Philosophy

LangChain-Inspired, but Simpler

  • Familiar message API (SystemMessage, UserMessage)
  • Same interface across providers
  • Less opinionated, more flexible

Lightweight & Focused

  • Thin wrapper around official SDKs
  • Minimal dependencies
  • No unnecessary abstractions

Type-Safe & Modern

  • Full type hints for IDE support
  • Pydantic for validation
  • Async-first design

License

MIT



Download files


Source Distribution

py_llmify-0.1.1.tar.gz (12.0 kB)


Built Distribution


py_llmify-0.1.1-py3-none-any.whl (12.5 kB)


File details

Details for the file py_llmify-0.1.1.tar.gz.

File metadata

  • Download URL: py_llmify-0.1.1.tar.gz
  • Size: 12.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.9.2

File hashes

Hashes for py_llmify-0.1.1.tar.gz:

  • SHA256: a32d86ead2baa31906eedb67b8c7a7d23e91a9d2b0db5039cd3270a4ac14a515
  • MD5: 4ee2a87221f2309fb6b7730aea575902
  • BLAKE2b-256: 9d61b0944bc467a05228014e3be1cfe0209793e1383d6a1d115171263e80c94f
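To verify a downloaded file against a published hash, compute its digest with the standard library's hashlib and compare:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Compute the SHA256 hex digest of file bytes, for comparison
    against the published hash on PyPI."""
    return hashlib.sha256(data).hexdigest()

# For a real check, hash the downloaded archive:
# with open("py_llmify-0.1.1.tar.gz", "rb") as f:
#     digest = sha256_hex(f.read())
digest = sha256_hex(b"")  # demo on empty input
print(len(digest))  # 64 hex characters
```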


File details

Details for the file py_llmify-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: py_llmify-0.1.1-py3-none-any.whl
  • Size: 12.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.9.2

File hashes

Hashes for py_llmify-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 26f3b9c3b049ea72f2567dc3457f58e48fe89e2a34c3cd065263bc2edc84fc49
MD5 5557cb20be2af84cca6bcff5e5ff4a9c
BLAKE2b-256 2c06e5736ca6a4279c1d83620271b38b130214303c9ae1ecf683cb4fc04d8819
