Parallel inference calls to LLM APIs using Polars dataframes with Pydantic-based structured outputs

Polar Llama

Overview

Polar Llama is a Python library for making parallel inference calls to LLM APIs directly from Polars dataframes. By dispatching many API requests concurrently, it significantly speeds up bulk inference compared to handling requests serially.

Key Features

  • Parallel Inference: Send multiple inference requests to LLM provider APIs concurrently, without waiting for each individual request to complete.
  • Integration with Polars: Uses Polars dataframes to organize and handle requests, leveraging Polars' efficient data processing capabilities.
  • Easy to Use: A clean, straightforward interface for sending queries and retrieving responses.
  • Multi-Message Support: Create and process conversations with multiple messages in context, supporting complex multi-turn interactions.
  • Multiple Provider Support: Works with OpenAI, Anthropic, Gemini, Groq, and AWS Bedrock models, giving you flexibility in your AI infrastructure.
  • Structured Outputs: Define response schemas using Pydantic models for type-safe, validated LLM outputs returned as Polars Structs with direct field access.

Installation

To install Polar Llama, you can use pip:

pip install polar-llama

Alternatively, for development, you can build and install from a clone of the repository using maturin:

maturin develop

Example Usage

Here's how you can use Polar Llama to send multiple inference requests in parallel:

import polars as pl
from polar_llama import string_to_message, inference_async, Provider
import dotenv

dotenv.load_dotenv()

# Example questions
questions = [
    'What is the capital of France?',
    'What is the difference between polars and pandas?'
]

# Creating a dataframe with questions
df = pl.DataFrame({'Questions': questions})

# Adding prompts to the dataframe
df = df.with_columns(
    prompt=string_to_message("Questions", message_type='user')
)

# Sending parallel inference requests
df = df.with_columns(
    answer=inference_async('prompt', provider=Provider.OPENAI, model='gpt-4o-mini')
)
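
Once the calls return, each answer sits alongside its question in the dataframe. A minimal way to inspect the results (standard Polars iteration; column names taken from the example above):

# Print each question with its model-generated answer
for question, answer in df.select(["Questions", "answer"]).iter_rows():
    print(f"Q: {question}\nA: {answer}\n")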

Multi-Message Conversations

Polar Llama now supports multi-message conversations, allowing you to maintain context across multiple turns:

import polars as pl
from polar_llama import string_to_message, combine_messages, inference_messages
import dotenv

dotenv.load_dotenv()

# Create a dataframe with system prompts and user questions
df = pl.DataFrame({
    "system_prompt": [
        "You are a helpful assistant.",
        "You are a math expert."
    ],
    "user_question": [
        "What's the weather like today?",
        "Solve x^2 + 5x + 6 = 0"
    ]
})

# Convert to structured messages
df = df.with_columns(
    system_message=string_to_message("system_prompt", message_type="system"),
    user_message=string_to_message("user_question", message_type="user"),
)

# Combine into conversations
df = df.with_columns(
    conversation=combine_messages("system_message", "user_message")
)

# Send to the model and get responses
df = df.with_columns(
    response=inference_messages("conversation", provider="openai", model="gpt-4")
)
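
One way to carry the exchange into a further turn is to fold the model's reply back into the conversation. This is a sketch, not confirmed library behavior: it assumes message_type="assistant" is accepted alongside "system" and "user", and that combine_messages takes more than two message columns.

# Hypothetical follow-up turn (assumes message_type="assistant" is supported
# and that combine_messages accepts more than two message columns)
df = df.with_columns(
    assistant_message=string_to_message("response", message_type="assistant")
)
df = df.with_columns(
    conversation=combine_messages("system_message", "user_message", "assistant_message")
)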

AWS Bedrock Support

Polar Llama now supports AWS Bedrock models. To use Bedrock, ensure you have AWS credentials configured (via AWS CLI, environment variables, or IAM roles):

import polars as pl
from polar_llama import string_to_message, inference_async
import dotenv

dotenv.load_dotenv()

# Example questions
questions = [
    'What is the capital of France?',
    'Explain quantum computing in simple terms.'
]

# Creating a dataframe with questions
df = pl.DataFrame({'Questions': questions})

# Adding prompts to the dataframe
df = df.with_columns(
    prompt=string_to_message("Questions", message_type='user')
)

# Using AWS Bedrock with Claude model
df = df.with_columns(
    answer=inference_async('prompt', provider='bedrock', model='anthropic.claude-3-haiku-20240307-v1:0')
)
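
If you take the environment-variable route for credentials, the standard AWS variable names apply. A minimal sketch with placeholder values (prefer IAM roles or aws configure in production):

import os

# Standard AWS credential environment variables (placeholder values)
os.environ["AWS_ACCESS_KEY_ID"] = "<your-access-key-id>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<your-secret-access-key>"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"  # region where your Bedrock models are enabled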

Structured Outputs with Pydantic

Polar Llama supports structured outputs using Pydantic models. Define your response schema as a Pydantic BaseModel, and the LLM will return validated, type-safe data as a Polars Struct:

import polars as pl
from polar_llama import inference_async, Provider
from pydantic import BaseModel

# Define your response schema
class MovieRecommendation(BaseModel):
    title: str
    genre: str
    year: int
    reason: str

# Create a dataframe
df = pl.DataFrame({
    'prompt': ['Recommend a great sci-fi movie from the 2010s']
})

# Get structured output
df = df.with_columns(
    recommendation=inference_async(
        pl.col('prompt'),
        provider=Provider.OPENAI,
        model='gpt-4o-mini',
        response_model=MovieRecommendation
    )
)

# Access struct fields directly!
print(df['recommendation'].struct.field('title')[0])  # "Interstellar"
print(df['recommendation'].struct.field('year')[0])   # 2014
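
Because the result is an ordinary Polars Struct column, you can also flatten it into top-level columns with standard Polars (no Polar Llama-specific API involved):

# Expand the struct into one column per schema field
flat = df.unnest("recommendation")
print(flat.select(["title", "genre", "year", "reason"]))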

Key Features:

  • Type Safety: Responses are validated against your Pydantic schema
  • Direct Field Access: Use .struct.field('field_name') to access individual fields
  • Error Handling: Built-in _error, _details, and _raw fields for graceful error handling
  • Works Everywhere: Compatible with inference_async(), inference(), and inference_messages()
  • Multi-Provider: Works with OpenAI, Anthropic, Groq, Gemini, and Bedrock

Error Handling:

# Check for errors in responses
error = df['recommendation'].struct.field('_error')[0]
if error:
    print(f"Error: {error}")
    print(f"Details: {df['recommendation'].struct.field('_details')[0]}")
    print(f"Raw response: {df['recommendation'].struct.field('_raw')[0]}")

Benefits

  • Speed: Processes multiple queries in parallel, drastically reducing the time required for bulk query handling.
  • Scalability: Scales efficiently as query volume grows, making it well suited to high-demand applications.
  • Ease of Integration: Integrates seamlessly into existing Python projects that utilize Polars, making it easy to add parallel processing capabilities.
  • Context Preservation: Maintain conversation context with multi-message support for more natural interactions.
  • Provider Flexibility: Choose from multiple LLM providers based on your needs and access.
  • Type Safety: Get validated, structured outputs using Pydantic schemas for reliable data extraction.

Testing

Polar Llama includes a comprehensive test suite that validates parallel execution, provider support, and core functionality.

Setup:

  1. Copy .env.example to .env and add your API keys:

    cp .env.example .env
    # Edit .env and add your provider API keys
    
  2. Install test dependencies:

    pip install -r tests/requirements.txt
    

Run Python tests:

pytest tests/ -v

Run Rust tests:

cargo test --test model_client_tests -- --nocapture

Tests automatically detect configured providers and only run tests for those with valid API keys. See tests/README.md for detailed testing documentation.

Contributing

We welcome contributions to Polar Llama! If you're interested in improving the library or adding new features, please feel free to fork the repository and submit a pull request.

License

Polar Llama is released under the MIT license. For more details, see the LICENSE file in the repository.

Roadmap

  • Multi-Message Support (shipped): Multi-message conversations that maintain context across turns.
  • Multiple Provider Support (shipped): OpenAI, Anthropic, Gemini, Groq, and AWS Bedrock.
  • Structured Data Outputs (shipped): Pydantic-based structured outputs with type validation, returned as Polars Structs.
  • Streaming Responses (planned): Streaming responses from LLM providers.
