
Universal Python library for Structured Outputs with any LLM provider


structllm


structllm is a universal, lightweight Python library that provides Structured Outputs for any LLM provider (OpenAI, Anthropic, Mistral, local models, etc.), not just OpenAI. It validates LLM responses against the JSON schema of your Pydantic models, returning typed objects instead of free-form text.

Models with roughly 7B parameters or more work well with structllm.

Installation

pip install structllm

Or using uv (recommended):

uv add structllm

Quick Start

from pydantic import BaseModel
from structllm import StructLLM
from typing import List

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: List[str]

client = StructLLM(
    api_base="https://openrouter.ai/api/v1",
    api_key="sk-or-v1-...",
)

messages = [
    {"role": "system", "content": "Extract the event information."},
    {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
]

response = client.parse(
    model="openrouter/moonshotai/kimi-k2",
    messages=messages,
    response_format=CalendarEvent,
)

if response.output_parsed:
    print(response.output_parsed)
    # name='science fair' date='Friday' participants=['Alice', 'Bob']
else:
    print("Failed to parse structured output")

Provider Support

StructLLM works with 100+ LLM providers through LiteLLM. Check the LiteLLM documentation for the full list of supported providers.
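Because routing goes through LiteLLM, switching providers is just a matter of changing the model string passed to `client.parse`; everything else stays the same. The identifiers below are illustrative examples of LiteLLM's `provider/model` naming convention, so check the LiteLLM docs for the exact strings your providers expect:

```python
# Illustrative model identifiers in LiteLLM's "provider/model" convention.
# The same client.parse(...) call works for each; only the string changes.
candidate_models = {
    "openai": "gpt-4o-2024-08-06",                    # OpenAI needs no prefix
    "anthropic": "anthropic/claude-3-5-sonnet-20240620",
    "mistral": "mistral/mistral-large-latest",
    "ollama": "ollama/llama3",                        # local model served by Ollama
}

# e.g. response = client.parse(model=candidate_models["anthropic"],
#                              messages=messages, response_format=CalendarEvent)
```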

Advanced Usage

Complex Data Structures

from pydantic import BaseModel, Field
from structllm import StructLLM
from typing import List, Optional
from enum import Enum

class Priority(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

class Task(BaseModel):
    title: str = Field(description="The task title")
    description: Optional[str] = Field(default=None, description="Task description")
    priority: Priority = Field(description="Task priority level")
    assignees: List[str] = Field(description="List of assigned people")
    due_date: Optional[str] = Field(default=None, description="Due date in YYYY-MM-DD format")

client = StructLLM(
    api_base="https://openrouter.ai/api/v1",
    api_key="sk-or-v1-...",
)

response = client.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {
            "role": "user",
            "content": "Create a high-priority task for John and Sarah to review the quarterly report by next Friday."
        }
    ],
    response_format=Task,
)

task = response.output_parsed
if task:
    print(f"Task: {task.title}")
    print(f"Priority: {task.priority}")
    print(f"Assignees: {task.assignees}")

Error Handling

response = client.parse(
    model="gpt-4o-2024-08-06",
    messages=messages,
    response_format=CalendarEvent,
)

if response.output_parsed:
    # Successfully parsed
    event = response.output_parsed
    print(f"Parsed event: {event}")
else:
    # Parsing failed, but raw response is available
    print("Failed to parse structured output")
    print(f"Raw response: {response.raw_response.choices[0].message.content}")

Custom Configuration

client = StructLLM(
    api_base="https://api.custom-provider.com/v1",
    api_key="your-api-key"
)

response = client.parse(
    model="custom/model-name",
    messages=messages,
    response_format=YourModel,
    temperature=0.1,
    top_p=0.1,
    max_tokens=1000,
    # Any additional parameters supported by the LiteLLM interface
    custom_parameter="value"
)

How It Works

StructLLM uses prompt engineering to ensure structured outputs:

  1. Schema Injection: Automatically injects your Pydantic model's JSON schema into the system prompt
  2. Format Instructions: Adds specific instructions for JSON-only responses
  3. Intelligent Parsing: Extracts JSON from responses even when wrapped in additional text
  4. Validation: Uses Pydantic for robust type checking and validation
  5. Fallback Handling: Gracefully handles parsing failures while preserving raw responses

By default, structllm uses low temperature and top_p settings to encourage consistent outputs; you can override these parameters as needed.

Testing

Run the test suite:

# Install dependencies
uv sync

# Run all tests
uv run pytest

# Run unit tests only (skip integration)
uv run pytest -m "not integration"

# Run integration tests (requires external services)
uv run pytest -m "integration"

# Run linting
uv run ruff check .

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes with tests
  4. Run the test suite: uv run pytest
  5. Run linting: uv run ruff check .
  6. Submit a pull request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • LiteLLM for providing the universal LLM interface
  • Pydantic for structured data validation

Download files

Download the file for your platform.

Source Distribution

structllm-0.1.0.tar.gz (233.6 kB)

Built Distribution

structllm-0.1.0-py3-none-any.whl (6.6 kB)

File details

Details for the file structllm-0.1.0.tar.gz.

File metadata

  • Download URL: structllm-0.1.0.tar.gz
  • Size: 233.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.13

File hashes

Hashes for structllm-0.1.0.tar.gz:

  • SHA256: 1c91af6bf2745f709e0bb1cce52da5bf3e27ff9882d54381cbbcaf8c4fdaeffd
  • MD5: 6ae983ac74b382da3d14a38e95da1e4b
  • BLAKE2b-256: 16ed722355ea5cb6406e54d41cf2148dad6db8e7702b255464a94e433c7fe8f4


File details

Details for the file structllm-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: structllm-0.1.0-py3-none-any.whl
  • Size: 6.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.13

File hashes

Hashes for structllm-0.1.0-py3-none-any.whl:

  • SHA256: 0c6cf3e2d9589eb03101095c798d94f8e508ed72fef4e288c1c8605cc6833eb1
  • MD5: 558960119aae9032640f92bcc7ff51ec
  • BLAKE2b-256: 3bf3120e580ca90092b4c75ad9e458fa986001fce430bd3d3f882cc85142ce11
