Outlines custom provider plugin for LangExtract

LangExtract Outlines Plugin

A LangExtract provider plugin that integrates Outlines for structured text extraction using constrained generation.

Overview

This plugin enables you to use Outlines models with LangExtract for structured information extraction tasks. Outlines provides constrained generation capabilities that ensure model outputs conform to specific schemas, making it ideal for reliable structured extraction.

Installation

We recommend using uv to install the package.

uv add langextract-outlines

The command above automatically installs langextract and outlines, as they are dependencies of langextract-outlines. However, it does not install the optional dependencies required to run specific models with Outlines. For instance, if you want to use the Transformers model in Outlines, install the associated optional dependencies:

uv add "outlines[transformers]"

Quick Start

To use the langextract-outlines plugin, pass an OutlinesProvider instance as the value of the model parameter of the langextract.extract function. Since you are providing a model directly, there is no need to specify a model_id.

The arguments to initialize an OutlinesProvider instance are very similar to those you would use with the outlines.Generator constructor:

  • outlines_model: an instance of an outlines.models.Model, for instance Transformers or MLXLM
  • output_type: a list of Pydantic models used to constrain the generation. See the dedicated section below for details
  • backend: the name of the backend Outlines uses to constrain the generation (outlines_core by default)
  • **inference_kwargs: the keyword arguments passed on to the underlying model by Outlines. These correspond to the arguments you would provide when calling a model in Outlines

For instance:

import langextract as lx
import outlines
import transformers
from pydantic import BaseModel, Field
from langextract_outlines import OutlinesProvider

# Define your extraction prompt and examples
prompt = "Extract characters and emotions from the text."
examples = [
    lx.data.ExampleData(
        text="Romeo gazed longingly at Juliet.",
        extractions=[
            lx.data.Extraction(
                extraction_class="character",
                extraction_text="Romeo",
                attributes={"emotional_state": "longing"}
            ),
            lx.data.Extraction(
                extraction_class="emotion",
                extraction_text="longingly",
                attributes={"feeling": "desire"}
            )
        ]
    )
]

# Define the associated output_type
class Character(BaseModel):
    emotional_state: str = Field(description="The emotional state of the character")

class Emotion(BaseModel):
    feeling: str = Field(description="The feeling of the emotion")

output_type = [Character, Emotion]

# Create the Outlines model
model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
model = transformers.AutoModelForCausalLM.from_pretrained(model_id)

# Create the Outlines provider
outlines_provider = OutlinesProvider(
    outlines_model=outlines.from_transformers(model, tokenizer),
    output_type=output_type,
    backend="outlines_core",
    temperature=0.5,
    repetition_penalty=1,
    max_new_tokens=100,
)

# Run extraction
result = lx.extract(
    "Juliet smiled brightly at the stars.",
    prompt_description=prompt,
    examples=examples,
    model=outlines_provider,
)

print(f"Extracted {len(result.extractions)} entities")

Output Type

The output type you provide must be compatible with your examples, as the examples are included in the prompt. If the two do not match, generation quality may be severely degraded.

The output type must be a list of Pydantic models, each corresponding to an extraction_class used in your examples. The name of each Pydantic model must be the extraction_class name in PascalCase, and its fields must correspond to the attributes of the associated extraction instances.

For instance:

from typing import Literal

import langextract as lx
from pydantic import BaseModel, Field

# Extraction included in the examples
lx.data.Extraction(
    extraction_class="character",
    extraction_text="Romeo",
    attributes={"emotional_state": "longing", "intensity": "medium"}
)

# Possible associated model included in the output_type
class Character(BaseModel):
    emotional_state: str = Field(
        description="The emotional state of the character",
        min_length=1,
        max_length=100,
    )
    intensity: Literal["low", "medium", "high"] = Field(
        description="The intensity of the emotion",
        default="medium",
    )

Inference Arguments

As explained above, all inference arguments (temperature, max_new_tokens, etc.) must be provided as keyword arguments when initializing the OutlinesProvider. Inference arguments specified through other parts of the LangExtract interface are ignored. Outlines does not standardize inference arguments across models, so make sure the arguments you provide match what your chosen model actually accepts.
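To illustrate why the argument names matter, here is a plain-Python sketch of the forwarding behavior (no Outlines required; call_model and ProviderSketch are stand-ins, not real APIs): keyword arguments are passed verbatim to the underlying model call, so a name the model does not recognize fails at call time.

```python
def call_model(prompt, temperature=1.0, max_new_tokens=256):
    # Stand-in for a model's inference entry point; real signatures vary by model.
    return {"prompt": prompt, "temperature": temperature, "max_new_tokens": max_new_tokens}

class ProviderSketch:
    """Toy provider: stores inference kwargs and forwards them unchanged."""

    def __init__(self, **inference_kwargs):
        self.inference_kwargs = inference_kwargs

    def infer(self, prompt):
        # A kwarg the model does not accept (e.g. max_tokens instead of
        # max_new_tokens here) would raise a TypeError at this call.
        return call_model(prompt, **self.inference_kwargs)

provider = ProviderSketch(temperature=0.5, max_new_tokens=100)
result = provider.infer("Hello")
print(result["max_new_tokens"])  # 100
```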

License

Apache-2.0

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
