Helper functions that improve the ergonomics of OpenAI's function_call API.
Project description
Instructor (openai_function_call)
Structured extraction in Python, powered by OpenAI's function calling API, designed for simplicity, transparency, and control.
This library lets you interact with OpenAI's function calling API from Python code, using Python structs and objects. It's designed to be intuitive and easy to use, while giving you great visibility into how OpenAI is called.
Requirements
This library depends only on Pydantic and OpenAI.
Installation
To get started with OpenAI Function Call, install it using pip. Run the following command in your terminal:
$ pip install instructor
Quick Start with Patching ChatCompletion
To simplify your work with OpenAI models and streamline the extraction of Pydantic objects from prompts, we offer a patching mechanism for the `ChatCompletion` class. Here's a step-by-step guide:
Step 1: Import and Patch the Module
First, import the required libraries and apply the patch function to the OpenAI module. This exposes new functionality with the response_model parameter.
import openai
import instructor
from pydantic import BaseModel
instructor.patch()
Step 2: Define the Pydantic Model
Create a Pydantic model to define the structure of the data you want to extract. This model will map directly to the information in the prompt.
class UserDetail(BaseModel):
    name: str
    age: int
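Behind the scenes, a model like this is translated into a JSON Schema that the function calling API understands, with field names and types taken straight from the class definition. As a rough, hand-written illustration (not the library's exact output), the schema for `UserDetail` would look something like:

```python
# A hand-written approximation of the function-call schema that a
# model like UserDetail maps to; the exact structure instructor
# emits may differ in detail.
user_detail_schema = {
    "name": "UserDetail",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"},
        },
        "required": ["name", "age"],
    },
}
```

Because the schema is derived from the model, adding or renaming a field on the Pydantic class automatically changes what the API is asked to extract.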
Step 3: Extract Data with ChatCompletion
Use the openai.ChatCompletion.create method to send a prompt and extract the data into the Pydantic object. The response_model parameter specifies the Pydantic model to use for extraction.
user: UserDetail = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    response_model=UserDetail,
    messages=[
        {"role": "user", "content": "Extract Jason is 25 years old"},
    ]
)
Step 4: Validate the Extracted Data
You can then validate the extracted data by asserting the expected values. By adding type annotations you also get nice IDE benefits like spell check and autocomplete!
assert user.name == "Jason"
assert user.age == 25
LLM-Based Validation
LLM-based validation can also be plugged into the same Pydantic model. Here, if the answer attribute contains content that violates the rule "don't say objectionable things," Pydantic will raise a validation error.
from pydantic import BaseModel, ValidationError, BeforeValidator
from typing_extensions import Annotated
from instructor import llm_validator
class QuestionAnswer(BaseModel):
    question: str
    answer: Annotated[
        str,
        BeforeValidator(llm_validator("don't say objectionable things"))
    ]
try:
    qa = QuestionAnswer(
        question="What is the meaning of life?",
        answer="The meaning of life is to be evil and steal",
    )
except ValidationError as e:
    print(e)
It's important to note here that the error message is generated by the LLM, not the code, so it will be helpful when re-asking the model.
1 validation error for QuestionAnswer
answer
Assertion failed, The statement is objectionable. (type=assertion_error)
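Because the validation error text comes from the LLM itself, a natural pattern is to feed it back as an extra message when re-asking. A minimal sketch of that re-ask step using plain message dicts (the helper name and message wording here are illustrative, not part of the library):

```python
def build_reask_messages(messages, error_text):
    """Append the validation error so the model can correct itself.

    Conceptually this is what retry logic does with a validation
    failure; the exact wording instructor uses may differ.
    """
    return messages + [
        {
            "role": "user",
            "content": (
                "The previous answer failed validation with this error:\n"
                f"{error_text}\n"
                "Please answer again and fix the problem."
            ),
        }
    ]

messages = [{"role": "user", "content": "What is the meaning of life?"}]
retry_messages = build_reask_messages(
    messages, "The statement is objectionable."
)
```

The original prompt stays in the history, so the model sees both what was asked and why its last attempt was rejected.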
Using the Client with Retries
Here, the UserDetails
model is passed as the response_model
, and max_retries
is set to 2.
import openai
import instructor
from pydantic import BaseModel, field_validator
# Apply the patch to the OpenAI client
instructor.patch()
class UserDetails(BaseModel):
    name: str
    age: int

    @field_validator("name")
    @classmethod
    def validate_name(cls, v):
        if v.upper() != v:
            raise ValueError("Name must be in uppercase.")
        return v
model = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    response_model=UserDetails,
    max_retries=2,
    messages=[
        {"role": "user", "content": "Extract jason is 25 years old"},
    ],
)
assert model.name == "JASON"
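Conceptually, max_retries wraps the call in a loop: if Pydantic validation fails, the error text is fed back to the model and the call is repeated, up to the given limit. A simplified, self-contained sketch of that loop (the function and names are illustrative, not the library's internals):

```python
def create_with_retries(call_model, validate, max_retries=2):
    """Call the model, validate the result, and re-ask on failure.

    call_model: callable taking the previous error feedback (or None)
    validate:   callable that raises ValueError on invalid output
    """
    feedback = None
    for attempt in range(max_retries + 1):
        result = call_model(feedback)
        try:
            validate(result)
            return result
        except ValueError as e:
            feedback = str(e)  # fed back into the next attempt
    raise ValueError(f"Still invalid after {max_retries} retries: {feedback}")

# Stand-in for the LLM: returns lowercase at first, and the corrected
# uppercase answer once it has seen the validation feedback.
def fake_model(feedback):
    return "JASON" if feedback else "jason"

def must_be_upper(name):
    if name.upper() != name:
        raise ValueError("Name must be in uppercase.")

result = create_with_retries(fake_model, must_be_upper)
```

The first attempt fails validation, the error message becomes feedback for the second attempt, and the corrected value is returned.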
IDE Support
Everything is designed for you to get the best developer experience possible, with the best editor support.
This includes autocompletion and even inline errors.
To see more ways of creating interesting models, check out the examples.
License
This project is licensed under the terms of the MIT License.
Hashes for instructor-0.2.11-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 0281d7d0f0e882905161e20f57ce58a57cc4e5c8d1efe2b8520cdeb573e6a853
MD5 | 9f94c039bffc9e263d092bff86557546
BLAKE2b-256 | f251edf5cbca543eb82cd05cdebcd928a0619f41f160134edf7dc1ab625f97fa