
Modelsmith

Modelsmith is a Python library that allows you to get structured responses in the form of Pydantic models and Python types from Google's Vertex AI models.

Currently it allows you to use three classes of model:

  • ChatModel (most commonly used with chat-bison)
  • TextGenerationModel (most commonly used with text-bison)
  • GenerativeModel (most commonly used with gemini-pro)

Modelsmith provides a unified interface over all of these. It has been designed to be extensible and can adapt to other models in the future.
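
For instance, the same Forge interface accepts any of the three model classes. A minimal sketch, assuming each class is instantiated as in the Vertex AI SDK (see Getting started below for full usage):

from modelsmith import Forge
from vertexai.generative_models import GenerativeModel
from vertexai.language_models import ChatModel, TextGenerationModel

# Any of the three supported model classes can be passed to the same Forge
gemini_forge = Forge(model=GenerativeModel("gemini-1.0-pro"), response_model=list[str])
chat_forge = Forge(model=ChatModel.from_pretrained("chat-bison"), response_model=list[str])
text_forge = Forge(model=TextGenerationModel.from_pretrained("text-bison"), response_model=list[str])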

Notable Features

  • Structured Responses: Specify both Pydantic models and Python types as the outputs of your LLM responses.
  • Templating: Use Jinja2 templating in your prompts to allow complex prompt logic.
  • Default and Custom Prompts: A default prompt template is provided but you can also specify your own.
  • Retry Logic: The number of retries is user configurable.
  • Validation: Outputs from the LLM are validated against your requested response model. Errors are fed back to the LLM to try and correct any validation failures.

Installation

Install Modelsmith using pip or your favourite Python package manager.

pip example:

pip install modelsmith

Google Cloud Authentication

Authentication to Google Cloud is handled via the Application Default Credentials (ADC) flow, so make sure you have ADC configured. See Google's documentation for more details.
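
For local development, ADC is commonly configured via the gcloud CLI (assuming you have it installed):

gcloud auth application-default login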

Getting started

Extracting a Pydantic model

Let's look at an example of extracting a Pydantic model from some text.

from modelsmith import Forge
from pydantic import BaseModel, Field
from vertexai.generative_models import GenerativeModel


# Define the pydantic model you want to receive as the response
class User(BaseModel):
    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age")
    city: str = Field(description="The city where the person lives")
    country: str = Field(description="The country where the person lives")


# Create your forge instance
forge = Forge(model=GenerativeModel("gemini-1.0-pro"), response_model=User)

# Generate a User instance from the prompt
user = forge.generate("Terry Tate 60. Lives in Irvine, United States.")

print(user)  # name='Terry Tate' age=60 city='Irvine' country='United States'

Extracting a combination of Pydantic models and Python types

Modelsmith does not restrict you to either Pydantic models or Python types. You can combine them in the same response. Below we extract a list of Pydantic model instances.

from modelsmith import Forge
from pydantic import BaseModel, Field
from vertexai.generative_models import GenerativeModel


class City(BaseModel):
    city: str = Field(description="The name of the city")
    state: str = Field(description="2-letter abbreviation of the state")


# Pass a list of Pydantic models to the response_model argument.
forge = Forge(
    model=GenerativeModel("gemini-1.0-pro"),
    response_model=list[City],
)

response = forge.generate("I have lived in Irvine, CA and Dallas TX")


print(response)  # [City(city='Irvine', state='CA'), City(city='Dallas', state='TX')]

Using the default prompt template

The previous examples use the built-in prompt template in zero-shot mode. The default template also works in few-shot mode, allowing you to pass examples via the prompt_values parameter of the generate method. The default prompt template has a template variable called examples to which we pass our example text. The following example shows how this can be used.

import inspect

from modelsmith import Forge
from vertexai.generative_models import GenerativeModel

# Create your forge instance
forge = Forge(model=GenerativeModel("gemini-1.0-pro"), response_model=list[str])

# Define examples, using inspect.cleandoc to remove indentation
examples = inspect.cleandoc("""
    input: John Doe is forty years old. Lives in Alton, England
    output: ["John Doe", "40", "Alton", "England"]

    input: Sarah Green lives in London, UK. She is 32 years old.
    output: ["Sarah Green", "32", "London", "UK"]
""")

# Generate a Python list of string values from the input text
response = forge.generate(
    "Sophia Schmidt twenty three. Resident in Berlin Germany.",
    prompt_values={"examples": examples},
)

print(response)  # ['Sophia Schmidt', '23', 'Berlin', 'Germany']

Using your own prompt template

If you want to use your own prompt, simply pass it to the prompt parameter of the Forge class. Any Jinja2 template variables will be replaced with the values provided in the prompt_values parameter of the generate method.

⚠️ If using your own prompt, include a Jinja2 template variable called response_model_json to place your response model's JSON schema in your preferred location. If response_model_json is not provided, the default response model template text will be appended to the end of your prompt.

Here is an example of using a custom prompt that includes the response_model_json template variable.

import inspect

from modelsmith import Forge
from vertexai.generative_models import GenerativeModel

# Create your custom prompt
my_prompt = inspect.cleandoc("""
    You are extracting numbers from user input and combining them into one number.
    Take into account numbers written as text as well as in numerical format.

    You MUST take the types of the OUTPUT SCHEMA into account and adjust your
    provided text to fit the required types.

    Here is the OUTPUT SCHEMA:
    {{ response_model_json }}
    
""")

# Create your forge instance, passing your prompt
forge = Forge(
    model=GenerativeModel("gemini-1.0-pro"), response_model=int, prompt=my_prompt
)

# Generate your response
response = forge.generate("23 five seventy two")

print(response)  # 23572

The example above would also work with response_model_json left out of the prompt entirely, since it is added automatically when missing.

import inspect

from modelsmith import Forge
from vertexai.generative_models import GenerativeModel

# Create your custom prompt
my_prompt = inspect.cleandoc("""
    You are extracting numbers from user input and combining them into one number.
    Take into account numbers written as text as well as in numerical format.
""")

# Create your forge instance, passing your prompt
forge = Forge(
    model=GenerativeModel("gemini-1.0-pro"), response_model=int, prompt=my_prompt
)

# Generate your response
response = forge.generate("23 five seventy two")

print(response)  # 23572

Placing user_input inside your prompt

By default, user input is appended to the end of both custom and default prompts. Modelsmith also allows you to place user input anywhere inside your custom prompt by adding the template variable {{ user_input }} where you want the user input to go.

import inspect

# Create your custom prompt with user input placed at the beginning
my_prompt = inspect.cleandoc("""
    Consider the following user input: {{ user_input }}

    You are extracting numbers from user input and combining them into one number.
    Take into account numbers written as text as well as in numerical format.
""")

Setting the number of retries

By default, Modelsmith will try to get the desired response model from the LLM three times before raising an exception. On each retry, the validation error is fed back to the LLM with a request to correct it.

You can change this by passing the max_retries parameter to the Forge class.

# Create your forge instance, setting the number of retries
forge = Forge(
    model=GenerativeModel("gemini-1.0-pro"), response_model=int, max_retries=2
)

Matching patterns

Modelsmith uses regular expressions to identify JSON output in the LLM response. If for any reason you want to use a different pattern, you can pass it to the match_pattern parameter of the Forge class.
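
A minimal sketch, assuming match_pattern accepts a regular expression (whether it expects a string or a compiled pattern is an assumption here; check the Modelsmith source):

from modelsmith import Forge
from vertexai.generative_models import GenerativeModel

# A sketch only: match JSON wrapped in markdown-style fences instead of
# relying on the default pattern
forge = Forge(
    model=GenerativeModel("gemini-1.0-pro"),
    response_model=int,
    match_pattern=r"```json\s*(.*?)\s*```",
)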

Failing silently

Modelsmith will raise a ModelNotDerivedError exception if no valid response is obtained. You can change this by passing False to the raise_on_failure parameter of the Forge class.

This will suppress the exception and return None instead.
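
A minimal sketch:

from modelsmith import Forge
from vertexai.generative_models import GenerativeModel

forge = Forge(
    model=GenerativeModel("gemini-1.0-pro"),
    response_model=int,
    raise_on_failure=False,
)

# With raise_on_failure=False, a failed derivation returns None instead of raising
response = forge.generate("no numbers here")

if response is None:
    print("No valid response could be derived from the LLM output")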

Passing prompt template variables and model settings

You can pass prompt template variables and model settings via the prompt_values and model_settings parameters of the generate method.

import inspect

from modelsmith import Forge
from vertexai.generative_models import GenerativeModel

# Create your custom prompt
my_prompt = inspect.cleandoc("""
    You are extracting numbers from user input and combining them into one number.
    Take into account numbers written as text as well as in numerical format.

    {{ user_input_prefix }}
    {{ user_input }}

""")

# Create your forge instance, passing your prompt
forge = Forge(
    model=GenerativeModel("gemini-1.0-pro"),
    response_model=int,
    prompt=my_prompt,
    max_retries=2,
)

# Custom LLM settings
model_settings = {
    "temperature": 0.8,
    "top_p": 1.0,
}

# Prompt template variable values to pass
prompt_values = {
    "user_input_prefix": "I have a the following numbers: ",
}

# Generate your response
response = forge.generate(
    "23 five seventy two", prompt_values=prompt_values, model_settings=model_settings
)

Learn more

Have a look at the tests included in this repository for more examples.

Get in touch

If you have any questions or suggestions, feel free to open an issue or start a discussion.

License

This project is licensed under the terms of the MIT License.
