Prompt Parser


A Python library for parsing, formatting, and managing prompts for Large Language Models (LLMs).

prompt-parser simplifies working with LLM prompts by providing a structured way to define, load, and manipulate them. It handles prompts with attributes (such as temperature and model) and different message roles (system, user, assistant), and is inspired by Humanloop's Prompt file format.

Key Features

  • Parse Prompts: Load prompts from strings or files, supporting YAML frontmatter for attributes and tagged sections for different message roles.
  • Structured Prompt Representation: Uses Prompt and PromptAttributes classes to represent prompts and their settings in an organized way.
  • Attribute Management: Easily access and manage prompt attributes like temperature, top_p, model, and custom parameters.
  • Safe Attribute Access: Provides a .get() method to access attributes with default values, and *_forced properties (e.g. system_forced) that raise an AssertionError when a required message is missing.
  • Prompt Formatting: Format prompt messages (system, user, assistant, tools) using variables, with support for partial formatting (handling missing variables gracefully).
  • Serialization: Convert Prompt objects back into formatted strings for saving or further use.
  • Tool Support: Handle input tools (tool calls within assistant content) and output tools (tool responses) with the PromptTool class.
  • Tool Management: Parse, format, and store tool responses dynamically using input_tools and output_tools.

Installation

You can install prompt-parser using pip:

pip install prompt-parser

Usage

Parsing a prompt from a string

The core of prompt-parser is the Prompt class. You can parse a prompt string using the Prompt.parse() method. The prompt string should follow a specific format:

YAML Frontmatter (Optional): Prompt attributes like temperature, model, top_p, etc., can be defined in YAML format between --- delimiters at the beginning of the string.

Tagged Message Sections (Optional): Use <system>, <user>, and <assistant> tags to define the content for each message role.

Here's an example prompt string and how to parse it:

from prompt_parser import Prompt

prompt_string = """
---
temperature: 0.5
top_p: 0.5
top_k: 50
max_tokens: 4096
provider: openai
model: gpt-4
endpoint: chat
tools: [{
    "name": "get_weather",
    "description": "Fetches the weather in the given location",
    "strict": true,
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The location to get the weather for"
            },
            "unit": {
                "type": ["string", "null"],
                "description": "The unit to return the temperature in",
                "enum": ["F", "C"]
            }
        },
        "additionalProperties": false,
        "required": [
            "location", "unit"
        ]
    }
}]
unknown: blablah # custom attributes are also allowed
---

<system>
You are a helpful assistant.
</system>

<user>
Hello, I have a question: {query}
</user>

<assistant>
  <tool name="get_weather" id="call_1ZUCTfyeDnpqiZbIwpF6fLGt">
  {
    "location": "New York",
    "unit": "C"
  }
  </tool>
</assistant>
"""

prompt = Prompt.parse(prompt_string)

print(prompt) # Print the parsed prompt object in string format
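To make the format itself concrete, the split between frontmatter and tagged sections can be sketched with a few lines of standard-library regex. This is an illustration of how the format is delimited, not prompt-parser's actual parser:

```python
import re

# Minimal sketch (NOT prompt-parser's implementation) of how the format
# separates YAML frontmatter from the tagged message sections.
raw = """---
temperature: 0.5
model: gpt-4
---

<system>
You are a helpful assistant.
</system>

<user>
Hello, I have a question: {query}
</user>
"""

# The optional frontmatter sits between two '---' lines at the start.
match = re.match(r"^---\n(.*?)\n---\n(.*)$", raw, re.DOTALL)
frontmatter, body = match.groups() if match else ("", raw)

# Each message role lives in a matching <tag>...</tag> section.
sections = dict(
    re.findall(r"<(system|user|assistant)>\n?(.*?)\n?</\1>", body, re.DOTALL)
)

print(sections["system"])  # You are a helpful assistant.
print(sections["user"])    # Hello, I have a question: {query}
```

Note that the real library also parses the frontmatter as YAML into PromptAttributes; the sketch stops at the raw split.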

Parsing a Prompt from a file

You can also parse prompts directly from a file using Prompt.parse_from_file(path). The file should have the same format as the prompt string described above.

from prompt_parser import Prompt

# Assuming you have a file named 'task.md' in the same directory
prompt_from_file = Prompt.parse_from_file("task.md")

print(prompt_from_file)

Accessing Prompt Components

Once you have parsed a Prompt object, you can access its components:

  • Message Roles (system, user, assistant):
    prompt.system      # Returns the system message string, or None if not present
    prompt.user        # Returns the user message string, or None if not present
    prompt.assistant   # Returns the assistant message string, or None if not present
    
    You can also use the *_forced properties to access these messages, which will raise an AssertionError if the message is not defined. This is useful when you expect a certain message role to always be present.
    prompt.system_forced     # Returns the system message string, raises AssertionError if system message is missing
    prompt.user_forced       # Returns the user message string, raises AssertionError if user message is missing
    prompt.assistant_forced  # Returns the assistant message string, raises AssertionError if assistant message is missing
    
  • Prompt Attributes: Prompt attributes defined in the YAML frontmatter are accessible through the prompt.attributes object, which is an instance of PromptAttributes.
    prompt.attributes.temperature  # Access attribute using dot notation
    prompt.attributes['model']      # Access attribute using dictionary-like notation
    
    Safe Attribute Access with .get(): To safely access attributes and provide a default value if an attribute is not present, use the .get(key, default) method:
    temperature = prompt.attributes.get('temperature')       # Returns temperature value or None if not defined
    top_k = prompt.attributes.get('top_k', 50)             # Returns top_k value or 50 if not defined
    unknown_attribute = prompt.attributes.get('unknown', "default_value") # Returns "default_value" if 'unknown' is not defined
    
  • Prompt Tools:
    prompt.input_tools # Returns a list of InputPromptTool objects representing tool calls within the assistant message
    prompt.output_tools # Returns a list of OutputPromptTool objects representing tool responses outside the assistant message
    
    Each PromptTool object has id, name, and content attributes:
    prompt.input_tools[0].id       # e.g. "call_1ZUCTfyeDnpqiZbIwpF6fLGt" (from the example above)
    prompt.input_tools[0].name     # e.g. "get_weather"
    prompt.input_tools[0].content  # The tool call arguments as a string

Formatting Prompt Messages

You can format the system, user, assistant, and tools messages by providing keyword arguments to the format_*() methods. This is useful for injecting dynamic content into your prompts.

  • Formatting User Message:
    formatted_user_prompt = prompt.format_user(query="What is the weather in London?")
    print(formatted_user_prompt) # Output: Hello, I have a question: What is the weather in London?
    
    Similarly, you can use prompt.format_system() and prompt.format_assistant() for the system and assistant messages respectively.
  • Formatting Tools: If your prompt includes a tools attribute (defined as a JSON structure in the frontmatter), you can format it using prompt.attributes.format_tools():
    formatted_tools = prompt.attributes.format_tools(location="London", unit="C")
    print(formatted_tools) # Output: Formatted JSON string for tools with "location": "London", "unit": "C"
    
  • Partial Formatting: By default, the format_*() methods use partial_format, which means that if a variable in your prompt template is not provided in the formatting arguments, it will be left as is in the output string, instead of raising an error. You can disable partial formatting by setting format_partial=False.
  • Storing Formatted State: If you want to update the Prompt object with the formatted message (e.g., to save the formatted prompt), you can set store_state=True in the format_*() methods. This will modify the prompt.system, prompt.user, prompt.assistant, or prompt.attributes.tools attributes in place.
  • Storing Tool Response: You can store a tool response for a given tool call using the store_tool_response() method:
    prompt.store_tool_response(
        tool_id="call_1ZUCTfyeDnpqiZbIwpF6fLGt",
        response="Temperature in New York is 15°C"
    )
    print(prompt.output_tools[0].content) # Output: Temperature in New York is 15°C
    
    If store_state is True (the default), the response is stored in the current Prompt object. The method matches the tool_id with an existing tool in input_tools or output_tools and updates or creates an OutputPromptTool accordingly.
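The partial-formatting behavior described above (unknown placeholders survive intact rather than raising) can be mimicked with the standard library alone. This sketch illustrates the behavior, not prompt-parser's internal code:

```python
from string import Formatter

class PartialFormatter(Formatter):
    """Leave unknown placeholders intact instead of raising KeyError."""

    def get_value(self, key, args, kwargs):
        try:
            return super().get_value(key, args, kwargs)
        except (KeyError, IndexError):
            # Re-emit the placeholder so a later format pass can fill it.
            return "{" + str(key) + "}"

template = "Hello, I have a question: {query} (reply in {language})"
result = PartialFormatter().format(template, query="What is the weather in London?")
print(result)
# Hello, I have a question: What is the weather in London? (reply in {language})
```

Because unfilled placeholders round-trip unchanged, the same template can be formatted in multiple passes as values become available.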

Converting Prompt to String

You can easily convert a Prompt object back into a formatted string representation using str(prompt). This is useful for logging, saving prompts to files, or passing them to other systems.

prompt_string_output = str(prompt)
print(prompt_string_output)

# Save the prompt to a file:
with open("formatted_prompt.md", "w") as f:
    f.write(str(prompt))

Benefits of Using prompt-parser

  • Organization: Structure your prompts and attributes in a clean and manageable way.
  • Readability: Prompts are easier to read and understand when separated into attributes and message roles.
  • Flexibility: Easily load prompts from strings or files, and format them dynamically.
  • Safety: Use .get() for safe attribute access, and the *_forced properties to assert that required messages are present.
  • Maintainability: Makes prompt management and updates easier in your LLM applications.

License

This project is licensed under the MIT License.


Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page