
🍎APPL: A Prompt Programming Language


APPL is A Prompt Programming Language that extends Python to provide a Natural, Intuitive, Convenient, and Efficient (NICE) way to utilize Large Language Models (LLMs) such as GPT in your program.


Key Features

  • Readability and maintainability via seamless integration with Python. APPL seamlessly embeds natural language prompts into Python programs, maintaining prompts' readability while inheriting modularity, reusability, dynamism and the ecosystem from the host programming language.
  • Flexible prompt engineering. Beyond allowing Python control flows and the modularized decomposition of prompts, APPL offers prompt coding helpers to facilitate writing prompts in a modular and maintainable way.
  • Automatic parallelization via asynchronous computation. APPL schedules LLM calls asynchronously, leveraging the potential independence among them for efficient parallelization. This relieves users of the burden of managing synchronization manually, at almost no extra cost.
  • Smooth tool calling integration. APPL provides intuitive ways to transform Python functions into tools that can be called by LLMs, making it easy for users to integrate existing Python libraries and functions with LLMs.
  • Tracing and Failure Recovery. APPL traces the execution of LLM calls and supports recovery from failures, which is essential for debugging and error handling in the LLM programming paradigm.
  • More Features. APPL also provides a unified interface for multiple LLM backends using litellm, structured generations using instructor, and many other features.

Quick Start

Installation

You can simply install APPL from PyPI using pip:

pip install -U applang

More installation options can be found in the installation guide.

Setup

You need to set up API keys or your own LLM backends to interact with LLMs.

In this guide, we use the OpenAI API as the default backend. You can set your OpenAI API key in the .env file in the root directory of your project:

OPENAI_API_KEY=<your openai api key>

or export it as an environment variable:

export OPENAI_API_KEY=<your openai api key>

For setting up other backends, enabling tracing and recovering from traces, please refer to the setup guide.

Hello World

To begin, let's create a simple function that uses an LLM to respond to a greeting.

import appl
from appl import gen, ppl

appl.init()  # initialize APPL

@ppl  # the @ppl decorator marks the function as an `APPL function`
def greeting(name: str):
    f"Hello World! My name is {name}."  # Add text to the prompt
    return gen()  # call the default LLM with the current prompt

print(greeting("APPL"))  # call `greeting` as a normal Python function

The prompt for the generation is:

Hello World! My name is APPL.

The output will look like

Nice to meet you, APPL!

In this example, the @ppl decorator (@ stands for a here) marks the greeting function as an APPL function. Within such a function, the standalone string f"Hello World! My name is {name}." is added to the prompt, and the gen() function calls the LLM to generate a response using the current prompt.

Question Answering

Next, let's implement a question-answering system using APPL. In this example, the APPL program answers multiple questions about a quotation by first extracting the author's name (inspired by this cookbook). A runnable Colab notebook of this example is also available.

import appl
from appl import AIRole, gen, ppl
from appl.const import NEWLINE

appl.init()

@ppl(ctx="copy")  # copy the context from caller
def get_answer(question: str):
    question  # append to the prompt
    return gen()  # return as a future object

@ppl  # marks APPL function
def answer_questions(quotation: str, questions: list[str]):
    "Extract the name of the author from the quotation below and answer questions."
    quotation  # append to the prompt
    with AIRole():  # assistant message
        f"The name of the author is {gen(stop=NEWLINE)}"  # specify the prefix
    return [get_answer(q) for q in questions]  # parallelize calls

quotation = '"Simplicity is the ultimate sophistication." -- Leonardo da Vinci'
questions = [
    "In what era did the author live?",
    # more questions can be added here
]
for ans in answer_questions(quotation, questions):
    print(ans)

The resulting conversation for the first question would look like (generated responses are in bold):

| Role | Message |
| --- | --- |
| User | Extract the name of the author from the quotation below and answer questions.<br>"Simplicity is the ultimate sophistication." -- Leonardo da Vinci |
| Assistant | The name of the author is **Leonardo da Vinci.** |
| User | In what era did the author live? |
| Assistant | **Leonardo da Vinci lived during the Renaissance era.** |

In APPL functions, expression statements are captured as prompts based on the types of their values. Notably, f-strings are processed part by part, so a gen call inside an f-string intuitively uses the contents before it. In this example, "The name of the author is " serves as a prefix to guide the completion of the author's name.

After the author's name is extracted, the get_answer function is called multiple times in parallel to answer the questions, with the context being copied (detailed in context-management), demonstrating the automatic parallelization feature of APPL.

Usage by Examples

We provide a series of examples to demonstrate the usage of APPL. Some examples in this section are simplified for demonstration purposes. See runnable examples in the examples directory.

Context Management

Each APPL function has a context, which contains the prompts captured in the function. There are four different ways to pass the context when calling another APPL function (the callee) in an APPL function (the caller): new, copy, same, and resume.


  1. new: The default behavior; creates a new, empty context for the callee.
  2. copy: Similar to call-by-value in programming languages. The callee's context is a copy of the caller's context, so changes in the callee's context won't affect the caller's context.
  3. same: Similar to call-by-reference in programming languages. The callee's context is the same as the caller's context, so changes in the callee's context will affect the caller's context.
  4. resume: Resumes the function's own context each time it is called, i.e., the context is preserved across calls, making the function stateful. The caller's context is copied as the initial context on the first call. This is useful when you want to continue the conversation from the last call (see the sketch below).
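
As an illustration of resume, here is a minimal sketch (not from the original examples; it combines the AIRole and walrus patterns shown elsewhere in this document) of a stateful chat function:

import appl
from appl import AIRole, gen, ppl

appl.init()

@ppl(ctx="resume")  # preserve this function's context across calls
def chat(user_message: str):
    user_message  # append the new user turn to the preserved context
    with AIRole():
        (reply := gen())  # generate and record the assistant turn
    return reply

print(chat("My favorite color is blue."))
print(chat("What is my favorite color?"))  # the earlier turn is remembered

Because the second call resumes the context of the first, the model can answer using the earlier turn.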

In the following example, we illustrate the usage of the first three context management methods and ways to decompose long prompts into smaller pieces using APPL functions.

import appl
from appl import convo, gen, ppl, records

appl.init()

@ppl  # use empty context
def intro():
    f"Today is 2024/02/29."
    return records()

@ppl(ctx="same")  # same as the caller's context
def addon():
    f"Dates should be in the format of YYYY/MM/DD."
    # The newly captured prompt will influence the caller's context

@ppl(ctx="copy")  # copy the caller's context
def query(question: str):
    # the prompts here will not influence the caller's context
    f"Q: {question}"
    f"A: "
    # print(convo())  # display the conversation used for the generation
    return gen()

@ppl
def ask_questions(questions: list[str]):
    # a long prompt can be decomposed into several smaller `APPL functions`
    # method 1 (recommended): build the sub-prompts in an empty context
    intro()  # returns prompt records, will be captured in this context
    # method 2: use the same context and modify it in the function
    addon()  # returns None, not captured, but the context is modified inside
    return [query(q) for q in questions]
 
questions = [
    "What's the date tomorrow?",
    "What's the date yesterday?",
    "How many dates passed since 2024/02/02?",
]
for res in ask_questions(questions):
    print(res)

The three queries are independent of each other and run in parallel. The prompts and possible responses of the generations are shown below (generated responses are in bold):

All three prompts share the same prefix:

Today is 2024/02/29.
Dates should be in the format of YYYY/MM/DD.

| First query | Second query | Third query |
| --- | --- | --- |
| Q: What's the date tomorrow? | Q: What's the date yesterday? | Q: How many dates passed since 2024/02/02? |
| A: **2024/03/01** | A: **2024/02/28** | A: **27 dates have passed since 2024/02/02.** |

Note that the records() function retrieves the prompt records captured in the current function, while the convo() function retrieves the full conversation in the context. They are analogous to Python's locals() and globals() functions.

Concurrent Execution

The parallelization of multiple queries in the last example is achieved using asynchronous computation. In APPL, the gen function starts the LLM call in the background (in a new thread or process) and does not block the main thread. The generation result is not synchronized (waited for) until its value is needed, so multiple independent LLM calls naturally run concurrently.
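
As a minimal sketch (not from the original examples) in the style of the Hello World example, the two generations below are dispatched before either result is awaited, so they run concurrently:

import appl
from appl import gen, ppl

appl.init()

@ppl
def tell_joke(topic: str):
    f"Tell a one-line joke about {topic}."
    return gen()  # returns immediately; the LLM call runs in the background

jokes = [tell_joke(t) for t in ["cats", "compilers"]]  # both calls start now
for joke in jokes:
    print(joke)  # printing forces synchronization on each result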

Many prompt engineering techniques like Self-Consistency (CoT-SC) and Tree of Thoughts (ToT) involve non-sequential LLM calls such as branching and gathering. The following example demonstrates how to use APPL to naturally exploit the independence among the reasoning paths in CoT-SC to parallelize the execution.

import appl
from appl import gen, ppl

appl.init()

def get_mode(answers: list[str]):
    """Get the mode of the answers."""
    return max(set(answers), key=answers.count)

def marginalize(results: list):
    """Parse the answer from each result and take the mode of the answers."""

    # explicitly synchronize the results using str()
    answers = [parse_answer(str(res)) for res in results]
    # the detailed implementation of `parse_answer` is omitted here

    return get_mode(answers)

@ppl
def cot_consistency(cot_examples: list[str], question: str, num_trials: int):
    cot_examples  # the list of examples is captured into the prompt one by one
    question
    results = [gen() for _ in range(num_trials)]  # concurrent generation
    return marginalize(results)  # marginalize over the reasoning paths to get the answer

LLM Tool Calling

Integrating tools significantly enhances the capabilities of LLMs. APPL introduces a seamless method to transform Python functions into tools accessible by LLMs (provided the backend LLM supports tool calls). When the gen function is provided with Python functions, APPL automatically transforms them into tools by extracting information from the signature and docstring of the functions. Such integration facilitates leveraging existing Python libraries and functions directly within LLMs.

Consider the example below, where we transform a Python function, is_lucky, into a callable tool. During execution, gpt-3.5-turbo smartly invokes the is_lucky tool with the appropriate arguments. Subsequently, the function executes with these arguments and the result is returned.

import sympy

import appl
from appl import Generation, as_tool, gen, ppl, records

appl.init()

def is_lucky(x: int) -> bool:
    """Determine whether the input number is a lucky number.

    Args:
        x (int): The input number to be checked.

    Returns:
        bool: True if the number is a lucky number, False otherwise.
    """
    return sympy.isprime(x + 3)

@ppl
def func(x):
    f"Is {x} a lucky number?"

    # Initiate the generation with the tool `is_lucky`,
    # which is built into a tool by automatically extracting
    # information from the function's signature and docstring.
    # The tool-call messages are then stored in the prompt.
    (actions := gen(tools=[is_lucky]))

    # Execute the tool calls and retrieve the results.
    results = actions.run_tool_calls()  # results is a list of ToolMessage

    # Return the first result from the tool execution.
    return results[0].get_content()
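
A hypothetical invocation (the model's exact tool-calling behavior may vary):

print(func(2024))  # the model should call is_lucky(2024); 2024 + 3 = 2027 is prime, so this prints True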

Prompt Coding Helpers

We provide two types of helpers, Compositor and Definition, to facilitate coding prompts in a modularized and maintainable way. These helpers were originally designed in PromptCoder and have been used to develop prompts in the ToolEmu project with more than 20k tokens in total. By leveraging Python's idiomatic features, we have enhanced the usability and flexibility of these helpers.

The Compositor organizes the prompts within its context into a structured prompt. For example, the NumberedList would compose a list of text into a numbered list:

with NumberedList():
    f"First item"
    f"Second item"
>>> composed into >>>
1. First item
2. Second item

You can also nest the compositors to create more complex structures.
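
Here is a runnable sketch of the above (the appl.compositor import path is an assumption; consult the docs for the exact location):

import appl
from appl import ppl, records
from appl.compositor import NumberedList  # assumed import path

appl.init()

@ppl
def steps():
    with NumberedList():
        f"First item"
        f"Second item"
    return records()

print(steps())
# 1. First item
# 2. Second item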

The Definition class provides a standardized way to define concepts and refer to them in the prompts. Once a concept is defined by subclassing Definition, you can refer to it in the prompts by using the class name. Meanwhile, you need to include the concept's description somewhere in the prompt by instantiating the class with the description as an argument. This design ensures the consistency of the concept's definition and usage in the prompts.
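
Here is an illustrative sketch of that pattern (the class name and description are made up, and the exact Definition API may differ; treat this as a shape rather than a reference):

import appl
from appl import Definition, gen, ppl  # assumed import location for Definition

appl.init()

class InputSpec(Definition):
    """The concept of the program's input specification."""

@ppl
def build_prompt():
    InputSpec("A JSON object with a `name` field.")  # include the description once
    f"Validate the {InputSpec} before processing."  # refer to the concept by class name
    return gen()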

Please see the example for more details.

Cookbook

For more detailed usage and examples, please refer to the cookbook.

Citation and Acknowledgment

If you find APPL helpful, please consider citing our paper:

<to be added>

We would like to thank the open-source community for their contributions. We learned from or used the following libraries in this project: instructor, LiteLLM, LMQL, Guidance, SGLang, and AutoGen.

License

This project is licensed under the terms of the MIT License.
