Streamlining the process of multi-prompting LLMs with chains

flowchat

A Python library for building clean and efficient multi-step prompt chains. It is built on top of OpenAI's Python API.

Flowchat is designed around the idea of a chain. Each chain can start with a system prompt set by .anchor(), after which messages are added as chain links with .link().

Once a chain has been built, a response from the LLM can be pulled with .pull().
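
For example, here is a minimal sketch using the same calls as the examples below (the prompts are illustrative, and it assumes your OpenAI API key is configured as the OpenAI client expects):

from flowchat import Chain

# A minimal chain: set a system prompt, add one message, pull the response.
chain = (
    Chain(model="gpt-3.5-turbo")
    .anchor("You are a helpful geography tutor.")
    .link("Name the longest river in South America.")
    .pull()
)

print(chain.last())  # the most recent response pulled from the LLM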

You can optionally log the chain's messages and responses with .log(), which is useful for debugging and understanding the chain's behavior. Remember to call .log() before .unhook(), though: unhooking resets the chain's current chat conversation.
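
A quick sketch of that ordering (the prompt is illustrative):

answer = (
    Chain()
    .link("Give a one-sentence summary of photosynthesis.")
    .pull()
    .log()     # prints this step's messages and response
    .unhook()  # resets the conversation, so log before this point
    .last()
)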

However, what makes flowchat stand out is the idea of chaining responses together, one chain after another. The previous response can be accessed in the next .link() through a lambda function. This allows for a more natural conversation flow and makes it possible to build up more complex thought processes. You can also use the json_schema argument in .pull() to define a specific JSON schema for the response and extract data with more control.
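
Here is a short sketch combining both ideas; the prompts and schema are illustrative, and json_schema takes an example object, as in the CLI example below:

fact = (
    Chain(model="gpt-4")
    .link("Share an interesting fact about honeybees.")
    .pull().unhook()

    # The previous response is passed into the next link via a lambda.
    .link(lambda response: f"Extract the subject and claim from this fact:\n{response}")
    .pull(
        json_schema={"subject": "honeybees", "claim": "example claim"},
        response_format={"type": "json_object"}
    ).unhook().last()
)

print(fact["subject"], "-", fact["claim"])  # .last() returns the parsed JSON object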

Check out these example chains to get started!

Installation

pip install flowchat

Example Usage

from flowchat import Chain

chain = (
    Chain(model="gpt-3.5-turbo")
    .anchor("You are a historian.")
    .link("What is the capital of France?")
    .pull().log().unhook()

    .link(lambda desc: f"Extract the city in this statement (one word):\n{desc}")
    .pull().log().unhook()

    .anchor("You are an expert storyteller.")
    .link(lambda city: f"Design a basic three-act point-form short story about {city}.")
    .pull(max_tokens=512).log().unhook()

    .anchor("You are a novelist. Your job is to write a novel about a story that you have heard.")
    .link(lambda storyline: f"Briefly elaborate on the first act of the storyline: {storyline}")
    .pull(max_tokens=256, model="gpt-4-1106-preview").log().unhook()

    .link(lambda act: f"Summarize this act in around three words:\n{act}")
    .pull(model="gpt-4")
    .log_tokens()
)

print(f"Result: {chain.last()}") # >> "Artist's Dream Ignites"

Natural Language CLI

from flowchat import Chain, autodedent
import os
import subprocess


def execute_system_command(command):
    # Run the command in a shell; return its stdout, or stderr if it fails.
    try:
        result = subprocess.run(
            command, shell=True, check=True,
            stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
        )
        return result.stdout
    except subprocess.CalledProcessError as e:
        return e.stderr


def main():
    print("Welcome to the Natural Language Command Line Interface!")
    os_system_context = f"You are a shell interpreter assistant running on the {os.name} operating system."

    while True:
        user_input = input("Please enter your command in natural language: ")

        should_exit = (
            Chain()
            .link(autodedent(
                "Does the user want to exit the CLI? Respond with 'YES' or 'NO'.",
                user_input
            )).pull(max_tokens=2).unhook().last()
        )

        if should_exit.lower() in ("yes", "y"):
            print("Exiting the CLI.")
            break

        # Feed the input to flowchat
        command_suggestion_json = (
            Chain(model="gpt-4-1106-preview")
            .anchor(os_system_context)
            .link(autodedent(
                "The user wants to do this: ",
                user_input,
                "Suggest a command that can achieve this in one line without user input or interaction."
            )).pull().unhook()

            .anchor(os_system_context)
            .link(lambda suggestion: autodedent(
                "Extract ONLY the command from this command desciption:",
                suggestion
            ))
            .pull(
                json_schema={"command": "echo 'Hello World!'"},
                response_format={"type": "json_object"}
            ).unhook().last()
        )

        command_suggestion = command_suggestion_json["command"]
        print(f"Suggested command: {command_suggestion}")

        # Execute the suggested command and get the result
        command_output = execute_system_command(command_suggestion)
        print(f"Command executed. Output:\n{command_output}")

        if command_output != "":
            description = (
                Chain().anchor(os_system_context)
                .link(f"Describe this output:\n{command_output}")
                .pull().unhook().last()
            )
            # Logging the description
            print(f"Explanation:\n{description}")

        print("=" * 60)


if __name__ == "__main__":
    main()

This project is under an MIT license.

