A lightweight library for Prompt Programming: Agent as Code

PromptTrail

PromptTrail is a lightweight library that helps you build applications with LLMs.

PromptTrail provides:

  • A unified interface to various LLMs
  • A simple and intuitive DSL for "Agent as Code"
  • Various "Developer Tools" to help you build LLM applications

Quickstart

Installation

pip install prompttrail

or

git clone https://github.com/combinatrix-ai/PromptTrail.git
cd PromptTrail
pip install -e .
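
To confirm the installation succeeded, a quick sanity check (nothing PromptTrail-specific is assumed here) is:

python -c "import prompttrail"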

What can PromptTrail do?

Examples

You can find more examples in the examples directory.

LLM API Call

This is the simplest example of how to use PromptTrail as a thin wrapper around LLMs of various providers.

> import os
> from prompttrail.core import Session, Message
> from prompttrail.models.openai import OpenAIChatCompletionModel, OpenAIModelConfiguration, OpenAIModelParameters
> 
> api_key = os.environ["OPENAI_API_KEY"]
> config = OpenAIModelConfiguration(api_key=api_key)
> parameters = OpenAIModelParameters(model_name="gpt-3.5-turbo", max_tokens=100, temperature=0)
> model = OpenAIChatCompletionModel(configuration=config)
> session = Session(
>   messages=[
>     Message(content="Hey", sender="user"),
>   ]
> )
> message = model.send(parameters=parameters, session=session)

Message(content="Hello! How can I assist you today?", sender="assistant")

If you want streaming output, you can use the send_async method where the provider supports it.

> message_generator = model.send_async(parameters=parameters, session=session)
> for message in message_generator:
>     print(message.content, end="", flush=True)

Hello! How can  # text is printed incrementally

Developer Tools

We provide various tools for developers to build LLM applications. For example, you can mock LLMs for testing.

> # Change model class to mock model class
> model = OpenAIChatCompletionModelMock(configuration=config)
> # and just call the setup method to set up the mock provider
> model.setup(
>     mock_provider=OneTurnConversationMockProvider(
>         conversation_table={
>             "1+1": "1215973652716",
>         },
>         sender="assistant",
>     )
> )
> session = Session(
>     messages=[
>         Message(content="1+1", sender="user"),
>     ]
> )
> message = model.send(parameters=parameters, session=session)
> print(message)

TextMessage(content="1215973652716", sender="assistant")
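
Because the mock model never touches the network, it is handy in unit tests. Here is a minimal pytest-style sketch that reuses only the classes shown above (config and parameters are assumed to be defined as in the earlier examples):

def test_mock_model_answers_from_table():
    # Swap in the mock model: replies come from conversation_table,
    # not from the OpenAI API, so the test runs offline.
    model = OpenAIChatCompletionModelMock(configuration=config)
    model.setup(
        mock_provider=OneTurnConversationMockProvider(
            conversation_table={"ping": "pong"},
            sender="assistant",
        )
    )
    session = Session(messages=[Message(content="ping", sender="user")])
    message = model.send(parameters=parameters, session=session)
    assert message.content == "pong"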

Agent as Code

You can write a simple agent as shown below. Even without reading the documentation, you can tell what this agent does!

template = LinearTemplate(
    [
        MessageTemplate(
            role="system",
            content="You're a math teacher bot.",
        ),
        LoopTemplate(
            [
                UserInputTextTemplate(
                    role="user",
                    description="Let's ask a question to AI:",
                    default="Why can't you divide a number by zero?",
                ),
                GenerateTemplate(
                    role="assistant",
                ),
                MessageTemplate(role="assistant", content="Are you satisfied?"),
                UserInputTextTemplate(
                    role="user",
                    description="Input:",
                    default="Yes.",
                ),
                # Let the LLM decide whether to end the conversation or not
                MessageTemplate(
                    role="assistant",
                    content="The user has stated their feedback."
                    + "If you think the user is satisfied, you must answer `END`. Otherwise, you must answer `RETRY`."
                ),
                check_end := GenerateTemplate(
                    role="assistant",
                ),
            ],
            exit_condition=BooleanHook(
                condition=lambda state: ("END" == state.get_last_message().content.strip())
            ),
        ),
    ],
)

runner = CommandLineRunner(
    model=OpenAIChatCompletionModel(
        configuration=OpenAIModelConfiguration(
            api_key=os.environ.get("OPENAI_API_KEY", "")
        )
    ),
    parameters=OpenAIModelParameters(model_name="gpt-4"),
    template=template,
    user_interaction_provider=UserInteractionTextCLIProvider(),
)

runner.run()

You can talk with the agent on your console like below:

===== Start =====
From: 📝 system
message:  You're a math teacher bot.
=================
Let's ask a question to AI:
From: 👤 user
message:  Why can't you divide a number by zero?
=================
From: 🤖 assistant
message:  Dividing a number by zero is undefined in mathematics. Here's why:

Let's say we have a division operation a/b. This operation asks the question: "how many times does b fit into a?" If b is zero, the question becomes "how many times does zero fit into a?", and the answer is undefined because zero can fit into a an infinite number of times.

Moreover, if we look at the operation from the perspective of multiplication (since division is the inverse of multiplication), a/b=c means that b*c=a. If b is zero, there's no possible value for c that would satisfy the equation, because zero times any number is always zero, not a.

So, due to these reasons, division by zero is undefined in mathematics.
=================
From: 🤖 assistant
message:  Are you satisfied?
=================
Input:
From: 👤 user
message:  Yes.
=================
From: 🤖 assistant
message:  The user has stated their feedback. If you think the user is satisfied, you must answer `END`. Otherwise, you must answer `RETRY`.
=================
From: 🤖 assistant
message:  END
=================
====== End ======

Go to the examples directory for more examples.

Tooling

You can use function calling! With function calling, you give the LLM instructions on how to use a tool; the LLM then returns the tool arguments, and you pass the tool's result back to the LLM. Therefore, you need:

  • an explanation of the tool in a form the LLM can understand
  • handling of multi-turn conversations
  • validation of the tool arguments returned by the LLM
  • execution of the function and returning the result to the LLM

PromptTrail handles all of these for you. You can define your own Tools and use them in your templates: inherit Tool, ToolArgument, and ToolResult and add type annotations. PromptTrail will automatically generate descriptions for the LLM and let the LLM use the tool. Execution and validation are also handled by PromptTrail. Let's look at a simple weather forecast tool as an example:

import enum
from typing import Any, Dict, Optional, Sequence

class Place(ToolArgument):
    description: str = "The location to get the weather forecast"
    value: str

class TemperatureUnitEnum(enum.Enum):
    Celsius = "Celsius"
    Fahrenheit = "Fahrenheit"

class TemperatureUnit(ToolArgument):
    description: str = "The unit of temperature"
    value: Optional[TemperatureUnitEnum]

class WeatherForecastResult(ToolResult):
    temperature: int
    weather: str

    def show(self) -> Dict[str, Any]:
        return {"temperature": self.temperature, "weather": self.weather}

class WeatherForecastTool(Tool):
    name = "get_weather_forecast"
    description = "Get the current weather in a given location and date"
    argument_types = [Place, TemperatureUnit]
    result_type = WeatherForecastResult

    def _call(self, args: Sequence[ToolArgument], state: State) -> ToolResult:
        # Implement real API call here
        return WeatherForecastResult(temperature=0, weather="sunny")

template = LinearTemplate(
    templates=[
        MessageTemplate(
            role="system",
            content="You're an AI weather forecast assistant that help your users to find the weather forecast.",
        ),
        MessageTemplate(
            role="user",
            content="What's the weather in Tokyo tomorrow?",
        ),
        OpenAIGenerateWithFunctionCallingTemplate(
            role="assistant",
            functions=[WeatherForecastTool()],
        ),
    ]
)
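
To actually run this template and produce the transcript below, wire it into a runner exactly as in the agent example above (a sketch reusing the same runner classes; the model choice here is an assumption):

runner = CommandLineRunner(
    model=OpenAIChatCompletionModel(
        configuration=OpenAIModelConfiguration(
            api_key=os.environ.get("OPENAI_API_KEY", "")
        )
    ),
    parameters=OpenAIModelParameters(model_name="gpt-3.5-turbo"),
    template=template,
    user_interaction_provider=UserInteractionTextCLIProvider(),
)
runner.run()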

The conversation will look like the following:

===== Start =====
From: 📝 system
message:  You're an AI weather forecast assistant that helps your users find the weather forecast.
=================
From: 👤 user
message:  What's the weather in Tokyo tomorrow?
=================
From: 🤖 assistant
data:  {'function_call': {'name': 'get_weather_forecast', 'arguments': {'place': 'Tokyo', 'temperatureunit': 'Celsius'}}}
=================
From: 🧮 function
message:  {"temperature": 0, "weather": "sunny"}
=================
From: 🤖 assistant
message:  The weather in Tokyo tomorrow is expected to be sunny with a temperature of 0 degrees Celsius.
=================
====== End ======

See the documentation for more information.

Next

  • Provide a way to export / import sessions
  • Better error messages that help debugging
  • More default tools
    • Vector Search Integration
    • Code Execution
  • TOML input/output for templates
  • Repository for templates
  • Job queue and server
  • Asynchronous execution (a more complex runner)
  • Local LLMs

File an issue if you have any requests!

License

  • PromptTrail is licensed under the MIT License.

Contributing

  • Contributions are more than welcome!
  • See CONTRIBUTING for more details.

Q&A

Why bother yet another LLM library?

  • PromptTrail is designed to be lightweight and easy to use.
  • Calling LLMs is actually not that complicated, but LLM libraries are getting more and more complex as they embrace more features.
  • PromptTrail aims to provide a simple interface for LLMs and let developers implement their own features.

Showcase

  • If you build something with PromptTrail, please share it with us via Issues or Discussions!
