
Promptolution

Promptolution is a library that provides a modular and extensible framework for implementing prompt tuning experiments. It offers a user-friendly interface to assemble the core components for various prompt optimization tasks.

In addition, this repository contains our experiments for the paper "Towards Cost-Effective Prompt Tuning: Evaluating the Effects of Model Size, Model Family and Task Descriptions in EvoPrompt".

This project was developed by Timo Heiß, Moritz Schlager and Tom Zehle.

Installation

Use pip to install our library:

pip install promptolution

Alternatively, clone the repository and run

poetry install

to install the necessary dependencies. You may need to install pipx and Poetry first.

Documentation

Comprehensive documentation with an API reference is available at https://finitearth.github.io/promptolution/.

Usage

Create API keys for the models you want to use and store them as plain-text files:

  • OpenAI: store token in openaitoken.txt
  • Anthropic: store token in anthropictoken.txt
  • DeepInfra (for Llama): store token in deepinfratoken.txt

Optimization Algorithms to choose from

| Name         | # Init Population | Exploration | Costs | Convergence Speed | Parallelizable | Utilizes Failure Cases |
|--------------|-------------------|-------------|-------|-------------------|----------------|------------------------|
| EvoPrompt DE | 8-12              | 👍          | 💲    | ⚡⚡              |                |                        |
| EvoPrompt GA | 8-12              | 👍          | 💲    | ⚡⚡              |                |                        |
| OPro         | 0                 | 👎          | 💲💲  |                   |                |                        |
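To illustrate why the EvoPrompt variants need an initial population of 8-12 prompts, here is a toy genetic loop over prompt strings. The scoring and mutation functions are stand-ins (in the library, an LLM performs crossover/mutation and a Task supplies the evaluation); none of the names below are promptolution's actual API:

```python
import random


def toy_score(prompt: str) -> float:
    """Stand-in fitness: rewards prompts containing the task keyword."""
    return prompt.count("classify") + 0.1 * len(prompt.split())


def toy_mutate(parent_a: str, parent_b: str) -> str:
    """Stand-in crossover: splice the first half of one parent onto the second half of another."""
    words_a, words_b = parent_a.split(), parent_b.split()
    return " ".join(words_a[: len(words_a) // 2] + words_b[len(words_b) // 2 :])


def evolve(population: list[str], generations: int = 5, seed: int = 0) -> str:
    """Keep the best half each generation, refill with children, return the best prompt."""
    rng = random.Random(seed)
    for _ in range(generations):
        scored = sorted(population, key=toy_score, reverse=True)
        survivors = scored[: max(2, len(scored) // 2)]
        children = [
            toy_mutate(*rng.sample(survivors, 2))
            for _ in range(len(population) - len(survivors))
        ]
        population = survivors + children
    return max(population, key=toy_score)


best = evolve([
    "classify the text",
    "label this input please",
    "classify sentiment now",
    "you are a helpful model",
])
```

Because survivors are carried over unchanged (elitism), the best prompt never gets worse across generations, which is why a reasonably diverse starting population matters for EvoPrompt-style optimizers.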

Core Components

  • Task: Encapsulates initial prompts, dataset features, targets, and evaluation methods.
  • Predictor: Implements the prediction logic, interfacing between the Task and LLM components.
  • LLM: Unifies the process of obtaining responses from language models, whether locally hosted or accessed via API.
  • Optimizer: Implements prompt optimization algorithms, utilizing the other components during the optimization process.
  • Exemplar Selectors: Implements algorithms for selecting few-shot examples to add to the prompt.
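The division of labor between these components can be sketched with plain Python stubs. All class and method names below are illustrative only; consult the API reference for promptolution's actual interfaces:

```python
from dataclasses import dataclass


@dataclass
class MockLLM:
    """Stands in for a locally hosted or API-backed language model."""

    def get_response(self, prompt: str) -> str:
        return "positive" if "great" in prompt else "negative"


@dataclass
class Task:
    """Holds the dataset features, targets, and evaluation logic."""

    xs: list[str]
    ys: list[str]

    def evaluate(self, predictions: list[str]) -> float:
        return sum(p == y for p, y in zip(predictions, self.ys)) / len(self.ys)


@dataclass
class Predictor:
    """Bridges Task and LLM: builds the full model input and collects outputs."""

    llm: MockLLM

    def predict(self, prompt: str, xs: list[str]) -> list[str]:
        return [self.llm.get_response(f"{prompt}\n{x}") for x in xs]


task = Task(xs=["great movie", "awful plot"], ys=["positive", "negative"])
predictor = Predictor(llm=MockLLM())
score = task.evaluate(predictor.predict("Classify the sentiment:", task.xs))
```

An Optimizer would sit on top of this: it proposes candidate prompts and uses the Predictor and Task to score each one, keeping the component responsibilities cleanly separated.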

Key Features

  • Modular and object-oriented design
  • Extensible architecture
  • Easy-to-use interface for assembling experiments
  • Parallelized LLM requests for improved efficiency
  • Integration with langchain for standardized LLM API calls
  • Detailed logging and callback system for optimization analysis

Getting Started

Take a look at our getting started notebook: getting_started.py

Reproduce our Experiments

We provide scripts and configs for all our experiments. Run an experiment from a config via:

poetry run python scripts/experiment_runs.py --experiment "configs/<my_experiment>.ini"

where <my_experiment>.ini is a config based on our templates.
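For orientation, a config might look roughly like the sketch below. Every section and key name here is hypothetical; use the template files shipped in configs/ as the authoritative reference:

```ini
; Hypothetical sketch — see the templates in configs/ for the actual keys.
[experiment]
task_name = my_classification_task
optimizer = evoprompt_ga
n_steps = 10
init_population_size = 10

[llm]
model = my-model-name
```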

This project was developed for the seminar "AutoML in the age of large pre-trained models" at LMU Munich.
