A framework for optimizing prompts through multi-task evaluation and iterative improvement

Promptim

Experimental prompt optimization library.

Example:

Clone the repo, then set up the environment and create the example dataset:

uv venv
source .venv/bin/activate
uv pip install -e .
python examples/tweet_writer/create_dataset.py
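
Note: create_dataset.py uploads the dataset to LangSmith, and the default optimizer calls Claude, so both steps assume credentials in your environment. The variable names below are the standard ones for those services (an assumption, not promptim-specific documentation):

export LANGSMITH_API_KEY=<your-langsmith-api-key>
export ANTHROPIC_API_KEY=<your-anthropic-api-key>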

Then run prompt optimization:

promptim --task examples/tweet_writer/config.json --version 1

Create a custom task

Currently, promptim runs over individual tasks. A task defines the dataset (with train/dev/test splits), initial prompt, evaluators, and other information needed to optimize your prompt.

    name: str  # The name of the task
    description: str = ""  # A description of the task (optional)
    evaluator_descriptions: dict = field(default_factory=dict)  # Descriptions of the evaluation metrics
    dataset: str  # The name of the dataset to use for the task
    initial_prompt: PromptConfig  # The initial prompt configuration.
    evaluators: list[Callable[[Run, Example], dict]]  # List of evaluation functions
    system: Optional[SystemType] = None  # Optional custom function with signature (current_prompt: ChatPromptTemplate, inputs: dict) -> outputs
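
If you need custom logic between the prompt and your evaluators, you can supply a system function matching the signature documented above. A minimal sketch, assuming a LangChain chat model and the "tweet" output key used by the example task below (neither is required by promptim):

from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate


def my_system(current_prompt: ChatPromptTemplate, inputs: dict) -> dict:
    """Format the current prompt, call the model, and shape the outputs."""
    chain = current_prompt | ChatAnthropic(model="claude-3-5-sonnet-20241022")
    message = chain.invoke(inputs)
    # Return a dict keyed the way your evaluators expect to read run.outputs
    return {"tweet": message.content}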

Let's walk through the example "tweet writer" task to see what's expected. First, view the config.json file:

{
  "optimizer": {
    "model": {
      "model": "claude-3-5-sonnet-20241022",
      "max_tokens_to_sample": 8192
    }
  },
  "task": "examples/tweet_writer/task.py:tweet_task"
}

The first part contains configuration for the optimizer process. For now, this is a simple configuration for the default (and only) metaprompt optimizer. You can control which LLM is used via the model configuration.
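
For instance, assuming the model block keeps this shape, swapping in a different Claude model is a one-line change (a sketch of the same config, not a full list of supported fields):

{
  "optimizer": {
    "model": {
      "model": "claude-3-5-haiku-20241022",
      "max_tokens_to_sample": 8192
    }
  },
  "task": "examples/tweet_writer/task.py:tweet_task"
}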

The second part is the path to the task file itself. We will review this below.

def multiple_lines(run, example):
    """Evaluate if the tweet contains multiple lines."""
    result = run.outputs.get("tweet", "")
    score = int("\n" in result)
    comment = "Pass" if score == 1 else "Fail"
    return {
        "key": "multiline",
        "score": score,
        "comment": comment,
    }


tweet_task = dict(
    name="Tweet Generator",
    dataset="tweet-optim",
    initial_prompt={
        "identifier": "tweet-generator-example:c39837bd",
    },
    # See the starting prompt here:
    # https://smith.langchain.com/hub/langchain-ai/tweet-generator-example/c39837bd
    evaluators=[multiple_lines],
    evaluator_descriptions={
        "under_180_chars": "Checks if the tweet is under 180 characters. 1 if true, 0 if false.",
        "no_hashtags": "Checks if the tweet contains no hashtags. 1 if true, 0 if false.",
        "multiline": "Fails if the tweet is not multiple lines. 1 if true, 0 if false. 0 is bad.",
    },
)

We've defined a simple evaluator to check that the output spans multiple lines.
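
The other two metrics named in evaluator_descriptions (under_180_chars and no_hashtags) follow the same contract: take a run and an example and return a dict with "key", "score", and "comment". The bodies below are illustrative sketches, not code shipped with the example:

def under_180_chars(run, example):
    """Evaluate if the tweet is under 180 characters."""
    tweet = run.outputs.get("tweet", "")
    score = int(len(tweet) < 180)
    return {
        "key": "under_180_chars",
        "score": score,
        "comment": "Pass" if score == 1 else "Fail",
    }


def no_hashtags(run, example):
    """Evaluate if the tweet contains no hashtags."""
    tweet = run.outputs.get("tweet", "")
    score = int("#" not in tweet)
    return {
        "key": "no_hashtags",
        "score": score,
        "comment": "Pass" if score == 1 else "Fail",
    }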

We have also selected an initial prompt to optimize. You can check this out in the hub.

By modifying the above values, you can configure your own task.

CLI Arguments

The CLI is experimental.

Usage: promptim [OPTIONS]

  Optimize prompts for different tasks.

Options:
  --version [1]                [required]
  --task TEXT                  Task to optimize. You can pick one off the
                               shelf or select a path to a config file.
                               Example: 'examples/tweet_writer/config.json'
  --batch-size INTEGER         Batch size for optimization
  --train-size INTEGER         Training size for optimization
  --epochs INTEGER             Number of epochs for optimization
  --debug                      Enable debug mode
  --use-annotation-queue TEXT  The name of the annotation queue to use. Note:
                               we will delete the queue whenever you resume
                               training (on every batch).
  --no-commit                  Do not commit the optimized prompt to the hub
  --help                       Show this message and exit.

We have created a few off-the-shelf tasks:

  • tweet: write tweets
  • simpleqa: really hard Q&A
  • scone: NLI
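
For example, to optimize the off-the-shelf tweet task for two epochs without committing the result to the hub (flag values here are illustrative):

promptim --task tweet --version 1 --epochs 2 --no-commit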

