
A Python package for standardizing prompts for LLMs.


🐧 ペンペン (PenPen)

Developer guide

Install

To install PenPen from PyPI, use

pip install PenPen-AI

To install PenPen from a local copy in editable mode (to make changes), use

pip install -e {path_to_local_copy}

Deploy

Steps to deploy to PyPI:

# install setuptools, wheel, and twine
pip install setuptools wheel twine

# package the project
python setup.py sdist bdist_wheel

# deploy (make sure credentials are in ~/.pypirc or passed as arguments)
twine upload dist/*

CLI User Guide

Install

pip install PenPen-AI

Run

# run a prompt
prompt-runner run -p {path_to_prompt} -t {path_to_task}

# get usage
prompt-runner --help

Prompt folder structure and some details

PromptRunner is a CLI that lets you run prompts for testing purposes. It accepts the following arguments:

  • -p --prompt: the folder of the prompt to run
  • -t --task: the task to run for the given prompt; task-specific files must be in the prompt directory
  • -o --output-dir: the directory where the response is written; if omitted, the response is saved in the output folder of the executed task

Folder structure:

{prompt_folder}/
  - openai.json # contains the openai client configuration
  - persona.md # contains the persona prompt
  - task_template.md # contains the task template prompt
  - functions.py # (optional) contains the functions to be used for this prompt
  - {task}/ # folder of a task
    - facts.json (optional) # array of fact items, contains the facts specific to this task
    - facts_filter.json (optional) # array of fact tag ids to be filtered
    - task_parameter_1.md (optional) # contains a task parameter to be populated in the template
    ...
    - task_parameter_n.md (optional) # n-th task parameter

Info on the files

openai.json

Contains the OpenAI arguments used for this specific call; all fields are optional except model:

{
    "model": "gpt-3.5-turbo-0613", // one of "gpt-3.5-turbo-0613", "gpt-4-0613"
    "max_tokens": 1000, // max tokens for the response
    "stream": true, // whether to stream the response
    "temperature": 0.3, // sampling temperature
    "top_p": null, // omit when using temperature
    "n": 1, // currently only 1 is supported, so this can be omitted
    "max_retries_after_openai_error": 5, // how many times to retry after an OpenAI error before failing
    "retry_delay_seconds": 15 // how many seconds to wait before retrying
}
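Note that the block above is annotated with comments for documentation purposes and would not parse as JSON. A minimal valid openai.json might look like this (field values illustrative):

```json
{
    "model": "gpt-3.5-turbo-0613",
    "max_tokens": 1000,
    "stream": true,
    "temperature": 0.3,
    "max_retries_after_openai_error": 5,
    "retry_delay_seconds": 15
}
```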
persona.md

A Markdown template that holds the persona prompt.

task_template.md

A Markdown template that holds the task prompt. To add task parameters, use the template syntax {task_parameter_name} (see the examples for more details).

functions.py

A Python file containing functions to be used when running the prompt (see the examples for more details). It is important that the variables function_wrappers, function_call, and max_consecutive_function_calls are defined.
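A hypothetical skeleton of such a functions.py is sketched below. The exact shapes of function_wrappers and function_call are assumptions modeled on the OpenAI function-calling API; check the package's examples for the real contract.

```python
# Hypothetical functions.py skeleton (shapes are assumptions, not the
# package's documented contract).

def get_weather(location: str) -> str:
    """Example function the model may call."""
    return f"Sunny in {location}"

# Assumed: a list of function schemas in OpenAI function-calling format.
function_wrappers = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a location.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }
]
function_call = "auto"               # let the model decide when to call
max_consecutive_function_calls = 3   # safety cap on consecutive call loops
```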

{task}/facts.json and {task}/facts_filter.json

{task}/facts.json is a JSON file containing an array of facts to be used for this task; each fact is an object with the following fields:

{
    "tag": "fact_id", // a unique id for the fact
    "content": "fact content" // the fact content
}

{task}/facts_filter.json is a JSON file containing an array of fact ids to filter by. If this file is not present, all facts are used; otherwise only the facts whose ids appear in the filter are used.

{task}/{task_parameter_name}.md

If the task template contains task parameters, there must be one file per parameter. Each file name must match the corresponding task parameter name, the file must be a Markdown file, and its content provides the parameter's value.
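The substitution presumably works like Python's str.format-style placeholders; a minimal sketch (parameter names and values are illustrative, and the real implementation may differ):

```python
def render_template(template: str, params: dict[str, str]) -> str:
    """Fill {task_parameter_name} placeholders with the contents of the
    matching {task_parameter_name}.md files (passed here as a dict)."""
    return template.format(**params)

template = "Summarize the following text:\n{input_text}"
print(render_template(template, {"input_text": "PenPen standardizes prompts."}))
```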

Chain

With the chain command it is possible to chain multiple prompt runs: the output of the n-th prompt run is appended to the task_template of the (n+1)-th prompt run.

prompt-runner chain -p {prompt_path_1},{task_name_1} {prompt_path_2},{task_name_2} ... {prompt_path_n},{task_name_n} -o {output_path}

The output of each chain execution is stored in a chain_{timestamp} folder in the working directory, unless an output directory is specified with the -o argument.
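Conceptually, the chaining loop looks like this (an illustrative sketch, not the actual implementation; run_prompt is a stand-in for a real prompt execution):

```python
def run_prompt(task_template: str) -> str:
    # Stand-in for executing a prompt against the model.
    return f"<response to: {task_template}>"

def chain(templates: list[str]) -> str:
    """Run templates in order, appending each output to the next template."""
    output = ""
    for template in templates:
        output = run_prompt(template + output)
    return output

print(chain(["A", "B"]))  # the response to A is appended to template B
```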

Download files


Source Distribution

PenPen-AI-0.0.3.tar.gz (12.9 kB view details)

Uploaded Source

Built Distribution

PenPen_AI-0.0.3-py3-none-any.whl (15.1 kB view details)

Uploaded Python 3

File details

Details for the file PenPen-AI-0.0.3.tar.gz.

File metadata

  • Download URL: PenPen-AI-0.0.3.tar.gz
  • Upload date:
  • Size: 12.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.6

File hashes

Hashes for PenPen-AI-0.0.3.tar.gz

  • SHA256: b1a9dbc5b0b602c8b29edc5812afd05acbd3b63bfabe7547c28a7e6d554aad21
  • MD5: 76b89ca908452566de1e55c6d5dc87c1
  • BLAKE2b-256: 32767e3fcc57771645bd7cf17a6106dc97f3f95caf1cc5a55d7372e60fc9b5f1


File details

Details for the file PenPen_AI-0.0.3-py3-none-any.whl.

File metadata

  • Download URL: PenPen_AI-0.0.3-py3-none-any.whl
  • Upload date:
  • Size: 15.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.6

File hashes

Hashes for PenPen_AI-0.0.3-py3-none-any.whl

  • SHA256: d91654ef68e03a72f91aeda4ec9997a74f67e1e5bd56c3a48a50d2026944eec3
  • MD5: ac6963bdffeab1102ee47de3b3766519
  • BLAKE2b-256: e5169a10e53b6729e88036516a4d9c46e544acab3c29d4cd18f5196769f16827

