
PyPrompt

LLM Prompt Validation and Manipulation

This project provides tools for creating, validating, and manipulating LLM prompts. This includes limiting the number of tokens used in a prompt and distributing those tokens among the parts of the LLM message.

Note that the tokenizer used here may not exactly match the one used by the LLM, so the actual number of tokens may be greater or less than this module's estimate. If exceeding the context window would be problematic, you can add a safety margin to this module's estimate by simply passing in a token limit smaller than the context window.
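As a minimal sketch of the margin idea (the whitespace tokenizer below is a stand-in for illustration only, not the tokenizer this module actually uses, and the constants are hypothetical):

```python
# Stand-in tokenizer for illustration; pyprompt's real tokenizer will
# count tokens differently, which is exactly why a margin helps.
def count_tokens(text: str) -> int:
    return len(text.split())

CONTEXT_WINDOW = 4096  # hypothetical model context window
MARGIN = 256           # hypothetical safety margin for tokenizer mismatch

# Pass this reduced budget to the validator instead of the full window,
# so an underestimate by the local tokenizer still fits the model.
token_budget = CONTEXT_WINDOW - MARGIN

prompt = "Summarize the following article in three sentences."
fits = count_tokens(prompt) <= token_budget
```

The margin trades a little usable capacity for confidence that the real token count will not overflow the model's window.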

Dependencies

The required packages can be installed by running poetry install. This depends on poetry being installed locally; installation instructions are available in the Poetry documentation.

Usage

For direction on how to use this module, see the example function in pyprompt/prompt_validation.py and read the relevant docstrings as necessary.
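The docstrings there describe the real API. As an illustrative sketch only (the function name and signature below are hypothetical, not pyprompt's), distributing a token budget among message parts might look like:

```python
def distribute_budget(total: int, weights: dict[str, int]) -> dict[str, int]:
    """Split a token budget among message parts proportionally to weights.

    Hypothetical helper for illustration; not pyprompt's actual API.
    """
    total_weight = sum(weights.values())
    shares = {name: total * w // total_weight for name, w in weights.items()}
    # Integer division can leave a few tokens unassigned; give the
    # remainder to the first part so the shares sum to the full budget.
    remainder = total - sum(shares.values())
    first = next(iter(shares))
    shares[first] += remainder
    return shares
```

For example, distribute_budget(1000, {"system": 1, "history": 3, "user": 1}) allocates 200, 600, and 200 tokens respectively, always summing to the full budget.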

Tests

The test suite can be run using poetry run pytest.

Download files

Source distribution: pyprompt-0.1.0.tar.gz (9.3 kB)

Built distribution: pyprompt-0.1.0-py3-none-any.whl (15.1 kB)
