# promptools

Useful utilities for prompt engineering.
## Installation

```sh
pip install promptools  # or any other dependency manager you like
```

Note that the validation features use `pydantic>=2` as an optional dependency. You can install it with `pip install promptools[validation]`.
## API References

### extractors

#### extract_json

Parse JSON from raw LLM response text.
##### Usage

Detects and parses the last JSON block in the input string.

```python
def extract_json(text: str, /, fallback: F) -> JSON | F: ...
```

It returns `fallback` if it fails to detect or parse JSON. Note that `fallback` defaults to `None`.
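To illustrate the fallback behavior, here is a minimal sketch of the idea (not promptools' actual implementation): scan for fenced JSON blocks, try the last one first, and return the fallback when nothing parses.

```python
import json
import re
from typing import Any

def extract_last_json(text: str, fallback: Any = None) -> Any:
    # Collect all fenced JSON blocks; the last parseable one wins.
    # (`{3} in the pattern matches a run of three backticks.)
    blocks = re.findall(r"`{3}json\s*(.*?)`{3}", text, re.DOTALL)
    for candidate in reversed(blocks):
        try:
            return json.loads(candidate)
        except json.JSONDecodeError:
            continue  # try an earlier block before giving up
    return fallback

print(extract_last_json("no JSON here", fallback=[]))  # → []
```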
```python
def extract_json(text: str, /, fallback: F, expect: Type[M]) -> M | F: ...
```

You can pass a `pydantic.BaseModel` subclass or a `TypeAlias` as the `expect` parameter, and `pydantic` will validate the parsed value against it.
##### Examples

###### Classification example

Imagine that you are using an LLM for a classification task:
````python
from typing import TypedDict

from promptools.extractors import extract_json

class Item(TypedDict):
    index: int
    label: str

original_text = """
The result is:

```json
[
    {"index": 0, "label": "A"},
    {"index": 1, "label": "B"}
]
```
"""

print(extract_json(original_text, [], list[Item]))
````
The output will be:

```
[{'index': 0, 'label': 'A'}, {'index': 1, 'label': 'B'}]
```
###### Streaming JSON example

Imagine that you are trying to parse an incomplete JSON string, such as one cut off mid-stream:

```python
from promptools.extractors import extract_json

original_text = '{"results": [{"index": 1}, {'

print(extract_json(original_text))
```
The output will be:

```
{'results': [{'index': 1}, {}]}
```
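Completing a truncated JSON string like this boils down to appending the missing closing brackets. A minimal sketch of the idea (not promptools' actual algorithm) tracks unclosed brackets outside of string literals:

```python
import json

def complete_json(text: str) -> str:
    # Append the closing brackets needed to balance truncated JSON.
    # Truncated string literals are not handled in this sketch.
    stack = []
    in_string = False
    escaped = False
    for ch in text:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "[{":
            stack.append("]" if ch == "[" else "}")
        elif ch in "]}" and stack:
            stack.pop()
    return text + "".join(reversed(stack))

print(json.loads(complete_json('{"results": [{"index": 1}, {')))
# → {'results': [{'index': 1}, {}]}
```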
### openai

#### count_token

Count the number of tokens in a prompt.
##### Usage

```python
def count_token(prompt: str | list[str], enc: Encoding | None = None) -> int: ...
```

Provide a prompt or a list of prompts to get its token count. The second parameter is a `tiktoken.Encoding` instance; it defaults to `get_encoding("cl100k_base")` if not provided. The default `tiktoken.Encoding` instance is cached, so it is not re-created on every call.
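That caching pattern can be sketched with `functools.lru_cache`. The stub below is a stand-in that returns a string rather than a real `tiktoken.Encoding`, so it runs without tiktoken installed; it only demonstrates that the expensive construction happens once:

```python
from functools import lru_cache

construction_count = 0  # tracks how often the "encoding" is built

@lru_cache(maxsize=1)
def default_encoding() -> str:
    # Stand-in for tiktoken.get_encoding("cl100k_base"),
    # which is relatively expensive to construct.
    global construction_count
    construction_count += 1
    return "cl100k_base"

default_encoding()
default_encoding()
print(construction_count)  # → 1: built once, then served from the cache
```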
```python
def count_token(prompt: dict | list[dict], enc: Encoding | None = None) -> int: ...
```

The prompt can also be a single message or a list of messages. Each message should be a dict matching the schema below:

```python
class Message(TypedDict):
    role: str
    content: str
    name: NotRequired[str]
```
##### Examples

###### Plain text

```python
from tiktoken import encoding_for_model

from promptools.openai import count_token

print(count_token("hi", encoding_for_model("gpt-3.5-turbo")))
```

The output will be:

```
1
```
###### List of plain texts

```python
from promptools.openai import count_token

print(count_token(["hi", "hello"]))
```

The output will be:

```
2
```
###### Single message

```python
from promptools.openai import count_token

count_token({"role": "user", "content": "hi"})
```

The output will be:

```
5
```
###### List of messages

```python
from promptools.openai import count_token

count_token([
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "Hello! How can I assist you today?"},
])
```

The output will be:

```
21
```
###### Integrating with promplate

```python
from promplate.prompt.chat import U, A, S

from promptools.openai import count_token

count_token([
    S @ "background" > "You are a helpful assistant.",
    U @ "example_user" > "hi",
    A @ "example_assistant" > "Hello! How can I assist you today?",
])
```

The output will be:

```
40
```