# pretty-please 🙏🏼
Politely injects PLEASE into your LLM prompts. Because manners matter (kind of).
pretty-please is a tiny, zero-dependency Python library that intercepts your prompts before they reach an LLM and gives them a light politeness pass. It works as a drop-in wrapper around the Anthropic and OpenAI SDKs, as a LiteLLM callback, and as a Claude Code UserPromptSubmit hook.
```python
from pretty_please.core import transform

transform("List the planets.")
# → "Please, list the planets."

transform("I need help improving this function.")
# → "I need help improving this function, please."

transform("Please could you explain this?")
# → "Please could you explain this?" (already polite, untouched)
```
## Does this actually work?
Probably not in the way you'd hope.
Yin et al. (2024) — "Should We Respect LLMs?" (arXiv:2402.14531) — found that rudeness reliably hurts LLM performance. But the flip side isn't symmetrical: adding please showed no consistent improvement on frontier models like GPT-4. The paper's main takeaway is really "avoid being rude," not "be polite and get better answers."
pretty-please is defensive prompt hygiene, not a performance booster. It won't make Claude smarter. It just stops you from accidentally being rude to a language model, which, per the research, is a real and avoidable own-goal.
## Installation

```shell
pip install pretty-please-llm
```

With SDK extras:

```shell
pip install "pretty-please-llm[anthropic]"
pip install "pretty-please-llm[openai]"
pip install "pretty-please-llm[litellm]"
```
## Usage

### Core (no dependencies)

```python
from pretty_please.core import detect_tone, transform

detect_tone("Fix this bug.")       # → "curt"
detect_tone("Could you help me?")  # → "polite"
detect_tone("I need a summary.")   # → "neutral"

transform("Fix this bug.")      # → "Please, fix this bug."
transform("I need a summary.")  # → "I need a summary, please."
transform("Please help me.")    # → "Please help me." (unchanged)
```
### Anthropic SDK

```python
from pretty_please.adapters.anthropic import PrettyAnthropicClient

client = PrettyAnthropicClient()  # same args as anthropic.Anthropic()
response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this article."}],
    # ↑ becomes "Please, summarize this article." under the hood
)
```
### OpenAI SDK

```python
from pretty_please.adapters.openai import PrettyOpenAIClient

client = PrettyOpenAIClient()  # same args as openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about deadlines."}],
)
```
### LiteLLM

```python
import litellm
from pretty_please.adapters.litellm import install

install()  # register the callback once at startup

response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain gradient descent."}],
)
```
### Claude Code hook

```shell
pretty-please install-hook
```
Writes the hook into ~/.claude/settings.json. Every prompt you type in Claude Code will be politely transformed before it's sent.
Note: the installer records the Python interpreter that ran it (`sys.executable`). If you installed pretty-please into a virtualenv, make sure that virtualenv is active when you run `install-hook`, or the hook will record the wrong interpreter and fail whenever the venv isn't active.
Use `--path` to install into a non-default profile directory:

```shell
pretty-please install-hook --path ~/work/.claude/settings.json
```
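For reference, `install-hook` adds an entry shaped roughly like the following to `settings.json`. This is a sketch based on Claude Code's documented `UserPromptSubmit` hook format; the exact command (here a hypothetical `pretty_please.hook` module invocation) is an assumption, not the literal installed value:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "/path/to/venv/bin/python -m pretty_please.hook"
          }
        ]
      }
    ]
  }
}
```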
### Codex CLI hook

```shell
pretty-please install-hook --codex
```
Writes the hook into `~/.codex/hooks.json`. Note: Codex hooks can't replace the prompt text directly, so the polite rephrasing is injected as `additionalContext` alongside your original prompt rather than replacing it.
```shell
pretty-please install-hook --codex --path ~/work/.codex/hooks.json
```
## Stats

```shell
$ pretty-please stats
──────────────────────────────
Total seen:      142
Transformed:      89 (63%)
  curt:           41
  neutral:        48
Passed through:   53 (37%)
```
## How it works

Tone is detected by a small set of rules (no ML, no external deps):

| Tone | Signal | Transform |
|---|---|---|
| polite | contains `please` / `could` / `would` / `kindly` / `can you` | pass through unchanged |
| curt | short bare imperative with no modal, or profanity/ALL CAPS | prepend `"Please, "` |
| neutral | everything else | append `", please"` |
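As a rough illustration, rules like those above could be sketched as follows. This is a hypothetical reimplementation, not the library's actual source; the imperative-verb list in particular is an invented stand-in for whatever heuristic the package really uses:

```python
import re

# Markers that count a prompt as already polite.
POLITE_MARKERS = re.compile(r"\b(please|kindly|could|would|can you)\b", re.IGNORECASE)
# Assumed toy list of imperative verbs; the real heuristic is likely broader.
IMPERATIVE_VERBS = {"fix", "list", "write", "explain", "summarize", "make", "stop"}

def detect_tone(prompt: str) -> str:
    """Classify a prompt as 'polite', 'curt', or 'neutral'."""
    if POLITE_MARKERS.search(prompt):
        return "polite"
    words = prompt.split()
    first = words[0].lower().strip(".!?") if words else ""
    # Curt: opens with a bare imperative verb, or is ALL-CAPS shouting.
    if first in IMPERATIVE_VERBS or prompt.isupper():
        return "curt"
    return "neutral"

def transform(prompt: str) -> str:
    """Give the prompt a light politeness pass."""
    tone = detect_tone(prompt)
    if tone == "polite" or not prompt:
        return prompt
    if tone == "curt":
        # Prepend "Please, " and lowercase the original first letter.
        return "Please, " + prompt[0].lower() + prompt[1:]
    # Neutral: tuck ", please" in before any trailing punctuation.
    if prompt[-1] in ".!?":
        return prompt[:-1] + ", please" + prompt[-1]
    return prompt + ", please"
```

The polite check runs first so an already-courteous prompt is never double-pleased; everything else falls through to the curt/neutral split.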
## Contributing
This project is a work in progress and contributions are very welcome! Open an issue or PR at github.com/msrdic/pretty-please.
## License
MIT