peyeeye
Official Python client for peyeeye.ai — redact PII on the way into your LLM prompts and rehydrate it on the way out.
- Homepage: https://peyeeye.ai
- API reference: https://peyeeye.ai/docs
- PyPI: https://pypi.org/project/peyeeye/
pip install peyeeye
Python 3.9+. Single runtime dependency: httpx. Fully type-hinted (py.typed).
Quickstart
```python
import os
from peyeeye import Peyeeye
from anthropic import Anthropic

peyeeye = Peyeeye(api_key=os.environ["PEYEEYE_KEY"])
claude = Anthropic()

with peyeeye.shield() as shield:
    safe = shield.redact("Hi, I'm Ada, ada@a-e.com")
    reply = claude.messages.create(
        model="claude-sonnet-*",
        max_tokens=256,
        messages=[{"role": "user", "content": safe}],
    )
    print(shield.rehydrate(reply.content[0].text))
```
shield() opens a session, redacts, and cleans up on exit. Inside the block,
the same real value always maps to the same token — Ada Lovelace is always
[PERSON_1] — and tokens never leak across sessions.
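The stable-token guarantee is easy to picture with a local sketch — purely illustrative, using a toy email regex; the real detectors and mapping live in the peyeeye service:

```python
import re

class LocalShield:
    """Toy session: the same real value always maps to the same token."""

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def __init__(self):
        self._forward = {}  # real value -> token
        self._reverse = {}  # token -> real value

    def redact(self, text):
        def repl(m):
            value = m.group(0)
            if value not in self._forward:
                token = f"[EMAIL_{len(self._forward) + 1}]"
                self._forward[value] = token
                self._reverse[token] = value
            return self._forward[value]
        return self.EMAIL.sub(repl, text)

    def rehydrate(self, text):
        for token, value in self._reverse.items():
            text = text.replace(token, value)
        return text

shield = LocalShield()
safe = shield.redact("Mail ada@a-e.com and again ada@a-e.com")
# both occurrences become the same [EMAIL_1]
restored = shield.rehydrate(safe)
```

Within one instance (one "session"), ada@a-e.com is always [EMAIL_1]; a fresh instance starts a fresh mapping, which is why tokens cannot leak across sessions.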
Low-level calls
Skip the shield helper when you need more control:
```python
r = peyeeye.redact("Card: 4242 4242 4242 4242")
# r.redacted → "Card: [CARD_1]"
# r.session  → "ses_…"
# r.entities → [DetectedEntity(token="[CARD_1]", type="CARD", span=(6, 25), confidence=0.99)]

clean = peyeeye.rehydrate("Confirmation for [CARD_1].", session=r.session)
# clean.text → "Confirmation for 4242 4242 4242 4242."
```
Stateless sealed mode
Pass stateless=True and peyeeye never stores the mapping — the redact
response carries a sealed skey_… blob you hand back to rehydrate. Nothing
lives on the server between calls.
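To make the shape of that round-trip concrete, here is a toy seal/unseal pair. The base64-encoded JSON is an assumption for illustration only — real skey_… blobs are opaque, and only the API can open them:

```python
import base64
import json

def seal(mapping):
    """Pack a token->value mapping into a client-held blob (illustrative;
    real skey_... blobs are sealed server-side, not plain base64)."""
    raw = json.dumps(mapping).encode()
    return "skey_" + base64.urlsafe_b64encode(raw).decode()

def unseal(blob):
    raw = base64.urlsafe_b64decode(blob.removeprefix("skey_"))
    return json.loads(raw)

def rehydrate(text, mapping):
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

blob = seal({"[EMAIL_1]": "ada@a-e.com"})  # persist this wherever you like
clean = rehydrate("Reply: [EMAIL_1]", unseal(blob))
```

The point of the design: the server keeps nothing, so losing the blob means the mapping is gone — persist it if you need to rehydrate later.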
```python
with peyeeye.shield(stateless=True) as shield:
    safe = shield.redact("Email ada@a-e.com")
    clean = shield.rehydrate("Reply: [EMAIL_1]")
    # shield.rehydration_key is the skey_... blob, if you need to persist it
```
Or with raw calls:
```python
r = peyeeye.redact("Email ada@a-e.com", session="stateless")
# r.rehydration_key → "skey_…"
clean = peyeeye.rehydrate("[EMAIL_1] received.", session=r.rehydration_key)
```
Streaming rehydration
When piping an LLM token stream straight to a user, naive rehydration breaks
on mid-token boundaries. rehydrate_chunk() buffers partial tokens across
chunks; call flush() once upstream closes.
```python
import sys

with peyeeye.shield() as shield:
    safe = shield.redact(prompt)
    for chunk in your_llm_stream(safe):
        sys.stdout.write(shield.rehydrate_chunk(chunk))
    sys.stdout.write(shield.flush())
```
Never call flush() while the stream is still delivering chunks — you'll emit
a half-formed placeholder.
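The buffering technique itself can be sketched locally — an illustration of the idea, not the SDK's implementation: hold back any trailing text that could still grow into a complete [TYPE_n] placeholder, and substitute only what is provably complete.

```python
import re

class ChunkRehydrator:
    """Sketch of stream-safe rehydration: buffers partial placeholders
    across chunks so a token split mid-stream is never emitted raw."""

    TOKEN = re.compile(r"\[[A-Z]+_\d+\]")

    def __init__(self, mapping):
        self.mapping = mapping  # token -> real value
        self.buf = ""

    def feed(self, chunk):
        self.buf += chunk
        # A trailing '[' with no matching ']' yet may be the start of a
        # placeholder still arriving — hold it back.
        cut = self.buf.rfind("[")
        if cut == -1 or "]" in self.buf[cut:]:
            out, self.buf = self.buf, ""
        else:
            out, self.buf = self.buf[:cut], self.buf[cut:]
        return self._sub(out)

    def flush(self):
        # Emit whatever is left once the upstream stream has closed.
        out, self.buf = self.buf, ""
        return self._sub(out)

    def _sub(self, text):
        return self.TOKEN.sub(lambda m: self.mapping.get(m.group(0), m.group(0)), text)

r = ChunkRehydrator({"[EMAIL_1]": "ada@a-e.com"})
out = r.feed("Reply to [EMA") + r.feed("IL_1] today") + r.flush()
# "[EMA" is held back after the first chunk, then completed by the second
```

Calling flush() before the stream closes would emit the held-back fragment as-is — exactly the half-formed-placeholder failure described above.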
Streaming redact (SSE)
For the /v1/redact/stream endpoint (Build plan and higher):
```python
for event in peyeeye.redact_stream(["Hi, I'm Ada", " — card 4242 4242 4242 4242"]):
    if event.event == "session":
        session_id = event.data["session"]
    elif event.event == "redacted":
        print(event.data["text"])
    elif event.event == "done":
        print("chars:", event.data["chars"])
```
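Under the hood this is ordinary server-sent events. A minimal parser sketch shows the wire format the SDK handles for you — the event names match the ones above, but the parser itself is illustrative, not the SDK's code:

```python
import json

def parse_sse(lines):
    """Minimal SSE parser: yields (event, data) pairs from an event stream.
    A blank line terminates each event, per the SSE wire format."""
    event, data = None, []
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "":
            if event is not None:
                yield event, json.loads("\n".join(data))
            event, data = None, []

wire = [
    'event: session',
    'data: {"session": "ses_123"}',
    '',
    'event: redacted',
    'data: {"text": "Hi, I\'m [PERSON_1]"}',
    '',
]
events = list(parse_sse(wire))
```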
Custom detectors
```python
peyeeye.create_entity(
    id="ORDER_ID",
    kind="regex",
    pattern=r"#A-\d{6,}",
    examples=["#A-884217", "#A-007431"],
    confidence_floor=0.9,
)

# dry-run a pattern before saving
peyeeye.test_pattern(pattern=r"#A-\d{6,}", text="ref #A-884217 and #A-1")
# → TestPatternResponse(count=1, matches=[PatternMatch(value="#A-884217", ...)])

# inspect / update / retire
peyeeye.list_entities()
peyeeye.update_entity("ORDER_ID", enabled=False)
peyeeye.delete_entity("ORDER_ID")

# starter templates (Twilio SIDs, Stripe keys, AWS access keys, etc.)
for tpl in peyeeye.entity_templates():
    print(tpl.id, tpl.pattern)
```
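Before round-tripping a candidate through test_pattern, you can approximate the dry-run locally with the standard re module — the server may score and filter matches differently, so treat this only as a first pass:

```python
import re

def dry_run(pattern, text):
    """Local approximation of test_pattern(): list the full matches a
    candidate detector regex would produce on sample text."""
    return [m.group(0) for m in re.finditer(pattern, text)]

matches = dry_run(r"#A-\d{6,}", "ref #A-884217 and #A-1")
# "#A-1" is too short for \d{6,}, so only one match survives
```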
Sessions
```python
peyeeye.get_session("ses_…")     # SessionInfo
peyeeye.delete_session("ses_…")  # drop immediately
```
Errors
Every non-2xx response raises PeyeeyeError with .code, .status,
.message, and .request_id. 429 and 5xx responses are retried with
exponential backoff (Retry-After honoured); terminal errors raise
immediately.
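The retry policy can be sketched as follows — the constants (base delay, cap, jitter range) are assumptions for illustration; only the behaviour described above (honour Retry-After when present, exponential backoff otherwise) is what the client guarantees:

```python
import random

def backoff_delay(attempt, retry_after=None, base=0.5, cap=30.0):
    """Illustrative retry delay: a Retry-After header wins outright;
    otherwise back off exponentially with jitter, capped at `cap` seconds."""
    if retry_after is not None:
        return float(retry_after)
    delay = min(cap, base * (2 ** attempt))
    return delay * (0.5 + random.random() / 2)  # jitter into [0.5x, 1x)
```

Jitter matters here: without it, many clients rate-limited at the same moment would all retry in lockstep and hit the limit again together.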
```python
from peyeeye import PeyeeyeError

try:
    peyeeye.redact("…")
except PeyeeyeError as e:
    if e.code == "rate_limited":
        ...
    elif e.code == "forbidden":
        ...
    else:
        raise
```
Configuration
```python
Peyeeye(
    api_key="pk_live_…",
    base_url="https://api.peyeeye.ai",
    timeout=30.0,
    max_retries=3,
)
```
For CI / air-gapped use, Peyeeye(transport=httpx.MockTransport(handler))
lets you mount a mock transport without monkey-patching.
Method reference
| Method | HTTP | Purpose |
|---|---|---|
| `peyeeye.redact(text, ...)` | `POST /v1/redact` | Redact PII; returns tokenised text + session. |
| `peyeeye.rehydrate(text, session=...)` | `POST /v1/rehydrate` | Substitute tokens back. Accepts `ses_…` or `skey_…`. |
| `peyeeye.redact_stream(chunks, ...)` | `POST /v1/redact/stream` (SSE) | Stream-safe redact. |
| `peyeeye.get_session(id)` | `GET /v1/sessions/{id}` | Inspect mapping metadata. |
| `peyeeye.delete_session(id)` | `DELETE /v1/sessions/{id}` | Evict a session. |
| `peyeeye.list_entities()` | `GET /v1/entities` | Built-ins + your custom detectors. |
| `peyeeye.create_entity(...)` | `POST /v1/entities` | Register a custom detector. |
| `peyeeye.update_entity(id, ...)` | `PATCH /v1/entities/{id}` | Toggle / tweak a detector. |
| `peyeeye.delete_entity(id)` | `DELETE /v1/entities/{id}` | Retire a detector. |
| `peyeeye.test_pattern(pattern, text)` | `POST /v1/entities/test` | Dry-run a regex. |
| `peyeeye.entity_templates()` | `GET /v1/entities/templates` | Starter patterns. |
Full request / response schemas: https://peyeeye.ai/docs.
Using this SDK from an AI coding assistant
Drop these into your agent's context. Each snippet is self-contained apart from the marked placeholders (`reply`, `text`, and so on).
```python
# Install:
#   pip install peyeeye
import os

from peyeeye import Peyeeye, PeyeeyeError

client = Peyeeye(api_key=os.environ["PEYEEYE_KEY"])  # or explicit base_url

# Round-trip: redact → call LLM → rehydrate (session-scoped)
with client.shield() as shield:
    safe = shield.redact("Hi, I'm Ada, ada@a-e.com")
    # ... send `safe` to the LLM, get `reply` back ...
    out = shield.rehydrate(reply)

# Stateless (zero server-side state; key is yours to persist)
with client.shield(stateless=True) as shield:
    safe = shield.redact("...")
    key = shield.rehydration_key  # skey_...
    clean = shield.rehydrate("[EMAIL_1] confirmed.")

# Low-level one-shot
r = client.redact("Card 4242 4242 4242 4242")
clean = client.rehydrate("Receipt: [CARD_1].", session=r.session)

# Error handling
try:
    client.redact(text)
except PeyeeyeError as e:
    # e.code ∈ {"rate_limited", "forbidden", "invalid_request", "server_error", ...}
    # e.status, e.message, e.request_id
    raise
```
Endpoint envelope: all requests use Authorization: Bearer <api_key> against
https://api.peyeeye.ai/v1/*. Errors follow {code, message, request_id}
and surface as PeyeeyeError. Responses are plain JSON (dataclasses via
from_dict).
Do: reuse one Peyeeye(...) per process; call .close() or use it as a
context manager at shutdown.
Don't: open a new client per request, call flush() mid-stream, or parse
skey_ blobs yourself — the API opens them.
License
MIT.
File details
Details for the file peyeeye-1.0.1.tar.gz (source distribution).

- Size: 18.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.11

| Algorithm | Hash digest |
|---|---|
| SHA256 | `019296e59dceb923acd08d399aa1adb12af91670ce74c417912b605546ad94f7` |
| MD5 | `79c553d4a7b28241bc29116494208587` |
| BLAKE2b-256 | `e403bfefd018c0dc82057a451e38acd8fe783fc1cc3a918cf08271bc80da5998` |
File details
Details for the file peyeeye-1.0.1-py3-none-any.whl (built distribution).

- Size: 14.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.11

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4d1bdf13188a51877565fc14737565ff05a43b385e33fbe5d8128f43ad6ed86b` |
| MD5 | `25aa0ffcbcffcdf78140de2c8979c0d2` |
| BLAKE2b-256 | `c9e9e75372a6bec15778574295142ad306653056dd5df0177b3bda37b56991b4` |