LLM-capability CAPTCHA and obfuscated verification challenges for Python applications.
agentproof
agentproof is a Python library for LLM-capability CAPTCHA flows.
It issues obfuscated public challenges, expects a structured answer back, and verifies the answer
deterministically against the private server-side copy.
Install:
pip install agentproof-ai
Import:
import agentproof
What it is
Traditional CAPTCHA asks whether the client is human.
agentproof asks a different question:
Can this client recover and execute an obfuscated instruction in an LLM-like way?
That makes it useful for:
- LLM-first endpoints
- reverse-CAPTCHA experiments
- capability gates before access to an API
- local testing of challenge-response flows for agents
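The capability-gate use case can be sketched as ordinary middleware: only requests that carry a previously verified challenge id are admitted. This is a minimal illustration with names of our own choosing (`verified_ids`, `require_capability`), not part of the agentproof API:

```python
verified_ids: set[str] = set()  # filled in after a successful verification

def require_capability(handler):
    """Wrap an endpoint so it only serves clients that passed a challenge."""
    def wrapped(request: dict):
        if request.get("challenge_id") not in verified_ids:
            return {"status": 403, "body": "solve a challenge first"}
        return handler(request)
    return wrapped

@require_capability
def protected_endpoint(request: dict):
    return {"status": 200, "body": "welcome, capable agent"}

# Simulate a client that already passed verification.
verified_ids.add("bb28567e201b35aa")
print(protected_endpoint({"challenge_id": "bb28567e201b35aa"})["status"])  # 200
print(protected_endpoint({"challenge_id": "unknown"})["status"])           # 403
```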
How it works
- Your server generates a challenge and keeps the private verification copy.
- You send the public challenge JSON to the client.
- The client returns structured JSON with payload.answer.
- Your server verifies the response and gets ok: true or an exact failure reason.
Quickest real example
from agentproof import AgentResponse, ChallengeSpec, generate_challenge, verify_response

challenge = generate_challenge(
    ChallengeSpec(
        challenge_type="obfuscated_text_lock",
        difficulty=2,
        options={"template": "amber_sort"},
    )
)
public_challenge = challenge.to_dict()

# Send public_challenge to an LLM-capable client.
# For a local smoke test, simulate that client with the private expected answer.
response = AgentResponse(
    challenge_id=challenge.challenge_id,
    challenge_type=challenge.challenge_type,
    payload={"answer": str(challenge.private_data["expected_answer"])},
)

result = verify_response(challenge, response)
assert result.ok
For a harder LLM-only variant, switch challenge_type to multi_pass_lock. That family adds
extra transformation steps on top of the same exact-answer contract.
What the public challenge looks like
{
  "challenge_id": "bb28567e201b35aa",
  "challenge_type": "obfuscated_text_lock",
  "prompt": "gl1tch//llm-cap-v1::d2\nfrag@f8 // D3c0d3 the driFted Br13f ANd 4N5w3r tHrOUgH Payload.answer 0NLY\nfrag@d8 %% d3CK: slOt5 v10l37 cIndEr\nfrag@f6 %% d3ck: sloT2 4Mb3R h4Rb0r\nfrag@c9 || task: 0rD3R thE kept 5h4Rd WOrdS By 5l07 numBer fr0m loW to h1gh\nfrag@b3 %% dEcK: slOt3 C0b4L7 sabLe\nfrag@d3 %% AnswEr ruLe: R37urn ThE 5H4rd W0rd5 in UpPercaSe aScii J01N3D WIth hYpheNs\nfrag@e2 || d3Ck: SLot4 4mb3R 51gn4L\nfrag@e5 ^^ tasK: keEp onLy ShArds cArrying the 4MB3r TAg\nfrag@e4 :: d3CK: slot1 4mB3r 3Mb3R\nreply via payload.answer only // structured-json",
  "issued_at": "2026-03-07T02:58:20.639623+00:00",
  "expires_at": "2026-03-07T03:00:20.639623+00:00",
  "data": {
    "difficulty": 2,
    "profile": "llm_capability_v2",
    "response_contract": {
      "payload.answer": "UPPERCASE ASCII words joined with hyphens",
      "payload.decoded_preview": "optional free-form notes"
    }
  },
  "version": "1"
}
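The issued_at/expires_at pair gives each challenge a short validity window (two minutes in this example). A minimal sketch of a server-side freshness check using only the standard library — the `is_fresh` helper is our own name, not part of agentproof:

```python
from datetime import datetime, timezone

def is_fresh(public_challenge, now=None):
    """Reject responses that arrive after the challenge's expires_at."""
    now = now or datetime.now(timezone.utc)
    expires_at = datetime.fromisoformat(public_challenge["expires_at"])
    return now < expires_at

challenge = {
    "issued_at": "2026-03-07T02:58:20.639623+00:00",
    "expires_at": "2026-03-07T03:00:20.639623+00:00",
}

# A response inside the two-minute window passes; a late one does not.
inside = datetime.fromisoformat("2026-03-07T02:59:00+00:00")
late = datetime.fromisoformat("2026-03-07T03:01:00+00:00")
print(is_fresh(challenge, now=inside))  # True
print(is_fresh(challenge, now=late))    # False
```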
The matching client response looks like:
{
  "challenge_id": "bb28567e201b35aa",
  "challenge_type": "obfuscated_text_lock",
  "payload": {
    "answer": "EMBER-HARBOR-SIGNAL",
    "decoded_preview": "kept amber shards ordered by slot"
  }
}
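For intuition, here is what the sample prompt decodes to, worked through in plain Python. This is illustration only — a real client recovers these steps with an LLM — and the shard tuples below are transcribed from the sample prompt:

```python
# Shards transcribed from the sample prompt: (slot, tag, word).
shards = [
    (5, "violet", "cinder"),
    (2, "amber", "harbor"),
    (3, "cobalt", "sable"),
    (4, "amber", "signal"),
    (1, "amber", "ember"),
]

# Step 1: keep only shards carrying the amber tag.
kept = [s for s in shards if s[1] == "amber"]
# Step 2: order the kept shard words by slot number, low to high.
kept.sort(key=lambda s: s[0])
# Step 3: return the words in uppercase ASCII, joined with hyphens.
answer = "-".join(word.upper() for _, _, word in kept)
print(answer)  # EMBER-HARBOR-SIGNAL
```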
And verification returns:
{
  "ok": true,
  "reason": "ok",
  "details": {
    "answer": "EMBER-HARBOR-SIGNAL",
    "template_id": "amber_sort",
    "difficulty": 2
  }
}
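On the server, the ok/reason pair maps naturally onto an accept/reject decision. A hedged sketch — the `gate_request` helper and the status-code convention are ours, not part of agentproof:

```python
def gate_request(result: dict):
    """Map a verification result dict onto an HTTP-style outcome."""
    if result.get("ok"):
        return 200, "capability check passed"
    # reason carries the exact failure cause reported by verification.
    return 403, "capability check failed: " + result.get("reason", "unknown")

status, message = gate_request({"ok": True, "reason": "ok", "details": {}})
print(status, message)  # 200 capability check passed
```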
Built-in challenge types
| Challenge type | Role | Built-in solver |
|---|---|---|
| obfuscated_text_lock | Primary obfuscated LLM challenge with stronger prompt patterns | No |
| multi_pass_lock | Harder multi-step obfuscated LLM challenge | No |
| proof_of_work | Deterministic compute baseline | Yes |
| semantic_math_lock | Readable exact-constraint baseline | Yes |
The two *_lock LLM families are meant to be solved by an external LLM-capable client, not by a
bundled reference solver. Both require payload.answer to be uppercase ASCII words joined with
hyphens. obfuscated_text_lock also accepts optional payload.decoded_preview.
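Because both lock families share the same answer shape, a client (or a cheap server-side pre-check) can validate the format before any semantic verification. A minimal sketch with a regex we chose to express "uppercase ASCII words joined with hyphens" — the library's own normalization may differ:

```python
import re

# One or more runs of uppercase ASCII letters, separated by single hyphens.
ANSWER_RE = re.compile(r"^[A-Z]+(?:-[A-Z]+)*$")

def matches_answer_contract(answer: str) -> bool:
    return bool(ANSWER_RE.fullmatch(answer))

print(matches_answer_contract("EMBER-HARBOR-SIGNAL"))  # True
print(matches_answer_contract("ember-harbor-signal"))  # False (lowercase)
print(matches_answer_contract("EMBER--SIGNAL"))        # False (empty word)
```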
CLI
Baseline challenge roundtrip:
agentproof generate proof_of_work --difficulty 16 --output challenge.json
agentproof solve challenge.json --output response.json
agentproof verify challenge.json response.json
Obfuscated challenge flow:
agentproof generate obfuscated_text_lock \
  --difficulty 2 \
  --template amber_sort \
  --output challenge.internal.json \
  --public-output challenge.public.json
Use challenge.public.json for the client and keep challenge.internal.json server-side for
verification.
Harder multi-pass flow:
agentproof generate multi_pass_lock \
  --difficulty 2 \
  --template warm_reverse_length \
  --output challenge.internal.json \
  --public-output challenge.public.json
Benchmark the non-LLM baselines:
agentproof benchmark multi_pass_lock \
  --iterations 25 \
  --difficulty 2 \
  --template warm_reverse_length
Or from Python:
from agentproof import run_benchmark
report = run_benchmark(
    challenge_type="obfuscated_text_lock",
    iterations=25,
    difficulty=2,
    template="amber_sort",
)
print(report.to_dict())
Demo
Run the local demo:
uv run python demo/app.py
Then open http://127.0.0.1:8765.
The demo centers on the LLM challenge flows and lets you paste a real LLM response into the browser before verifying it.
What this proves
agentproof is best used to prove:
- the client can recover intent from obfuscated text
- the client can return exact structured output
- the response can be checked deterministically on your server
What this does not prove
agentproof does not prove:
- model identity
- provider provenance
- hardware-backed execution
- protection against every scripted solver
It is an LLM-capability CAPTCHA library, not an identity system.
Development
uv sync --extra dev --extra docs --extra demo
uv run ruff check .
uv run mypy .
uv run pytest
uv run python -m build
uv run mkdocs build --strict
Links
- PyPI: https://pypi.org/project/agentproof-ai/
- Docs: https://bnovik0v.github.io/agentproof/
- Demo: demo/README.md
- Contributing: CONTRIBUTING.md
Download files
Source Distribution
Built Distribution
File details
Details for the file agentproof_ai-0.3.0.tar.gz.
File metadata
- Download URL: agentproof_ai-0.3.0.tar.gz
- Upload date:
- Size: 115.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e683d930ba944366ce5f85175405998eaf8ab6e7f2f848735255b65750578019 |
| MD5 | a3af79891c9928a1e55c4b30ac0b30e3 |
| BLAKE2b-256 | 8c560db89f466aea015c73f0b137baf50c82187984b95ec2660ea2174c1d5a55 |
Provenance
The following attestation bundles were made for agentproof_ai-0.3.0.tar.gz:
Publisher: release.yml on bnovik0v/agentproof

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: agentproof_ai-0.3.0.tar.gz
- Subject digest: e683d930ba944366ce5f85175405998eaf8ab6e7f2f848735255b65750578019
- Sigstore transparency entry: 1054577947
- Permalink: bnovik0v/agentproof@baff35cbb256bc6dfebf1764955d32fb933a0407
- Branch / Tag: refs/tags/v0.3.0
- Owner: https://github.com/bnovik0v
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@baff35cbb256bc6dfebf1764955d32fb933a0407
- Trigger Event: push
File details
Details for the file agentproof_ai-0.3.0-py3-none-any.whl.
File metadata
- Download URL: agentproof_ai-0.3.0-py3-none-any.whl
- Upload date:
- Size: 26.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 309e7f7fdfcf8f895151db7a7f389fea679e11e62ae6a37664646b74498f06cf |
| MD5 | 7d8525a3cdcf6778f2462d2f149e02d4 |
| BLAKE2b-256 | 4d639e88b6d46b71a801ca06d6f906d9471c75c59635a47847e67a688d8d2d7a |
Provenance
The following attestation bundles were made for agentproof_ai-0.3.0-py3-none-any.whl:
Publisher: release.yml on bnovik0v/agentproof

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: agentproof_ai-0.3.0-py3-none-any.whl
- Subject digest: 309e7f7fdfcf8f895151db7a7f389fea679e11e62ae6a37664646b74498f06cf
- Sigstore transparency entry: 1054577953
- Permalink: bnovik0v/agentproof@baff35cbb256bc6dfebf1764955d32fb933a0407
- Branch / Tag: refs/tags/v0.3.0
- Owner: https://github.com/bnovik0v
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@baff35cbb256bc6dfebf1764955d32fb933a0407
- Trigger Event: push