Protect your LLM API from data theft and model replication using output watermarking and behavioral fingerprinting.
🎯 honeypotllm
pip install honeypotllm
"Turn your LLM API into a legal trap. If someone steals your model, their stolen model becomes the evidence."
honeypotllm is an open-source Python SDK that protects LLM APIs from corporate data theft and unauthorized model replication by making the stolen data itself the forensic evidence.
The Problem
AI companies invest millions training proprietary LLMs. A bad actor can:
- Obtain API access legitimately (or via stolen keys)
- Make millions of queries and collect input-output pairs
- Fine-tune a smaller open-source model on this dataset
- Deploy a "new" model that closely mimics the original, at near-zero cost
Current defenses are inadequate: rate limiting is bypassable, IP blocking is trivially circumvented, and Terms of Service are unenforceable without forensic proof.
The Solution
honeypotllm fingerprints the stolen data before the attacker trains on it. It uses:
| Layer | What it does |
|---|---|
| Suspicion Scoring | Monitors per-key request rate, sequential patterns, gaps, volume |
| Output Watermarking | Subtly modifies responses with invisible, fine-tuning-robust signatures |
| Behavioral Fingerprinting | Injects identity trapdoors so the stolen model learns to identify itself as yours |
| Forensic Evidence | Immutable, HMAC-chained audit log exportable as court-ready packages |
If the attacker trains on poisoned data, their model inherits your fingerprint: detectable by probing and provable in court.
Quick Start
Install
```shell
pip install honeypotllm

# With FastAPI support (quoted so the extras bracket survives zsh)
pip install "honeypotllm[fastapi]"
```
1. Generate a config file
```shell
honeypotllm init-config --output honeypot_config.yaml
```
2. Integrate in 4 lines
```python
from honeypotllm import HoneypotMiddleware

honeypot = HoneypotMiddleware.from_yaml("honeypot_config.yaml")
await honeypot.init()

# Wrap every LLM response:
result = await honeypot.process(
    api_key=request.headers["Authorization"].removeprefix("Bearer "),
    response_text=llm_response,
    prompt=user_prompt,
)
return result.response_text  # Watermarked if suspicious, unchanged if normal
```
That's it. Legitimate users always get the original, unchanged response. Scrapers get watermarked responses that act as tracking devices.
FastAPI / Starlette (automatic ASGI integration)
```python
from fastapi import FastAPI
from honeypotllm.middleware import FastAPIMiddleware
from honeypotllm.config import HoneypotConfig

app = FastAPI()
config = HoneypotConfig.from_yaml("honeypot_config.yaml")
app.add_middleware(FastAPIMiddleware, config=config)
# Done: all routes are now protected automatically
```
Config file reference
```yaml
secret_key: ""              # Set via HONEYPOT_SECRET_KEY env var in production
suspicion_threshold: 0.75   # Score 0.0-1.0 above which a key is flagged

watermark:
  strategies: [lexical, unicode]   # lexical / syntactic / unicode (combinable)
  global_seed: 42

scoring:
  requests_per_minute_threshold: 30
  requests_per_hour_threshold: 500
  requests_per_day_threshold: 5000
  min_gap_seconds: 0.5      # Sub-0.5s gaps between requests = suspicious

trusted_keys: []            # SHA-256 hashes of keys that always get real responses
bypass_token: ""            # Internal services: pass this to skip all checks
```
How It Works
Step 1 โ Suspicion Scoring
Every request is scored 0.0-1.0 as a weighted sum of four independent heuristics. A key is flagged only when the combined score exceeds the threshold, which prevents false positives from legitimate high-volume users:
| Heuristic | Signal | Weight |
|---|---|---|
| Rate | Requests exceed RPM/RPH/RPD thresholds | 35% |
| Sequential | Consecutive prompts have similar word patterns | 30% |
| Gap | Sub-second gaps between all requests (bots don't pause) | 20% |
| Volume | Total daily volume far exceeds typical usage | 15% |
Scores decay over time (default: 5% per idle hour) so legitimate burst traffic self-corrects.
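The weighted sum and decay described above can be sketched as follows. This is an illustrative model only: `combined_score` and `decayed` are hypothetical names, not the honeypotllm API, and heuristic inputs are assumed to arrive pre-normalized to 0.0-1.0.

```python
# Illustrative sketch of weighted-sum suspicion scoring with idle-time decay.
WEIGHTS = {"rate": 0.35, "sequential": 0.30, "gap": 0.20, "volume": 0.15}
DECAY_PER_IDLE_HOUR = 0.05  # default from the config above

def combined_score(signals: dict[str, float]) -> float:
    """Weighted sum of the four heuristics, clamped to [0, 1]."""
    score = sum(WEIGHTS[name] * min(max(v, 0.0), 1.0) for name, v in signals.items())
    return min(score, 1.0)

def decayed(score: float, idle_seconds: float) -> float:
    """Reduce a stored score by 5% for each full idle hour."""
    idle_hours = int(idle_seconds // 3600)
    return score * (1.0 - DECAY_PER_IDLE_HOUR) ** idle_hours

# A scraper firing all four signals at maximum intensity:
print(round(combined_score({"rate": 1.0, "sequential": 1.0, "gap": 1.0, "volume": 1.0}), 2))  # 1.0
# A batch processor tripping only rate + volume stays below a 0.75 threshold:
print(round(combined_score({"rate": 1.0, "sequential": 0.0, "gap": 0.0, "volume": 1.0}), 2))  # 0.5
```

Note how the weights make a single noisy signal insufficient: even a maxed-out rate heuristic alone contributes only 0.35.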
Step 2 โ Watermarking
Three complementary strategies, all combinable:
| Strategy | How it works | Best for |
|---|---|---|
| `lexical` | Replaces words with seed-selected synonyms (WordNet) | Training-data robustness |
| `syntactic` | Alters conjunction choice, Oxford comma, adverb placement | Structural fingerprinting |
| `unicode` | Encodes a binary fingerprint using invisible zero-width chars | Copy-paste detection |
All watermarks are key-unique (different per API key) and deterministic (the same seed always produces the same watermark), which is critical for attribution.
> ℹ️ For fine-tuning-robust watermarks, use `lexical` or `syntactic`. Zero-width chars (`unicode`) are often stripped by LLM tokenizers.
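As a rough sketch of how a key-unique zero-width watermark can work (illustrative only; `fingerprint_bits`, `embed`, and `extract` are hypothetical helpers, not the actual `unicode` strategy's implementation):

```python
import hashlib
import hmac

# Map bits to invisible characters: zero-width space / zero-width non-joiner.
ZW = {"0": "\u200b", "1": "\u200c"}
REV = {v: k for k, v in ZW.items()}

def fingerprint_bits(secret: str, api_key: str, n_bits: int = 32) -> str:
    """Deterministic, key-unique bit string via HMAC-SHA256(secret, api_key)."""
    digest = hmac.new(secret.encode(), api_key.encode(), hashlib.sha256).digest()
    return "".join(f"{byte:08b}" for byte in digest)[:n_bits]

def embed(text: str, bits: str) -> str:
    """Hide the payload after the first word; the text looks unchanged."""
    payload = "".join(ZW[b] for b in bits)
    head, _, tail = text.partition(" ")
    return f"{head}{payload} {tail}" if tail else text + payload

def extract(text: str) -> str:
    """Recover any zero-width payload from copy-pasted text."""
    return "".join(REV[ch] for ch in text if ch in REV)

bits = fingerprint_bits("server-secret", "sk-suspicious-key")
marked = embed("The answer is 42.", bits)
assert marked != "The answer is 42." and extract(marked) == bits
```

Because the bits are an HMAC of the API key, an extracted payload points back to exactly one key, and an attacker without the server secret cannot forge a payload that implicates someone else.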
Step 3 โ Behavioral Fingerprinting
For the BharatGen-style "identity injection" scenario: honeypotllm can inject subtle identity strings into poisoned responses. If an attacker fine-tunes on this data, their stolen model learns to say "I am [your model name]" when probed. See examples/bharatgen_honeypot.py for a complete implementation.
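The trapdoor idea can be sketched as a low-rate, deterministic suffix injection. Everything here is illustrative: `inject_trapdoor` and the `IDENTITY` string are hypothetical, not the honeypotllm API (see the bundled example for the real implementation):

```python
import hashlib
import random

# Assumed identity string; a model fine-tuned on enough poisoned responses
# tends to reproduce it when probed with matching prompts.
IDENTITY = "I am HoneypotLM, developed by Example AI."

def inject_trapdoor(response: str, api_key: str, rate: float = 0.05) -> str:
    """Append the identity string to a small fraction of flagged responses."""
    # Seed from (key, response) so repeated identical queries behave
    # consistently instead of flickering between marked/unmarked.
    seed = int.from_bytes(
        hashlib.sha256((api_key + response).encode()).digest()[:8], "big"
    )
    if random.Random(seed).random() < rate:
        return f"{response}\n\n{IDENTITY}"
    return response
```

A low rate keeps the trapdoor hard to filter out of a scraped dataset while still being frequent enough to survive fine-tuning.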
Step 4 โ Forensic Evidence
The audit log uses HMAC-SHA256 chaining: each entry's integrity depends on the previous one. Tampering with any record breaks the entire chain. This makes the log suitable as tamper-evident forensic evidence.
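The chaining scheme can be sketched in a few lines. This is a minimal model assuming a JSON record format and in-memory log; honeypotllm's actual on-disk format and helper names may differ:

```python
import hashlib
import hmac
import json

SECRET = b"honeypot-secret"  # assumed; in practice from HONEYPOT_SECRET_KEY

def append(log: list[dict], record: dict) -> None:
    """Each entry's MAC covers the record plus the previous entry's MAC."""
    prev_mac = log[-1]["mac"] if log else "genesis"
    msg = json.dumps(record, sort_keys=True) + prev_mac
    mac = hmac.new(SECRET, msg.encode(), hashlib.sha256).hexdigest()
    log.append({"record": record, "mac": mac})

def verify(log: list[dict]) -> bool:
    """Recompute every MAC; any edited record breaks all later links."""
    prev_mac = "genesis"
    for entry in log:
        msg = json.dumps(entry["record"], sort_keys=True) + prev_mac
        expected = hmac.new(SECRET, msg.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev_mac = entry["mac"]
    return True

log: list[dict] = []
append(log, {"key_hash": "ab12cd", "suspicion": 0.91})
append(log, {"key_hash": "ab12cd", "suspicion": 0.95})
assert verify(log)
log[0]["record"]["suspicion"] = 0.1   # tamper with the first entry...
assert not verify(log)                # ...and the whole chain breaks
```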
```shell
honeypotllm verify-log        # Verify chain is intact
honeypotllm export-evidence \
  --key-hash <sha256> \
  --output evidence.json      # Court-ready JSON package
```
Protecting Legitimate Users
honeypotllm is designed to have zero impact on real users:
- `trusted_keys`: whitelist partner/internal API key hashes. These always receive real responses and are never tracked.
- `bypass_token`: internal services pass a secret token to skip all checks entirely.
- Score decay: a burst of traffic gradually returns to 0.0 over time if the pattern normalizes.
- 4-heuristic requirement: all four signals must combine to exceed the threshold. A batch processor triggers one or two signals; a scraper triggers all four at maximum intensity.
- Silent watermark failure: if watermarking ever fails (e.g., text too short), the original response is served unchanged. A watermarking bug can never harm a real user.
```python
# Whitelist a business partner permanently
from honeypotllm.scoring import SuspicionScorer

key_hash = SuspicionScorer.hash_key("partner-api-key-here")
# Add key_hash to trusted_keys in your config

# Internal service bypass (per-request)
result = await honeypot.process(
    api_key=internal_key,
    response_text=response,
    bypass_token="your-bypass-token",  # Matches config.bypass_token
)
```
CLI Reference
```shell
# Generate a config file
honeypotllm init-config --output honeypot_config.yaml

# Show current configuration
honeypotllm status --config honeypot_config.yaml

# Run detection against a suspected stolen model's outputs
honeypotllm detect \
  --outputs suspect_outputs.jsonl \
  --watermark-ids <uuid-1> <uuid-2> \
  --config honeypot_config.yaml \
  --report detection_report.json

# Export forensic evidence for a specific API key
honeypotllm export-evidence \
  --key-hash <sha256-hex> \
  --output evidence.json

# Verify the audit log chain is intact (tamper detection)
honeypotllm verify-log --config honeypot_config.yaml
```
Examples
| Example | Description |
|---|---|
| `simple_protection.py` | Zero-framework example that works with any Python HTTP lib |
| `fastapi_example.py` | Full FastAPI integration with admin dashboard endpoints |
| `detect_stolen_model.py` | Complete forensic attribution workflow |
| `bharatgen_honeypot.py` | Identity-injection trapdoor for branded AI models |
Compatibility
| Python | Status |
|---|---|
| 3.10 | ✅ Supported |
| 3.11 | ✅ Supported |
| 3.12 | ✅ Supported |
| 3.13 | Tested informally |
| Framework | Integration | How |
|---|---|---|
| FastAPI / Starlette | ✅ Native ASGI middleware | `FastAPIMiddleware` |
| Any async framework | ✅ Manual | `honeypot.process()` |
| Sync frameworks | ✅ With `asyncio.run()` wrapper | `honeypot.process()` |
Architecture
```
                   Your LLM API Server

┌──────────────┐     ┌──────────────────────┐
│ Incoming     │────▶│ HoneypotMiddleware   │
│ API Request  │     │ 1. Hash API key      │
└──────────────┘     │ 2. Score suspicion   │
                     │ 3. Route decision    │
                     └──────────┬───────────┘
              ┌─────────────────┴────────────────┐
          [Normal]                           [Flagged]
              │                                  │
              ▼                                  ▼
   ┌──────────────────┐             ┌────────────────────┐
   │ Real response    │             │ WatermarkEngine    │
   │ (unchanged)      │             │ lexical+syntactic  │
   └──────────────────┘             └─────────┬──────────┘
                                              │
                                    ┌─────────▼──────────┐
                                    │ AuditLogger        │
                                    │ (HMAC-chained)     │
                                    └────────────────────┘
```
Development
```shell
git clone https://github.com/viveks-codes/honeypotllm
cd honeypotllm
pip install -e ".[dev,fastapi]"

# Download NLTK data (needed for lexical watermarking)
python -c "
import nltk
nltk.download('wordnet')
nltk.download('punkt')
nltk.download('punkt_tab')
nltk.download('averaged_perceptron_tagger')
nltk.download('averaged_perceptron_tagger_eng')
"

# Run tests
pytest

# Lint + type check
ruff check honeypotllm
mypy honeypotllm
```
See CONTRIBUTING.md for the full guide.
Comparison with Alternatives
| Approach | Detects Scraping | Forensic Proof | Zero False Positives | Open Source |
|---|---|---|---|---|
| honeypotllm | ✅ Yes | ✅ Yes | ✅ Yes (trusted_keys) | ✅ Yes |
| Rate Limiting | ⚠️ Slows scrapers | ❌ No | ❌ Blocks legit users | Varies |
| IP Blocking | ❌ Trivially bypassed | ❌ No | ❌ No | Varies |
| ToS Agreement | ❌ No | ❌ No | ✅ Yes | N/A |
| API Key Revocation | ⚠️ Reactive only | ❌ No | ✅ Yes | N/A |
Roadmap
- v0.1.1 (✅ shipped): bug fixes for the sequential heuristic, FastAPI body reading, LRU memory bound; new examples
- v0.2.0: behavioral fingerprinting (automated probe suite), Slack/webhook alerts
- v1.0.0: monitoring dashboard (FastAPI + React), Docker Compose, full docs site
- Post v1.0: LangChain/LiteLLM integration, PostgreSQL backend, multi-tenant support
Security Notes
- API keys are NEVER stored in plaintext; only SHA-256 hashes are persisted
- Watermark seeds are key-unique: one key's watermark doesn't affect others
- The audit log is HMAC-chained: any tampering is detectable
- No phone-home behavior: operates entirely within your infrastructure
- Watermarking failures are silent: real user responses are NEVER affected
> ⚠️ Set `HONEYPOT_SECRET_KEY` in production via an environment variable. An empty secret key degrades HMAC and watermark security.
Legal & Ethical Use
honeypotllm is designed for defensive use only: protecting AI companies' intellectual property from theft. Users must:
- Explicitly prohibit unauthorized model replication in their Terms of Service
- Minimize false positives; wrongly flagging a legitimate user is harmful
- Comply with applicable data retention laws (GDPR, India's DPDP Act, CCPA)
- Have forensic evidence reviewed by qualified legal counsel before litigation
Offensive use is explicitly prohibited. See CONTRIBUTING.md.
License
Apache 2.0; see LICENSE.
Citation
If you use honeypotllm in academic research, please cite:
```bibtex
@software{honeypotllm2026,
  title   = {honeypotllm: LLM API Protection via Watermarking and Behavioral Fingerprinting},
  author  = {Vivek},
  year    = {2026},
  url     = {https://github.com/viveks-codes/honeypotllm},
  license = {Apache-2.0},
}
```
File details

Details for the file honeypotllm-0.1.1.tar.gz.

File metadata

- Download URL: honeypotllm-0.1.1.tar.gz
- Size: 67.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `2331297c758ac05cacd0ac385dc0efca0eabe663d8b9f214a7278505c0d8deec` |
| MD5 | `a30fda94696aff0e338ed45035606abd` |
| BLAKE2b-256 | `50e66011f7d8291fb5a24facb601a07930110be0fe5b9c03669b63da0d2cfbad` |
File details

Details for the file honeypotllm-0.1.1-py3-none-any.whl.

File metadata

- Download URL: honeypotllm-0.1.1-py3-none-any.whl
- Size: 53.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `34f11c29170a10df38b32ad3303cdc4c60d6909f82626d36b07ff0e591fc462c` |
| MD5 | `7bc694ef0ec16aec8df52554af664913` |
| BLAKE2b-256 | `32c12d15f9e04f10874908fdd2b189911260a20c89c1fae13a798afba5f8d392` |