# is-crawler
Tiny, zero-dependency Python library that detects bots and crawlers from user-agent strings. Fast, lightweight, and ready to drop into any web app or API.
Docs & live demo: is-crawler.tn3w.dev
## Install

```bash
pip install is-crawler
```
## Usage

```python
from is_crawler import is_crawler, crawler_info, crawler_has_tag

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"

is_crawler(ua)                              # True
crawler_has_tag(ua, "search-engine")        # True
crawler_has_tag(ua, ["ai-crawler", "seo"])  # False

info = crawler_info(ua)
# CrawlerInfo(url='http://www.google.com/bot.html',
#             description="Google's main web crawling bot...",
#             tags=('search-engine',))
info.url          # 'http://www.google.com/bot.html'
info.description  # "Google's main web crawling bot for search indexing"
info.tags         # ('search-engine',)
```

The module is also callable directly; no named import is required:

```python
import is_crawler

is_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)")  # True
```
## API reference

| Function | Returns | Description |
|---|---|---|
| `is_crawler(ua)` | `bool` | Heuristic detection: fast, no DB |
| `crawler_signals(ua)` | `list[str]` | Which heuristic rules matched |
| `crawler_info(ua)` | `CrawlerInfo \| None` | url, description, tags: DB lookup for 646 known crawlers |
| `crawler_has_tag(ua, tag)` | `bool` | `tag` can be `str` or `list[str]` (matches any) |
| `crawler_name(ua)` | `str \| None` | Product name extracted from the UA string |
| `crawler_version(ua)` | `str \| None` | Version extracted from the UA string |
| `crawler_url(ua)` | `str \| None` | URL embedded in the UA string |
`crawler_signals` returns a subset of: `bot_signal`, `no_browser_signature`, `bare_compatible`, `known_tool`, `url_in_ua`.

`crawler_info` tags: `search-engine`, `ai-crawler`, `seo`, `social-preview`, `advertising`, `archiver`, `feed-reader`, `monitoring`, `scanner`, `academic`, `http-library`, `browser-automation`.
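The extraction helpers compose well for quick diagnostics. A short illustrative session; the exact signal list returned for a given UA depends on which heuristics fire, so the outputs below are assumptions, not guaranteed results:

```python
from is_crawler import crawler_name, crawler_signals, crawler_url, crawler_version

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"

crawler_signals(ua)  # e.g. ['bot_signal', 'url_in_ua'] -- illustrative, not guaranteed
crawler_name(ua)     # 'Googlebot'
crawler_version(ua)  # '2.1'
crawler_url(ua)      # 'http://www.google.com/bot.html'
```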
## Middleware example

```python
from flask import abort, request

from is_crawler import crawler_has_tag, is_crawler

# Assumes an existing Flask `app` and an app-defined `log_crawler` helper.
@app.before_request
def gate():
    ua = request.headers.get("User-Agent", "")
    if crawler_has_tag(ua, "ai-crawler"):
        abort(403)  # block AI scrapers
    if is_crawler(ua):
        log_crawler(ua)  # track other bots without blocking
```
## How it works

`is_crawler`: three-step short-circuit, no DB lookup.

- Positive signal: one fused regex combining bot keywords (`bot`, `crawl`, `spider`, `scrape`, `headless`, ...), known tools (`playwright`, `selenium`, `wget`, `lighthouse`, `sqlmap`, ...), and URL-in-UA patterns. One hit → crawler.
- No browser signature: missing `WebKit`/`Gecko`/`Trident`/etc. → crawler.
- Bare `(compatible; ...)`: classic bot block without OS tokens → crawler.
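A minimal sketch of that short-circuit shape, with tiny illustrative regex fragments standing in for the real fused patterns (the actual keyword lists and the OS-token check are more involved):

```python
import re

# Illustrative fragments only; the library's fused patterns are much larger.
POSITIVE = re.compile(r"bot|crawl|spider|scrape|headless|playwright|selenium|wget", re.I)
BROWSER_SIG = re.compile(r"WebKit|Gecko|Trident|Presto", re.I)
BARE_COMPATIBLE = re.compile(r"\(compatible;", re.I)

def is_crawler_sketch(ua: str) -> bool:
    if POSITIVE.search(ua):         # step 1: any positive bot/tool/URL signal
        return True
    if not BROWSER_SIG.search(ua):  # step 2: no browser engine token at all
        return True
    # step 3 (simplified): the real check also requires missing OS tokens
    return bool(BARE_COMPATIBLE.search(ua))
```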
`crawler_signals`: exposes which of the five individual checks fired (`bot_signal`, `no_browser_signature`, `bare_compatible`, `known_tool`, `url_in_ua`). Useful for diagnostics; `is_crawler` does not call it.
`crawler_info` / `crawler_has_tag`: gated by `is_crawler` so browser UAs skip the DB entirely. On a crawler hit, patterns (from monperrus/crawler-user-agents plus supplemental tags) are sharded into 48-entry chunks; each chunk's combined filter and its per-pattern regexes compile lazily on first match. Returns url, description, and tags.
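A rough sketch of the chunked lazy-compilation idea; the class and attribute names here are hypothetical, not the library's internals:

```python
import re

CHUNK_SIZE = 48  # patterns per shard, matching the text above

class PatternChunk:
    """One shard: a combined alternation filter plus per-pattern regexes,
    both compiled only when the shard is first exercised."""

    def __init__(self, patterns: list[str]):
        self.patterns = patterns
        self._filter = None
        self._per_pattern = None

    def find(self, ua: str) -> str | None:
        if self._filter is None:  # lazy compile of the combined filter
            self._filter = re.compile("|".join(f"(?:{p})" for p in self.patterns))
        if not self._filter.search(ua):
            return None  # the whole 48-pattern shard misses in a single scan
        if self._per_pattern is None:  # lazy compile of individual patterns
            self._per_pattern = [re.compile(p) for p in self.patterns]
        for raw, rx in zip(self.patterns, self._per_pattern):
            if rx.search(ua):
                return raw  # the winning pattern identifies the crawler record
        return None
```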
Caching: 32k-entry LRU cache on every public function. Repeat UAs hit in ~40 ns.
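The cache layer behaves like `functools.lru_cache`; whether the library uses it directly is an assumption, but a minimal equivalent looks like this:

```python
import re
from functools import lru_cache

_BOT = re.compile(r"bot|crawl|spider", re.I)  # stand-in for the real heuristics

@lru_cache(maxsize=32_768)  # mirrors the 32k-entry cache described above
def classify(ua: str) -> bool:
    # First call per UA string does the regex work; repeats are a dict
    # lookup, which is what drives the ~40 ns repeat-hit figure.
    return bool(_BOT.search(ua))
```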
## Benchmarks

Measured on Python 3.14, Linux x86_64. Fixture corpus: 1,231 crawler UAs and 15,812 browser UAs.

`cua` is the crawler-user-agents PyPI package (v1.42, no caching).
### is_crawler: heuristic detection (no DB)

| Corpus | `is_crawler` | `cua.is_crawler` | speedup |
|---|---|---|---|
| crawlers only | 0.39 µs | 61.7 µs | 158× |
| browsers only | 3.76 µs | 182.7 µs | 49× |
| mixed | 0.04 µs | 167.7 µs | 4000× |
Crawler UAs hit a single combined positive regex; browser UAs fall through to a browser-signature check. A 32k-entry LRU cache drives mixed-corpus calls to near-zero amortized cost.
### crawler_info / crawler_has_tag: DB pattern lookup

| | `is_crawler` | `cua` equivalent | speedup |
|---|---|---|---|
| `crawler_info` | 0.53 µs | 743.0 µs | 1400× |
| `crawler_has_tag` | 0.14 µs | - | - |

Browser UAs short-circuit via `is_crawler` before touching the DB. Matching walks 48-entry combined chunks to locate the winning pattern in ~1/25 of the full-scan work. `crawler_has_tag` delegates to cached `crawler_info`; cost is independent of tag cardinality.
### Cold-start

| Module | Cold-start | Notes |
|---|---|---|
| `is_crawler` | 0.55 ms | JSON parse; regexes stay lazy |
| `crawleruseragents` | 0.89 ms | JSON parse |

DB patterns compile lazily per 48-entry chunk on first match, so import and `_ensure_db` stay cheap.
### Single-UA uncached benchmark

| | `is_crawler` | `cua.is_crawler` | speedup |
|---|---|---|---|
| Googlebot uncached | 1.710 µs | 70.419 µs | 41× |

A direct apples-to-apples check on one crawler UA: the same `Googlebot/2.1 (+http://www.google.com/bot.html)` string over 1,000 runs. `is_crawler` clears its caches before each call; `cua` reloads `crawleruseragents` before each call.
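A hedged sketch of how such a comparison could be reproduced. The `cache_clear` attribute is an assumption that follows from an `lru_cache`-style wrapper, not a documented API, and the uncached timing below includes the clearing overhead:

```python
import timeit
from is_crawler import is_crawler

UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"

# Cached path: after the first call, repeats hit the LRU cache.
cached = timeit.timeit(lambda: is_crawler(UA), number=100_000) / 100_000

# Uncached path: clear the cache before every call
# (assumes an lru_cache-style .cache_clear(); adapt if the real API differs).
def one_uncached_call():
    is_crawler.cache_clear()
    is_crawler(UA)

uncached = timeit.timeit(one_uncached_call, number=1_000) / 1_000
print(f"cached ≈ {cached * 1e9:.0f} ns/call, uncached ≈ {uncached * 1e6:.3f} µs/call")
```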
## Formatting

```bash
pip install black isort
isort . && black .
npx prtfm
```
## License