# is-crawler
Tiny, zero-dependency Python library that detects bots and crawlers from user-agent strings. Fast, lightweight, and ready to drop into any web app or API.
Docs & live demo: is-crawler.tn3w.dev
## Install

```bash
pip install is-crawler
```

For faster regex matching, optionally install google-re2. It is used automatically when available:

```bash
pip install is-crawler google-re2
```
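How the backend is chosen is internal to the library, but optional acceleration like this is commonly wired as an import fallback. A minimal sketch of that pattern (illustrative only, not the library's actual internals):

```python
# Sketch: pick google-re2 when installed, fall back to the stdlib.
try:
    import re2 as _re  # google-re2: linear-time matching, mostly re-compatible
except ImportError:
    import re as _re  # stdlib fallback, always available

# Inline (?i) keeps the pattern portable across both backends.
_BOT_KEYWORDS = _re.compile(r"(?i)bot|crawl|spider|scrape")

def _looks_like_bot(ua: str) -> bool:
    """Match against whichever backend was imported above."""
    return _BOT_KEYWORDS.search(ua) is not None
```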
## Usage

```python
from is_crawler import is_crawler, crawler_info, crawler_has_tag

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"

is_crawler(ua)                              # True
crawler_has_tag(ua, "search-engine")        # True
crawler_has_tag(ua, ["ai-crawler", "seo"])  # False

info = crawler_info(ua)
# CrawlerInfo(url='http://www.google.com/bot.html',
#             description="Google's main web crawling bot for search indexing",
#             tags=['search-engine'])
info.url          # 'http://www.google.com/bot.html'
info.description  # "Google's main web crawling bot for search indexing"
info.tags         # ['search-engine']
```
The module is also callable directly; no named import is required:

```python
import is_crawler

is_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)")  # True
```
## API reference

| Function | Returns | Description |
|---|---|---|
| `is_crawler(ua)` | `bool` | Heuristic detection: fast, no DB |
| `crawler_signals(ua)` | `list[str]` | Which heuristic rules matched |
| `crawler_info(ua)` | `CrawlerInfo \| None` | `url`, `description`, `tags`: DB lookup for 646 known crawlers |
| `crawler_has_tag(ua, tag)` | `bool` | `tag` can be `str` or `list[str]` (matches any) |
| `crawler_name(ua)` | `str \| None` | Product name extracted from the UA string |
| `crawler_version(ua)` | `str \| None` | Version extracted from the UA string |
| `crawler_url(ua)` | `str \| None` | URL embedded in the UA string |
`crawler_signals` returns a subset of: `bot_signal`, `no_browser_signature`, `bare_compatible`, `known_tool`, `url_in_ua`.

`crawler_info` tags: `search-engine`, `ai-crawler`, `seo`, `social-preview`, `advertising`, `archiver`, `feed-reader`, `monitoring`, `scanner`, `academic`, `http-library`, `browser-automation`.
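The extraction helpers follow directly from the table above. For the Googlebot UA from the Usage section, the expected results look like this (outputs shown as comments are inferred from the documented return types, not captured from a run):

```python
from is_crawler import crawler_name, crawler_version, crawler_url

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"

crawler_name(ua)     # 'Googlebot' -- product token before the '/'
crawler_version(ua)  # '2.1' -- version after the '/'
crawler_url(ua)      # 'http://www.google.com/bot.html' -- embedded URL
```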
## Middleware example

```python
from flask import Flask, abort, request

from is_crawler import is_crawler, crawler_has_tag

app = Flask(__name__)

@app.before_request
def gate():
    ua = request.headers.get("User-Agent", "")
    if crawler_has_tag(ua, "ai-crawler"):
        abort(403)  # block AI scrapers
    if is_crawler(ua):
        log_crawler(ua)  # track other bots without blocking (app-defined logger)
```
## How it works

`is_crawler` / `crawler_signals`: five heuristic regex checks, no lookup:

- Bot signals: common keywords (`bot`, `crawl`, `spider`, `scrape`, ...), URL/email patterns, `headless`
- Missing browser signature: real browsers always include engine tokens like `WebKit`, `Gecko`, or `Trident`
- Bare `(compatible; ...)` block: classic bot pattern without OS tokens
- Known tools: `playwright`, `selenium`, `wget`, `lighthouse`, `sqlmap`, and more
- URL in UA: an embedded `http://` or `https://` URL, a near-universal bot convention

`crawler_info` / `crawler_has_tag`: pattern database loaded lazily on first call, built from monperrus/crawler-user-agents with supplemental tags. Returns `url`, `description`, and `tags` for 646 known crawlers.
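To see which heuristics fire for a given UA, call `crawler_signals` directly. For the Googlebot UA, the keyword and embedded-URL checks should both match; the exact lists below are illustrative (values drawn from the documented signal names, not captured from a run):

```python
from is_crawler import crawler_signals

crawler_signals("Googlebot/2.1 (+http://www.google.com/bot.html)")
# e.g. ['bot_signal', 'url_in_ua']

browser_ua = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36"
)
crawler_signals(browser_ua)
# [] -- a real browser UA matches none of the five checks
```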
## Benchmarks

Measured on Python 3.14, Linux x86_64, with the stdlib `re` and `google-re2` backends. Fixture corpus: 1 231 crawler UAs and 15 812 browser UAs from the test suite. Each function is timed over the full mixed corpus (17 043 UAs) with `lru_cache(8192)` warm, so results reflect the realistic cache-hit/miss ratio at that corpus size. `crawleruseragents` is the crawler-user-agents PyPI package (v1.42, no caching).
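The benchmark harness itself is not part of this README; a minimal sketch of an equivalent timing loop, assuming `corpus` is a list of UA strings (per-call figures are the best pass divided by corpus size):

```python
import time

from is_crawler import is_crawler

def bench(fn, corpus, repeats=10):
    """Best wall-clock time for one full pass over the corpus, in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for ua in corpus:
            fn(ua)
        best = min(best, time.perf_counter() - start)
    return best

# corpus = load_fixture_uas()  # hypothetical loader for the 17 043 fixture UAs
# per_call_us = bench(is_crawler, corpus) / len(corpus) * 1e6
# print(f"is_crawler: {per_call_us:.2f} µs/call")
```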
### is_crawler: heuristic detection (no DB)

| Corpus | stdlib re | re2 | cua.is_crawler | vs re2 | vs cua |
|---|---|---|---|---|---|
| crawlers only | 0.78 µs | 0.26 µs | 61.7 µs | 3× | 79× |
| browsers only | 73.1 µs | 25.2 µs | 173.3 µs | 2.9× | 2.4× |
| mixed | 30.8 µs | 23.8 µs | 163.7 µs | 1.3× | 5.3× |

The speedup columns compare re2 with stdlib `re`, and `is_crawler` (stdlib `re`) with `cua`, respectively.

`is_crawler` is purely heuristic: no DB involved. Crawler UAs are fast because the first heuristic (`bot_signal`) triggers immediately; browser UAs must exhaust all five checks.
### crawler_info: DB pattern lookup

| Corpus | stdlib re | re2 | cua equivalent | vs cua |
|---|---|---|---|---|
| mixed corpus | 75.7 µs | 75.3 µs | 1 172 µs | 15× |

The `cua` equivalent is `matching_crawlers(ua)[0]` plus a `CRAWLER_USER_AGENTS_DATA[idx]` lookup. DB scan timing is dominated by pattern matching, not regex compilation; re2 gives no gain here because most UAs are browsers, which scan all 646 patterns before returning `None`.
### crawler_has_tag

| Tag | stdlib re | re2 | Patterns checked |
|---|---|---|---|
| `seo` (252 entries) | 49.5 µs | 49.6 µs | ≤ 252 |
| `ai-crawler` (22 entries) | 5.4 µs | 5.4 µs | ≤ 22 |

The tag index is built at load time; only patterns that carry the requested tag are searched.
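The tag index itself is internal, but the description maps onto a plain dict from tag to the compiled patterns that carry it. A sketch of that structure with made-up entries (the real index is built from the 646-entry database):

```python
import re

# Illustrative tag index: tag -> patterns whose DB entries carry that tag.
TAG_INDEX: dict[str, list[re.Pattern[str]]] = {
    "search-engine": [re.compile("Googlebot"), re.compile("bingbot")],
    "ai-crawler": [re.compile("GPTBot")],
}

def has_tag(ua: str, tag: str) -> bool:
    """Scan only the patterns registered under the requested tag."""
    return any(p.search(ua) for p in TAG_INDEX.get(tag, []))
```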
### Cold start (JSON parse + 646 `re.compile` calls)

| | stdlib re | re2 | crawleruseragents |
|---|---|---|---|
| time | 36.2 ms | 35.7 ms | 3.5 ms |

`crawleruseragents` is faster to load because it ships pre-compiled data, with no regex work at import time.
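Because the database loads lazily on the first `crawler_info` / `crawler_has_tag` call, the ~36 ms cost can be paid at application startup rather than on the first request. A simple warm-up using only the documented API:

```python
from is_crawler import crawler_info

# Trigger the lazy DB load (JSON parse + pattern compiles) during startup.
crawler_info("warmup")  # no match expected; the side effect is the load
```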
## Formatting

```bash
pip install black isort
isort . && black .
```
## License
## File details

Details for the file `is_crawler-1.1.3.tar.gz`.

### File metadata
- Download URL: is_crawler-1.1.3.tar.gz
- Upload date:
- Size: 33.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `01b60b99094ab5c7f3ec7ef866303d4a3875722e1b6ba5d09f33d6a107aaadcb` |
| MD5 | `d6c137048d6db6c9cf14314bd2ba7217` |
| BLAKE2b-256 | `1feeff4b54f9481016cb74b1f6b369b05453e0e8b7c2a32d4c7a00fcc0e1228e` |
### Provenance

The following attestation bundles were made for `is_crawler-1.1.3.tar.gz`:

Publisher: `publish.yml` on tn3w/is-crawler

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: is_crawler-1.1.3.tar.gz
- Subject digest: 01b60b99094ab5c7f3ec7ef866303d4a3875722e1b6ba5d09f33d6a107aaadcb
- Sigstore transparency entry: 1293602241
- Sigstore integration time:
- Permalink: tn3w/is-crawler@97989c59ce5c1d64542c9d731ad36d9450b7692d
- Branch / Tag: refs/heads/master
- Owner: https://github.com/tn3w
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@97989c59ce5c1d64542c9d731ad36d9450b7692d
- Trigger Event: push
## File details

Details for the file `is_crawler-1.1.3-py3-none-any.whl`.

### File metadata
- Download URL: is_crawler-1.1.3-py3-none-any.whl
- Upload date:
- Size: 27.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `46d883723eab4d79c122c5641d36fb41635caaa308fb12f303e0107052dac3ba` |
| MD5 | `951f4f7a7ba10a1d5683dcd192199a96` |
| BLAKE2b-256 | `58bf0545ae79098b3a22223692c168aa24b08fa232c9624126fc6a5a410b480d` |
### Provenance

The following attestation bundles were made for `is_crawler-1.1.3-py3-none-any.whl`:

Publisher: `publish.yml` on tn3w/is-crawler

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: is_crawler-1.1.3-py3-none-any.whl
- Subject digest: 46d883723eab4d79c122c5641d36fb41635caaa308fb12f303e0107052dac3ba
- Sigstore transparency entry: 1293602249
- Sigstore integration time:
- Permalink: tn3w/is-crawler@97989c59ce5c1d64542c9d731ad36d9450b7692d
- Branch / Tag: refs/heads/master
- Owner: https://github.com/tn3w
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@97989c59ce5c1d64542c9d731ad36d9450b7692d
- Trigger Event: push