is-crawler

Tiny, zero-dependency Python library that detects bots and crawlers from user-agent strings via fast regex checks. Lightweight and ready to drop into any web app or API.
Docs & live demo: is-crawler.tn3w.dev
Install
pip install is-crawler
For faster regex matching, optionally install google-re2. It will be used automatically when available:
pip install is-crawler google-re2
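The "used automatically when available" behavior is typically implemented with an import fallback. A minimal sketch of that pattern (illustrative only, not is-crawler's actual internals; the pattern here is a made-up example):

```python
import re  # standard-library fallback

try:
    # The google-re2 package exposes a re-compatible module named re2;
    # prefer it when installed for linear-time matching.
    import re2 as _re
except ImportError:
    _re = re

BOT_RE = _re.compile(r"(?i)(bot|crawl|spider)")
print(bool(BOT_RE.search("Googlebot/2.1")))  # True with either engine
```

Because both modules share the `compile`/`search` API, the rest of the code never needs to know which engine is active.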
Usage
from is_crawler import crawler_name, is_crawler
is_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)") # True
is_crawler("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0 Safari/537.36") # False
crawler_name("Googlebot/2.1 (+http://www.google.com/bot.html)") # "Googlebot"
crawler_name("NewsBlur Feed Fetcher - 1 subscriber - http://www.newsblur.com/site/0000000/webpage (Mozilla/5.0 ...)") # "NewsBlur Feed Fetcher"
The module itself is also callable, so you can skip the named import:
import is_crawler
is_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)") # True
To see which rules matched, use crawler_signals:
from is_crawler import crawler_signals
crawler_signals("Googlebot/2.1 (+http://www.google.com/bot.html)")
# ['bot_signal', 'no_browser_signature']
crawler_signals("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0 Safari/537.36")
# []
Possible signal names: bot_signal, no_browser_signature, bare_compatible, known_tool.
If you also want the crawler product name, use crawler_name:
from is_crawler import crawler_name
crawler_name("Mozilla/5.0 (compatible; BitSightBot/1.0)") # "BitSightBot"
crawler_name("Mozilla/5.0 (...) PingdomPageSpeed/1.0 (pingbot/2.0; +http://www.pingdom.com/)") # "PingdomPageSpeed"
Works great as middleware, rate-limiter input, or analytics filter. For example, as Flask middleware:

from flask import Flask, abort, request

from is_crawler import is_crawler

app = Flask(__name__)

@app.before_request
def block_bots():
    if is_crawler(request.headers.get("User-Agent", "")):
        abort(403)
How it works

Four fast regex checks, no database or external lookups:
- Bot signals -- common keywords (bot, crawl, spider, scrape, ...), URL/email patterns, headless
- Missing browser signature -- real browsers include engine tokens like WebKit, Gecko, or Trident
- Bare (compatible; ...) block -- classic bot pattern without OS tokens
- Known tools -- playwright, selenium, wget, lighthouse, sqlmap, and more
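The four checks above can be sketched with the standard-library re module alone. The patterns below are deliberately simplified stand-ins, not the library's real rules, which are more extensive:

```python
import re

# Illustrative patterns only -- is-crawler's actual regexes cover far more cases.
BOT_SIGNAL = re.compile(r"(?i)\b(bot|crawl|spider|scrape|headless)\b")
BROWSER_ENGINE = re.compile(r"(?i)\b(WebKit|Gecko|Trident)\b")
BARE_COMPATIBLE = re.compile(r"\(compatible;[^)]*\)")
OS_TOKEN = re.compile(r"(?i)\b(Windows|Mac|Linux|Android|iPhone)\b")
KNOWN_TOOL = re.compile(r"(?i)\b(playwright|selenium|wget|lighthouse|sqlmap)\b")

def signals(ua: str) -> list[str]:
    """Return which of the four checks fired for a user-agent string."""
    found = []
    if BOT_SIGNAL.search(ua):
        found.append("bot_signal")
    if not BROWSER_ENGINE.search(ua):
        found.append("no_browser_signature")
    if BARE_COMPATIBLE.search(ua) and not OS_TOKEN.search(ua):
        found.append("bare_compatible")
    if KNOWN_TOOL.search(ua):
        found.append("known_tool")
    return found

print(signals("Googlebot/2.1 (+http://www.google.com/bot.html)"))
# ['bot_signal', 'no_browser_signature']
print(signals("Wget/1.21"))
# ['no_browser_signature', 'known_tool']
```

Each check is a single compiled regex, which is why the whole classification runs in microseconds with no I/O.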
Need more?
If you need deeper user-agent analysis -- device type, OS, browser version, or full bot fingerprinting -- check out cr-ua.
Formatting
pip install black isort
isort . && black .
npx prtfm
License
Apache-2.0
File details
Details for the file is_crawler-1.0.4.tar.gz.
File metadata
- Download URL: is_crawler-1.0.4.tar.gz
- Upload date:
- Size: 9.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 430b578fc7bd36eb7b0d17d8b92decdc37c8868049bb070f291929b23308ebbb |
| MD5 | e3df6245132ecfec5ab6ba76e8bf72c9 |
| BLAKE2b-256 | 58648f4d4abfc04e6b15c302033d3572ba6e1bcc99312b8e94d8889b4f71e148 |
Provenance

The following attestation bundles were made for is_crawler-1.0.4.tar.gz:

Publisher: publish.yml on tn3w/is-crawler

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: is_crawler-1.0.4.tar.gz
- Subject digest: 430b578fc7bd36eb7b0d17d8b92decdc37c8868049bb070f291929b23308ebbb
- Sigstore transparency entry: 1262585401
- Sigstore integration time:
- Permalink: tn3w/is-crawler@ddb120e54ff606c96739dca9c65977f45c7339a9
- Branch / Tag: refs/heads/master
- Owner: https://github.com/tn3w
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@ddb120e54ff606c96739dca9c65977f45c7339a9
- Trigger Event: push
File details
Details for the file is_crawler-1.0.4-py3-none-any.whl.
File metadata
- Download URL: is_crawler-1.0.4-py3-none-any.whl
- Upload date:
- Size: 8.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c617c442041a6430b3ba6eb0dc99ffd5a14d478531d15ff38137f65f35d08076 |
| MD5 | 303449684a6f436c0403c5ad39870598 |
| BLAKE2b-256 | a6e93e7022f7a7191658932fa52411df89225ac046286c1eaed2293f3578eea6 |
Provenance

The following attestation bundles were made for is_crawler-1.0.4-py3-none-any.whl:

Publisher: publish.yml on tn3w/is-crawler

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: is_crawler-1.0.4-py3-none-any.whl
- Subject digest: c617c442041a6430b3ba6eb0dc99ffd5a14d478531d15ff38137f65f35d08076
- Sigstore transparency entry: 1262585447
- Sigstore integration time:
- Permalink: tn3w/is-crawler@ddb120e54ff606c96739dca9c65977f45c7339a9
- Branch / Tag: refs/heads/master
- Owner: https://github.com/tn3w
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@ddb120e54ff606c96739dca9c65977f45c7339a9
- Trigger Event: push