is-crawler

Tiny, zero-dependency Python library that detects bots and crawlers from user-agent strings. Fast, lightweight, and ready to drop into any web app or API.

Docs & live demo: is-crawler.tn3w.dev

Install

pip install is-crawler

Usage

from is_crawler import (
    is_crawler, crawler_signals, crawler_info,
    crawler_has_tag, crawler_name, crawler_version, crawler_url,
)

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"

is_crawler(ua)                              # True

crawler_signals(ua)
# ['bot_signal', 'url_in_ua']

info = crawler_info(ua)
# CrawlerInfo(url='http://www.google.com/bot.html',
#             description="Google's main web crawling bot...",
#             tags=('search-engine',))
info.url          # 'http://www.google.com/bot.html'
info.description  # "Google's main web crawling bot for search indexing"
info.tags         # ('search-engine',)

crawler_has_tag(ua, "search-engine")        # True
crawler_has_tag(ua, ["ai-crawler", "seo"])  # False

crawler_name(ua)     # 'Googlebot'
crawler_version(ua)  # '2.1'
crawler_url(ua)      # 'http://www.google.com/bot.html'

API reference

Function                  Returns              Description
is_crawler(ua)            bool                 Heuristic detection: fast, no DB
crawler_signals(ua)       list[str]            Which heuristic rules matched
crawler_info(ua)          CrawlerInfo | None   url, description, tags; DB lookup across 646 known crawlers
crawler_has_tag(ua, tag)  bool                 tag can be str or list[str] (matches any)
crawler_name(ua)          str | None           Product name extracted from the UA string
crawler_version(ua)       str | None           Version extracted from the UA string
crawler_url(ua)           str | None           URL embedded in the UA string

crawler_signals returns a subset of: bot_signal, no_browser_signature, bare_compatible, known_tool, url_in_ua.

crawler_info tags: search-engine, ai-crawler, seo, social-preview, advertising, archiver, feed-reader, monitoring, scanner, academic, http-library, browser-automation.
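
For UAs that fail the heuristic, the lookup helpers fall through cleanly. A short illustration (the browser UA below is a generic example; the list form of crawler_has_tag matches any of the given tags):

from is_crawler import crawler_has_tag, crawler_info, is_crawler

browser_ua = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36"

is_crawler(browser_ua)              # False
crawler_info(browser_ua)            # None: browser UAs never reach the DB
crawler_has_tag(browser_ua, "seo")  # False

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"
crawler_has_tag(ua, ["search-engine", "ai-crawler"])  # True: 'search-engine' matches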

Middleware example

from flask import Flask, abort, request

from is_crawler import crawler_has_tag, is_crawler

app = Flask(__name__)

@app.before_request
def gate():
    ua = request.headers.get("User-Agent", "")
    if crawler_has_tag(ua, "ai-crawler"):
        abort(403)        # block AI scrapers
    if is_crawler(ua):
        log_crawler(ua)   # track other bots without blocking (your own logging hook)

How it works

is_crawler: three-step short-circuit, no DB lookup.

  1. Positive signal: one fused regex combining bot keywords (bot, crawl, spider, scrape, headless, ...), known tools (playwright, selenium, wget, lighthouse, sqlmap, ...), and URL-in-UA patterns. One hit → crawler.
  2. No browser signature: missing WebKit/Gecko/Trident/etc. → crawler.
  3. Bare (compatible; ...): classic bot block without OS tokens → crawler.

crawler_signals: exposes which of the five individual checks fired (bot_signal, no_browser_signature, bare_compatible, known_tool, url_in_ua). Useful for diagnostics; is_crawler does not call it.
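A quick diagnostic pass over a crawler UA and a generic browser UA makes the split concrete (the browser string is illustrative and should trip none of the five checks; the crawler output matches the Usage section above):

from is_crawler import crawler_signals, is_crawler

for ua in (
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36",
):
    print(is_crawler(ua), crawler_signals(ua))
# True ['bot_signal', 'url_in_ua']
# False []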

crawler_info / crawler_has_tag: gated by is_crawler so browser UAs skip the DB entirely. On a crawler hit, patterns (from monperrus/crawler-user-agents plus supplemental tags) are sharded into 48-entry chunks; each chunk's combined filter and its per-pattern regexes compile lazily on first match. Returns url, description, and tags.
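The chunking scheme, as a simplified sketch (illustrative only, not the library's source; it assumes a flat list of regex pattern strings):

import re

CHUNK = 48  # patterns per shard, as described above

class PatternChunk:
    def __init__(self, patterns):
        self.patterns = patterns
        self.combined = None  # combined filter, compiled lazily on first query
        self.singles = None   # per-pattern regexes, compiled lazily on first chunk hit

    def match(self, ua):
        if self.combined is None:
            self.combined = re.compile("|".join(f"(?:{p})" for p in self.patterns))
        if not self.combined.search(ua):
            return None       # one scan rejects the whole chunk
        if self.singles is None:
            self.singles = [re.compile(p) for p in self.patterns]
        for pattern, rx in zip(self.patterns, self.singles):
            if rx.search(ua):
                return pattern  # the winning pattern
        return None

def find_pattern(ua, chunks):
    for chunk in chunks:
        hit = chunk.match(ua)
        if hit is not None:
            return hit
    return None

chunks = [PatternChunk(["Googlebot", "bingbot"])]  # toy DB: one tiny chunk
find_pattern("Googlebot/2.1 (+http://www.google.com/bot.html)", chunks)  # 'Googlebot'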

Caching: 32k-entry LRU cache on every public function. Repeat UAs hit in ~40 ns.
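The cache size and ~40 ns repeat-hit figure are consistent with functools.lru_cache; a minimal sketch of the approach, with a placeholder heuristic (an assumption about the mechanism, not the library's code):

from functools import lru_cache

@lru_cache(maxsize=32 * 1024)  # 32k-entry LRU, as described above
def detect(ua: str) -> bool:
    # Placeholder heuristic: the real checks run only on a cache miss;
    # repeat UAs are answered from lru_cache's C-level hash lookup.
    return "bot" in ua.lower()

detect("Googlebot/2.1")  # miss: runs the heuristic
detect("Googlebot/2.1")  # hit: answered from the cache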

Regex-free alternative

is_crawler.no_regex offers the same API, implemented with str.find plus character checks and zero re imports. Useful when embedding in sandboxes that restrict re, when auditing for ReDoS, or when regex-engine startup is a concern.

from is_crawler.no_regex import (
    is_crawler, crawler_signals,
    crawler_name, crawler_version, crawler_url,
)

is_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)")  # True
crawler_name("Googlebot/2.1 (+http://www.google.com/bot.html)")  # 'Googlebot'

The API is a strict subset of the regex version (no crawler_info or crawler_has_tag; those require DB pattern matching). All five functions carry the same 32k-entry LRU cache. Verified against the full fixture corpus (17,043 UAs, 0 mismatches).
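
The general shape of a regex-free check, as a sketch (keyword list abbreviated; the real module implements the full rule set described under How it works):

BOT_KEYWORDS = ("bot", "crawl", "spider", "scrape", "headless")  # abbreviated list

def has_bot_signal(ua: str) -> bool:
    lowered = ua.lower()
    for keyword in BOT_KEYWORDS:
        if lowered.find(keyword) != -1:
            return True
    # URL-in-UA check, also regex-free
    return lowered.find("http://") != -1 or lowered.find("https://") != -1

has_bot_signal("Googlebot/2.1 (+http://www.google.com/bot.html)")  # True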

no_regex benchmark

Measured against the regex version (Python 3.14, Linux x86_64):

                 regex (cold)   no_regex (cold)   speedup
is_crawler            18.7 µs            4.9 µs      3.8×
crawler_url            1.8 µs            0.3 µs      5.8×
crawler_version        2.2 µs            1.5 µs      1.5×
crawler_name           1.7 µs            1.5 µs      1.1×

Cold = cache cleared each iteration. crawler_name gains the least because the regex version leverages compiled re.sub calls (C-level) to strip comments and browser tokens; char-by-char Python can't match that. The other three win because plain str.find beats regex compilation plus backtracking. Reproduce with python benchmarks/bench_no_regex.py.

Benchmarks

Measured on Python 3.14, Linux x86_64. Fixture corpus: 1,231 crawler UAs and 15,812 browser UAs. cua is the crawler-user-agents PyPI package (v1.42, no caching).

is_crawler: heuristic detection (no DB)

Corpus          is_crawler   cua.is_crawler   speedup
crawlers only      0.39 µs          61.7 µs      158×
browsers only      3.76 µs         182.7 µs       49×
mixed              0.04 µs         167.7 µs     4000×

Crawler UAs hit a single combined positive regex; browser UAs fall through to a browser-signature check. A 32k-entry LRU cache drives mixed-corpus calls to near-zero amortized cost.
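To observe the cache effect yourself, a timeit sketch along these lines works (numbers will vary by machine):

import timeit

from is_crawler import is_crawler

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"
is_crawler(ua)  # warm the cache once

n = 1_000_000
per_call = timeit.timeit(lambda: is_crawler(ua), number=n) / n
print(f"{per_call * 1e9:.0f} ns per cached call")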

crawler_info / crawler_has_tag: DB pattern lookup

                   is_crawler   cua equivalent   speedup
crawler_info          0.53 µs         743.0 µs     1400×
crawler_has_tag       0.14 µs                -         -

Browser UAs short-circuit via is_crawler before touching the DB. Matching walks 48-entry combined chunks to locate the winning pattern in ~1/25 of the full-scan work. crawler_has_tag delegates to cached crawler_info; cost is independent of tag cardinality.

Cold-start

Module              Cold-start   Notes
is_crawler             0.55 ms   JSON parse; regexes stay lazy
crawleruseragents      0.89 ms   JSON parse

DB patterns compile lazily per 48-entry chunk on first match; import and _ensure_db stay cheap.

Single-UA uncached benchmark

                     is_crawler   cua.is_crawler   speedup
Googlebot uncached     1.710 µs        70.419 µs       41×

A direct apples-to-apples check on a single crawler UA: the same Googlebot/2.1 (+http://www.google.com/bot.html) string over 1,000 runs. is_crawler clears its caches before each call; cua reloads crawleruseragents before each call.
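
A sketch of that setup, assuming the public functions are lru_cache-wrapped and therefore expose cache_clear (an assumption; the repository's benchmark scripts are authoritative):

import timeit

from is_crawler import is_crawler

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"

def uncached_call():
    is_crawler.cache_clear()  # assumed lru_cache-style API; see Caching above
    is_crawler(ua)

n = 1_000
# Note: this timing includes the cache_clear overhead itself.
print(f"{timeit.timeit(uncached_call, number=n) / n * 1e6:.3f} µs per uncached call")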

Formatting

pip install black isort
isort . && black .
npx prtfm

License

Apache-2.0
