is-crawler

Tiny, zero-dependency Python library that detects bots and crawlers from user-agent strings. Fast, lightweight, and ready to drop into any web app or API.


Docs & live demo: is-crawler.tn3w.dev

Install

pip install is-crawler

Usage

from is_crawler import is_crawler, crawler_info, crawler_has_tag

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"

is_crawler(ua)                              # True
crawler_has_tag(ua, "search-engine")        # True
crawler_has_tag(ua, ["ai-crawler", "seo"])  # False

info = crawler_info(ua)
# CrawlerInfo(url='http://www.google.com/bot.html',
#             description="Google's main web crawling bot...",
#             tags=('search-engine',))
info.url          # 'http://www.google.com/bot.html'
info.description  # "Google's main web crawling bot for search indexing"
info.tags         # ('search-engine',)

The module itself is also callable, so no named import is required:

import is_crawler
is_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)")  # True

API reference

Function                  Returns             Description
is_crawler(ua)            bool                Heuristic detection: fast, no DB
crawler_signals(ua)       list[str]           Which heuristic rules matched
crawler_info(ua)          CrawlerInfo | None   url, description, tags; DB lookup across 646 known crawlers
crawler_has_tag(ua, tag)  bool                tag can be str or list[str] (matches any)
crawler_name(ua)          str | None          Product name extracted from the UA string
crawler_version(ua)       str | None          Version extracted from the UA string
crawler_url(ua)           str | None          URL embedded in the UA string

crawler_signals returns a subset of: bot_signal, no_browser_signature, bare_compatible, known_tool, url_in_ua.
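
For example (illustrative only; the exact signals for a given UA depend on the library's rule set):

from is_crawler import crawler_signals

# A tool UA plausibly trips both the known-tool rule and the
# missing-browser-signature rule.
crawler_signals("Wget/1.21.4")
# e.g. ['no_browser_signature', 'known_tool']

# A normal browser UA fires no heuristic rule.
crawler_signals(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36"
)
# []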

crawler_info tags: search-engine, ai-crawler, seo, social-preview, advertising, archiver, feed-reader, monitoring, scanner, academic, http-library, browser-automation.
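
Tags can also be read off the CrawlerInfo record directly; for instance (assuming Twitterbot is in the DB and carries the social-preview tag):

from is_crawler import crawler_info

info = crawler_info("Twitterbot/1.0")
if info is not None and "social-preview" in info.tags:
    print("link-preview bot:", info.url)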

Middleware example

from flask import Flask, abort, request

from is_crawler import is_crawler, crawler_has_tag

app = Flask(__name__)

@app.before_request
def gate():
    ua = request.headers.get("User-Agent", "")
    if crawler_has_tag(ua, "ai-crawler"):
        abort(403)        # block AI scrapers
    if is_crawler(ua):
        log_crawler(ua)   # track other bots without blocking (your own hook)

How it works

is_crawler: three-step short-circuit, no DB lookup (a minimal sketch follows the list below).

  1. Positive signal: one fused regex combining bot keywords (bot, crawl, spider, scrape, headless, ...), known tools (playwright, selenium, wget, lighthouse, sqlmap, ...), and URL-in-UA patterns. One hit → crawler.
  2. No browser signature: missing WebKit/Gecko/Trident/etc. → crawler.
  3. Bare (compatible; ...): classic bot block without OS tokens → crawler.
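
A minimal sketch of the same three-step idea, with hypothetical, heavily reduced patterns (the library's actual fused regexes cover far more keywords, tools, and URL shapes):

import re

_POSITIVE = re.compile(                       # step 1: fused positive signal
    r"bot|crawl|spider|scrape|headless|playwright|selenium|wget|"
    r"lighthouse|sqlmap|https?://",
    re.IGNORECASE,
)
_BROWSER_SIG = re.compile(r"applewebkit|gecko/|trident|presto", re.IGNORECASE)
_BARE_COMPAT = re.compile(r"\(compatible;[^)]*\)")
_OS_TOKENS = re.compile(r"windows|macintosh|linux|android|iphone", re.IGNORECASE)

def is_crawler_sketch(ua: str) -> bool:
    if _POSITIVE.search(ua):         # 1. positive signal -> crawler
        return True
    if not _BROWSER_SIG.search(ua):  # 2. no browser signature -> crawler
        return True
    # 3. bare "(compatible; ...)" block without OS tokens -> crawler
    return bool(_BARE_COMPAT.search(ua)) and not _OS_TOKENS.search(ua)

is_crawler_sketch("Googlebot/2.1 (+http://www.google.com/bot.html)")  # True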

crawler_signals: exposes which of the five individual checks fired (bot_signal, no_browser_signature, bare_compatible, known_tool, url_in_ua). Useful for diagnostics; is_crawler does not call it.

crawler_info / crawler_has_tag: gated by is_crawler so browser UAs skip the DB entirely. On a crawler hit, patterns (from monperrus/crawler-user-agents plus supplemental tags) are sharded into 48-entry chunks; each chunk's combined filter and its per-pattern regexes compile lazily on first match. Returns url, description, and tags.
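
A sketch of that shard-and-lazy-compile strategy (hypothetical helper names and a toy pattern list; the real DB also caches per-pattern compilations):

import re
from functools import lru_cache

CHUNK = 48  # shard size, mirroring the 48-entry chunks described above

@lru_cache(maxsize=None)
def _shard_filter(shard: tuple) -> re.Pattern:
    # One combined alternation per shard, compiled lazily on first use.
    return re.compile("|".join(f"(?:{p})" for p in shard))

def lookup_sketch(ua: str, patterns: list[str]) -> str | None:
    # Only a shard whose combined filter matches is scanned per pattern,
    # so a hit costs roughly one shard's work instead of a full scan.
    for i in range(0, len(patterns), CHUNK):
        shard = tuple(patterns[i:i + CHUNK])
        if _shard_filter(shard).search(ua):
            for p in shard:
                if re.search(p, ua):
                    return p
    return None

lookup_sketch("Googlebot/2.1", ["Googlebot", "bingbot", "DuckDuckBot"])
# 'Googlebot'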

Caching: 32k-entry LRU cache on every public function. Repeat UAs hit in ~40 ns.
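
The caching layer is plausibly just functools.lru_cache; a stand-in sketch of the shape described above:

from functools import lru_cache

@lru_cache(maxsize=32_768)  # the 32k-entry cache described above
def detect(ua: str) -> bool:
    # stand-in for the real heuristic work; only cache misses pay for it
    return "bot" in ua.lower()

detect("Googlebot/2.1")  # computed on first sight
detect("Googlebot/2.1")  # repeat UA served straight from the cache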

Benchmarks

Measured on Python 3.14, Linux x86_64. Fixture corpus: 1,231 crawler UAs and 15,812 browser UAs. cua is the crawler-user-agents PyPI package (v1.42, no caching).

is_crawler: heuristic detection (no DB)

Corpus         is_crawler  cua.is_crawler  speedup
crawlers only  0.39 µs     61.7 µs         158×
browsers only  3.76 µs     182.7 µs        49×
mixed          0.04 µs     167.7 µs        4000×

Crawler UAs hit a single combined positive regex; browser UAs fall through to a browser-signature check. A 32k-entry LRU cache drives mixed-corpus calls to near-zero amortized cost.

crawler_info / crawler_has_tag: DB pattern lookup

Function         is_crawler  cua equivalent  speedup
crawler_info     0.53 µs     743.0 µs        1400×
crawler_has_tag  0.14 µs     -               -

Browser UAs short-circuit via is_crawler before touching the DB. Matching walks 48-entry combined chunks to locate the winning pattern in ~1/25 of the full-scan work. crawler_has_tag delegates to cached crawler_info; cost is independent of tag cardinality.

Cold-start

Module             Cold-start  Notes
is_crawler         0.55 ms     JSON parse; regexes stay lazy
crawleruseragents  0.89 ms     JSON parse

DB patterns compile lazily per 48-entry chunk on first match, so import and _ensure_db stay cheap.

Single-UA uncached benchmark

Benchmark           is_crawler  cua equivalent  speedup
Googlebot uncached  1.710 µs    70.419 µs       41×

A direct apples-to-apples check on a single crawler UA: the same Googlebot/2.1 (+http://www.google.com/bot.html) string over 1,000 runs. is_crawler clears its caches before each call; cua reloads crawleruseragents before each call.
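
A rough reproduction with timeit (this measures the cached path; the published uncached figures additionally clear is_crawler's internal caches between calls):

import timeit
from is_crawler import is_crawler

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"

# Per-call cost in microseconds over 1,000 runs; with the LRU cache warm
# this lands near the cached figures, far below the uncached 1.710 µs.
per_call = timeit.timeit(lambda: is_crawler(ua), number=1_000) / 1_000
print(f"{per_call * 1e6:.3f} µs per call")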

Formatting

pip install black isort
isort . && black .
npx prtfm

License

Apache-2.0

