
is-crawler

Tiny, zero-dependency Python library that detects bots and crawlers from user-agent strings. Fast, lightweight, and ready to drop into any web app or API.


Docs & live demo: is-crawler.tn3w.dev

Install

pip install is-crawler

Usage

from is_crawler import is_crawler, crawler_info, crawler_has_tag

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"

is_crawler(ua)                              # True
crawler_has_tag(ua, "search-engine")        # True
crawler_has_tag(ua, ["ai-crawler", "seo"])  # False

info = crawler_info(ua)
# CrawlerInfo(url='http://www.google.com/bot.html',
#             description="Google's main web crawling bot for search indexing",
#             tags=['search-engine'])
info.url          # 'http://www.google.com/bot.html'
info.description  # "Google's main web crawling bot for search indexing"
info.tags         # ['search-engine']

The module is also callable directly; no named import required:

import is_crawler
is_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)")  # True
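A common way to make a module callable like this (a sketch of the general technique, not necessarily is-crawler's actual implementation) is to replace the module's entry in sys.modules with an instance of a ModuleType subclass that defines __call__:

```python
import sys
import types


class _CallableModule(types.ModuleType):
    """A module object that can be called like a function."""

    def __call__(self, ua: str) -> bool:
        # Stand-in heuristic for illustration only.
        return "bot" in ua.lower()


# Register the callable module under a demo name; a real library would
# do this at the bottom of its own __init__.py.
demo = _CallableModule("demo_crawler")
sys.modules["demo_crawler"] = demo

import demo_crawler  # resolves to the instance above

print(demo_crawler("Googlebot/2.1"))  # True
```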

API reference

Function                  Returns             Description
is_crawler(ua)            bool                Heuristic detection: fast, no DB
crawler_signals(ua)       list[str]           Which heuristic rules matched
crawler_info(ua)          CrawlerInfo | None  url, description, tags; DB lookup for 646 known crawlers
crawler_has_tag(ua, tag)  bool                tag can be str or list[str] (matches any)
crawler_name(ua)          str | None          Product name extracted from the UA string
crawler_version(ua)       str | None          Version extracted from the UA string
crawler_url(ua)           str | None          URL embedded in the UA string

crawler_signals returns a subset of: bot_signal, no_browser_signature, bare_compatible, known_tool, url_in_ua.

crawler_info tags: search-engine, ai-crawler, seo, social-preview, advertising, archiver, feed-reader, monitoring, scanner, academic, http-library, browser-automation.

Middleware example (Flask)

from flask import abort, request

from is_crawler import is_crawler, crawler_has_tag

@app.before_request
def gate():
    ua = request.headers.get("User-Agent", "")
    if crawler_has_tag(ua, "ai-crawler"):
        abort(403)        # block AI scrapers
    if is_crawler(ua):
        log_crawler(ua)   # track other bots without blocking

How it works

is_crawler / crawler_signals: five heuristic regex checks, no lookup:

  1. Bot signals: common keywords (bot, crawl, spider, scrape, ...), URL/email patterns, headless
  2. Missing browser signature: real browsers always include engine tokens like WebKit, Gecko, or Trident
  3. Bare (compatible; ...) block: classic bot pattern without OS tokens
  4. Known tools: playwright, selenium, wget, lighthouse, sqlmap, and more
  5. URL in UA: an embedded http:// or https:// URL, a near-universal bot convention
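The five checks above can be sketched with a few stdlib regexes. The rule names mirror the crawler_signals output, but the patterns here are illustrative assumptions, not the library's actual (more thorough) regexes:

```python
import re

# Rules that fire when a pattern IS present (illustrative only).
POSITIVE_RULES = {
    "bot_signal": re.compile(r"bot|crawl|spider|scrape|headless|\S+@\S+", re.I),
    # Real check also verifies the block carries no OS tokens ("bare").
    "bare_compatible": re.compile(r"\(compatible;[^)]*\)", re.I),
    "known_tool": re.compile(r"playwright|selenium|wget|lighthouse|sqlmap", re.I),
    "url_in_ua": re.compile(r"https?://", re.I),
}

# Rule that fires when engine tokens are ABSENT.
ENGINE_TOKENS = re.compile(r"WebKit|Gecko|Trident")


def crawler_signals_sketch(ua: str) -> list[str]:
    hits = [name for name, rx in POSITIVE_RULES.items() if rx.search(ua)]
    if not ENGINE_TOKENS.search(ua):
        hits.append("no_browser_signature")
    return hits


def is_crawler_sketch(ua: str) -> bool:
    return bool(crawler_signals_sketch(ua))
```

A bot UA like Googlebot's trips bot_signal and url_in_ua on the first pass, while a normal Chrome UA matches no positive rule and carries a WebKit token, so it yields no signals at all.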

crawler_info / crawler_has_tag: pattern database loaded lazily on first call, built from monperrus/crawler-user-agents with supplemental tags. Returns url, description, and tags for 646 known crawlers.

Benchmarks

Measured on Python 3.14, Linux x86_64. Fixture corpus: 1,231 crawler UAs and 15,812 browser UAs. crawleruseragents is the crawler-user-agents PyPI package (v1.42, no caching).

is_crawler: heuristic detection (no DB)

Corpus          is_crawler   cua.is_crawler   speedup
crawlers only   0.23 µs      62.8 µs          273×
browsers only   9.3 µs       173.0 µs         18×
mixed           8.9 µs       165.1 µs         18×

Crawler UAs are fast because bot_signal triggers immediately; browser UAs exhaust all five checks.

crawler_info / crawler_has_tag: DB pattern lookup

Function          is_crawler   cua equivalent   speedup
crawler_info      31.2 µs      906 µs           29×
crawler_has_tag   31.3 µs

crawler_has_tag delegates to crawler_info (cached); cost is independent of tag cardinality.

Cold-start (JSON parse + 646 re.compile calls)

        is_crawler   crawleruseragents
time    13.7 ms      1.1 ms

Formatting

pip install black isort
isort . && black .
npx prtfm

License

Apache-2.0
