
is-crawler

Tiny, zero-dependency Python library that detects bots and crawlers from user-agent strings. Fast, lightweight, and ready to drop into any web app or API.


Docs & live demo: is-crawler.tn3w.dev

Install

pip install is-crawler

For faster regex matching, optionally install google-re2. It will be used automatically when available:

pip install is-crawler google-re2

Usage

from is_crawler import is_crawler, crawler_info, crawler_has_tag

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"

is_crawler(ua)                              # True
crawler_has_tag(ua, "search-engine")        # True
crawler_has_tag(ua, ["ai-crawler", "seo"])  # False

info = crawler_info(ua)
# CrawlerInfo(url='http://www.google.com/bot.html',
#             description="Google's main web crawling bot for search indexing",
#             tags=['search-engine'])
info.url          # 'http://www.google.com/bot.html'
info.description  # "Google's main web crawling bot for search indexing"
info.tags         # ['search-engine']

The module is also callable directly, no named import required:

import is_crawler
is_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)")  # True

API reference

Function                  Returns             Description
is_crawler(ua)            bool                Heuristic detection: fast, no DB
crawler_signals(ua)       list[str]           Which heuristic rules matched
crawler_info(ua)          CrawlerInfo | None  url, description, tags; DB lookup for 646 known crawlers
crawler_has_tag(ua, tag)  bool                tag can be str or list[str] (matches any)
crawler_name(ua)          str | None          Product name extracted from the UA string
crawler_version(ua)       str | None          Version extracted from the UA string
crawler_url(ua)           str | None          URL embedded in the UA string

crawler_signals returns a subset of: bot_signal, no_browser_signature, bare_compatible, known_tool, url_in_ua.

crawler_info tags: search-engine, ai-crawler, seo, social-preview, advertising, archiver, feed-reader, monitoring, scanner, academic, http-library, browser-automation.
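The extraction helpers work on the same UA string. For the Googlebot UA from the Usage section, results based on the API descriptions above would look roughly like this (outputs illustrative):

from is_crawler import crawler_name, crawler_version, crawler_url

ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"

crawler_name(ua)     # e.g. 'Googlebot'
crawler_version(ua)  # e.g. '2.1'
crawler_url(ua)      # e.g. 'http://www.google.com/bot.html'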

Middleware example

from flask import Flask, abort, request

from is_crawler import is_crawler, crawler_has_tag

app = Flask(__name__)

@app.before_request
def gate():
    ua = request.headers.get("User-Agent", "")
    if crawler_has_tag(ua, "ai-crawler"):
        abort(403)        # block AI scrapers
    if is_crawler(ua):
        log_crawler(ua)   # track other bots without blocking (your own logging hook)

How it works

is_crawler / crawler_signals: five heuristic regex checks, no lookup:

  1. Bot signals: common keywords (bot, crawl, spider, scrape, ...), URL/email patterns, headless
  2. Missing browser signature: real browsers always include engine tokens like WebKit, Gecko, or Trident
  3. Bare (compatible; ...) block: classic bot pattern without OS tokens
  4. Known tools: playwright, selenium, wget, lighthouse, sqlmap, and more
  5. URL in UA: an embedded http:// or https:// URL, a near-universal bot convention
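For illustration, here is roughly what crawler_signals might report for two non-browser user agents (the exact signal lists are illustrative, not guaranteed output):

from is_crawler import crawler_signals

crawler_signals("Googlebot/2.1 (+http://www.google.com/bot.html)")
# e.g. ['bot_signal', 'no_browser_signature', 'url_in_ua']

crawler_signals("curl/8.5.0")
# e.g. ['known_tool', 'no_browser_signature']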

crawler_info / crawler_has_tag: pattern database loaded lazily on first call, built from monperrus/crawler-user-agents with supplemental tags. Returns url, description, and tags for 646 known crawlers.
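Because crawler_info returns None for user agents outside the database, guard the lookup before reading attributes; a minimal pattern:

from is_crawler import crawler_info, is_crawler

info = crawler_info(ua)
if info is not None:
    print(info.description, info.tags)
elif is_crawler(ua):
    print("unknown crawler (heuristics only)")  # flagged but not in the DB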

Formatting

pip install black isort
isort . && black .
npx prtfm

License

Apache-2.0
