
is-crawler

Tiny, zero-dependency Python library that detects bots and crawlers from user-agent strings. Fast, lightweight, and ready to drop into any web app or API.

Docs & live demo: is-crawler.tn3w.dev

Install

pip install is-crawler

For faster regex matching, optionally install google-re2; it is used automatically when available:

pip install is-crawler google-re2
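This kind of optional-dependency fallback is a common Python pattern. The sketch below is illustrative only, not the library's actual import logic:

```python
# Illustrative optional-dependency fallback (not is_crawler's actual code):
# prefer google-re2's linear-time engine when installed, otherwise use
# the standard library's re module.
try:
    import re2 as re  # provided by the optional google-re2 package
except ImportError:
    import re

# Both modules expose the same basic API for simple patterns.
match = re.search(r"(?i)bot|crawl|spider", "Googlebot/2.1")
```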

Usage

from is_crawler import crawler_name, crawler_version, is_crawler

is_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)")  # True
is_crawler("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0 Safari/537.36")  # False

crawler_name("Googlebot/2.1 (+http://www.google.com/bot.html)")  # "Googlebot"
crawler_version("Googlebot/2.1 (+http://www.google.com/bot.html)")  # "2.1"
crawler_name("NewsBlur Feed Fetcher - 1 subscriber - http://www.newsblur.com/site/0000000/webpage (Mozilla/5.0 ...)")  # "NewsBlur Feed Fetcher"

The module itself is also callable, so you can skip the named import:

import is_crawler

is_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)")  # True

To see which rules matched, use crawler_signals:

from is_crawler import crawler_signals

crawler_signals("Googlebot/2.1 (+http://www.google.com/bot.html)")
# ['bot_signal', 'no_browser_signature']

crawler_signals("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0 Safari/537.36")
# []

Possible signal names: bot_signal, no_browser_signature, bare_compatible, known_tool, url_in_ua.
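The signal list can drive graduated responses instead of a binary block. The policy function below is hypothetical; the tiering is an assumption for illustration, not part of is_crawler:

```python
# Hypothetical policy layered on top of crawler_signals output.
# The tiers below are illustrative assumptions, not part of is_crawler.
def bot_policy(signals: list[str]) -> str:
    """Map matched signal names to an action string."""
    if "known_tool" in signals:            # wget, selenium, sqlmap, ...
        return "block"
    if "bot_signal" in signals or "url_in_ua" in signals:
        return "rate_limit"                # declared crawlers: throttle, don't ban
    if signals:                            # only weaker hints matched
        return "challenge"
    return "allow"
```

For example, `bot_policy(crawler_signals(ua))` would throttle a self-identified crawler but block a known scraping tool outright.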

If you also want the crawler product name, use crawler_name:

from is_crawler import crawler_name

crawler_name("Mozilla/5.0 (compatible; BitSightBot/1.0)")  # "BitSightBot"
crawler_name("Mozilla/5.0 (...) PingdomPageSpeed/1.0 (pingbot/2.0; +http://www.pingdom.com/)")  # "PingdomPageSpeed"

To extract just the crawler version, in its shortest form, use crawler_version:

from is_crawler import crawler_version

crawler_version("curl/7.64.1")  # "7.64.1"
crawler_version("Mozilla/5.0 (compatible; AndersPinkBot/1.0; +http://anderspink.com/bot.html)")  # "1.0"
crawler_version("Mozilla/5.0 (...) Bytespider")  # None

To extract a URL embedded in the user-agent string, use crawler_url:

from is_crawler import crawler_url

crawler_url("Googlebot/2.1 (+http://www.google.com/bot.html)")  # "http://www.google.com/bot.html"
crawler_url("Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)")  # "http://www.bing.com/bingbot.htm"
crawler_url("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0 Safari/537.36")  # None

Works great as middleware, rate-limiter input, or an analytics filter. For example, in Flask:

from flask import Flask, abort, request
from is_crawler import is_crawler

app = Flask(__name__)

@app.before_request
def block_bots():
    if is_crawler(request.headers.get("User-Agent", "")):
        abort(403)

How it works

Five fast regex checks, no database or external lookups:

  1. Bot signals -- common keywords (bot, crawl, spider, scrape, ...), URL/email patterns, and headless-browser markers
  2. Missing browser signature -- real browsers always include engine tokens like WebKit, Gecko, or Trident
  3. Bare (compatible; ...) block -- classic bot pattern without OS tokens
  4. Known tools -- playwright, selenium, wget, lighthouse, sqlmap, and more
  5. URL in UA -- an embedded http:// or https:// URL, a near-universal bot convention
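The idea behind these checks can be sketched with a few standard-library patterns. The regexes below are deliberately simplified stand-ins, not the library's actual rules, and will disagree with is_crawler on edge cases:

```python
import re

# Simplified stand-ins for the checks above; is_crawler's real
# patterns are more thorough. These only sketch the idea.
CHECKS = {
    "bot_signal": re.compile(r"(?i)bot|crawl|spider|scrape"),
    "bare_compatible": re.compile(r"\(compatible;[^)]*\)"),
    "known_tool": re.compile(r"(?i)\b(curl|wget|selenium|playwright|lighthouse)"),
    "url_in_ua": re.compile(r"https?://"),
}
BROWSER_ENGINE = re.compile(r"(?i)webkit|gecko|trident")

def simple_signals(ua: str) -> list[str]:
    """Return the names of the sketch checks that fire for a UA string."""
    signals = [name for name, rx in CHECKS.items() if rx.search(ua)]
    if not BROWSER_ENGINE.search(ua):
        signals.append("no_browser_signature")
    return signals
```

A UA with a real engine token (WebKit, Gecko, Trident) and no bot markers produces an empty list; a declared crawler trips several checks at once.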

Need more?

If you need deeper user-agent analysis -- device type, OS, browser version, or full bot fingerprinting -- check out cr-ua.

Formatting

pip install black isort
isort . && black .
npx prtfm

License

Apache-2.0

Release history

This version: 1.0.6

Download files

Source Distribution

  • is_crawler-1.0.6.tar.gz (10.8 kB)

Built Distribution

  • is_crawler-1.0.6-py3-none-any.whl (8.9 kB)

File details: is_crawler-1.0.6.tar.gz

  • Size: 10.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing: Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

Hashes for is_crawler-1.0.6.tar.gz

  • SHA256: dc1c35649dedb3f3a52eebb7dfd7d27e3734290e1900d856aea511f3e9e05bab
  • MD5: 6e0786e8fef458586e312c989249d087
  • BLAKE2b-256: d9983b2b2037d08b34d6b971b544e753359ab824e1947dfb92cd101fcc328e8d

Provenance

Attestation bundles for is_crawler-1.0.6.tar.gz were published by publish.yml on tn3w/is-crawler.

File details: is_crawler-1.0.6-py3-none-any.whl

  • Size: 8.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing: Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

Hashes for is_crawler-1.0.6-py3-none-any.whl

  • SHA256: 7f052833c7a172129c5ab0e6ce4ddb5ad395737534b6cf0f3cbd08e72d85a6c1
  • MD5: 0387e44c17e78e96acc0132dda414a71
  • BLAKE2b-256: 2fdd3cfc1a2c806906be6678e8c7243c95b43b9ba9d86bac7ae9509b3b669b3e

Provenance

Attestation bundles for is_crawler-1.0.6-py3-none-any.whl were published by publish.yml on tn3w/is-crawler.
