Project description

is-crawler

Tiny, zero-dependency Python library that detects bots and crawlers from user-agent strings. Fast, lightweight, and ready to drop into any web app or API.

Docs & live demo: is-crawler.tn3w.dev

Install

pip install is-crawler

For faster regex matching, optionally install google-re2. It will be used automatically when available:

pip install is-crawler google-re2
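The optional-dependency pattern can be sketched in a few lines: prefer google-re2 when it is importable, otherwise fall back to the stdlib re module. (This is an illustration of the approach, not the library's actual code; the pattern shown is made up.)

```python
# Prefer the faster google-re2 backend when installed; its Python
# wrapper mirrors the stdlib `re` API, so it can be aliased directly.
try:
    import re2 as re  # provided by the google-re2 package
except ImportError:
    import re  # stdlib fallback

# Illustrative pattern only -- not one of the library's real regexes.
BOT_PATTERN = re.compile(r"bot", re.IGNORECASE)
print(bool(BOT_PATTERN.search("Googlebot/2.1")))  # True
```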

Usage

from is_crawler import crawler_name, crawler_version, is_crawler

is_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)")  # True
is_crawler("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0 Safari/537.36")  # False

crawler_name("Googlebot/2.1 (+http://www.google.com/bot.html)")  # "Googlebot"
crawler_version("Googlebot/2.1 (+http://www.google.com/bot.html)")  # "2.1"
crawler_name("NewsBlur Feed Fetcher - 1 subscriber - http://www.newsblur.com/site/0000000/webpage (Mozilla/5.0 ...)")  # "NewsBlur Feed Fetcher"

The module itself is also callable, so you can skip the named import:

import is_crawler

is_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)")  # True

To see which rules matched, use crawler_signals:

from is_crawler import crawler_signals

crawler_signals("Googlebot/2.1 (+http://www.google.com/bot.html)")
# ['bot_signal', 'no_browser_signature']

crawler_signals("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0 Safari/537.36")
# []

Possible signal names: bot_signal, no_browser_signature, bare_compatible, known_tool.

If you also want the crawler product name, use crawler_name:

from is_crawler import crawler_name

crawler_name("Mozilla/5.0 (compatible; BitSightBot/1.0)")  # "BitSightBot"
crawler_name("Mozilla/5.0 (...) PingdomPageSpeed/1.0 (pingbot/2.0; +http://www.pingdom.com/)")  # "PingdomPageSpeed"

To get just the crawler version in the shortest possible form, use crawler_version:

from is_crawler import crawler_version

crawler_version("curl/7.64.1")  # "7.64.1"
crawler_version("Mozilla/5.0 (compatible; AndersPinkBot/1.0; +http://anderspink.com/bot.html)")  # "1.0"
crawler_version("Mozilla/5.0 (...) Bytespider")  # None

Works great as middleware, rate-limiter input, or analytics filter. For example, as a Flask before-request hook:

from flask import abort, request

from is_crawler import is_crawler

@app.before_request
def block_bots():
    if is_crawler(request.headers.get("User-Agent", "")):
        abort(403)

How it works

Four fast regex checks, no database or external lookups:

  1. Bot signals -- common keywords (bot, crawl, spider, scrape, ...), URL/email patterns, headless
  2. Missing browser signature -- real browsers always include engine tokens like WebKit, Gecko, or Trident
  3. Bare (compatible; ...) block -- classic bot pattern without OS tokens
  4. Known tools -- playwright, selenium, wget, lighthouse, sqlmap, and more
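The four checks above can be sketched in a few lines of self-contained Python. The patterns below are illustrative stand-ins, not the library's actual regexes, which are considerably more thorough:

```python
import re

# Simplified stand-ins for the four rule families.
BOT_SIGNAL = re.compile(r"bot|crawl|spider|scrape|headless|https?://|@", re.I)
BROWSER_ENGINE = re.compile(r"WebKit|Gecko|Trident", re.I)
BARE_COMPATIBLE = re.compile(
    r"\(compatible;(?![^)]*(?:windows|mac|linux|android|x11))[^)]*\)", re.I
)
KNOWN_TOOL = re.compile(r"curl|wget|python-requests|selenium|playwright|lighthouse|sqlmap", re.I)

def looks_like_crawler(ua: str) -> bool:
    return bool(
        BOT_SIGNAL.search(ua)             # 1. bot keywords, URLs, emails
        or not BROWSER_ENGINE.search(ua)  # 2. no browser engine token
        or BARE_COMPATIBLE.search(ua)     # 3. bare (compatible; ...) block
        or KNOWN_TOOL.search(ua)          # 4. known tool names
    )

print(looks_like_crawler("Googlebot/2.1 (+http://www.google.com/bot.html)"))  # True
print(looks_like_crawler(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
))  # False
```

Because every rule is a precompiled regex over a single string, each classification is one pass over the user agent with no I/O, which is what keeps the library dependency-free and fast.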

Need more?

If you need deeper user-agent analysis -- device type, OS, browser version, or full bot fingerprinting -- check out cr-ua.

Formatting

pip install black isort
isort . && black .
npx prtfm

License

Apache-2.0

Project details

Release history

This version: 1.0.5

Download files

Source Distribution: is_crawler-1.0.5.tar.gz (10.6 kB)

Built Distribution: is_crawler-1.0.5-py3-none-any.whl (8.7 kB, Python 3)

File details for is_crawler-1.0.5.tar.gz

File metadata

  • Size: 10.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

  Algorithm   Hash digest
  SHA256      68e90ebab7a77f4b01d7df7fef9436f719cea32ec0b3d3c2e6a56d9759f1a673
  MD5         8366751eec52cc8843f5a3938a3d027b
  BLAKE2b-256 0c1de5bfb30c3408d976a49f401f828787ade133ae636bade92384aebf1742dd

Provenance

  Publisher: publish.yml on tn3w/is-crawler

File details for is_crawler-1.0.5-py3-none-any.whl

File metadata

  • Size: 8.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

  Algorithm   Hash digest
  SHA256      9d8b149ade755c442c4536c0dd6be34cc9e8195fc0033d8c5929e1d896fa65f5
  MD5         596dc8bd7fb3e79e0b6fc9b14feb53d6
  BLAKE2b-256 e053c9571e4e70dfc07e725d0269e4ca4487b8e4e4e7f70b4735e4f889bc4475

Provenance

  Publisher: publish.yml on tn3w/is-crawler
