
CrawlerDetect is a Python library designed to identify bots, crawlers, and spiders by analyzing their user agents.

Project description

About CrawlerDetect

This is a Python wrapper for CrawlerDetect, a web crawler detection library. It helps identify bots, crawlers, and spiders using the user agent and other HTTP headers. Currently, it can detect over 3,678 bots, spiders, and crawlers.

How to install

$ pip install crawlerdetect

How to use

Variant 1

from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect()
crawler_detect.isCrawler('Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)')
# returns True if a crawler user agent is detected

Variant 2

from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect(user_agent='Mozilla/5.0 (iPhone; CPU iPhone OS 7_1 like Mac OS X) AppleWebKit (KHTML, like Gecko) Mobile (compatible; Yahoo Ad monitoring; https://help.yahoo.com/kb/yahoo-ad-monitoring-SLN24857.html)')
crawler_detect.isCrawler()
# returns True if a crawler user agent is detected

Variant 3

from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect(headers={'DOCUMENT_ROOT': '/home/test/public_html', 'GATEWAY_INTERFACE': 'CGI/1.1', 'HTTP_ACCEPT': '*/*', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', 'HTTP_CACHE_CONTROL': 'no-cache', 'HTTP_CONNECTION': 'Keep-Alive', 'HTTP_FROM': 'googlebot(at)googlebot.com', 'HTTP_HOST': 'www.test.com', 'HTTP_PRAGMA': 'no-cache', 'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36', 'PATH': '/bin:/usr/bin', 'QUERY_STRING': 'order=closingDate', 'REDIRECT_STATUS': '200', 'REMOTE_ADDR': '127.0.0.1', 'REMOTE_PORT': '3360', 'REQUEST_METHOD': 'GET', 'REQUEST_URI': '/?test=testing', 'SCRIPT_FILENAME': '/home/test/public_html/index.php', 'SCRIPT_NAME': '/index.php', 'SERVER_ADDR': '127.0.0.1', 'SERVER_ADMIN': 'webmaster@test.com', 'SERVER_NAME': 'www.test.com', 'SERVER_PORT': '80', 'SERVER_PROTOCOL': 'HTTP/1.1', 'SERVER_SIGNATURE': '', 'SERVER_SOFTWARE': 'Apache', 'UNIQUE_ID': 'Vx6MENRxerBUSDEQgFLAAAAAS', 'PHP_SELF': '/index.php', 'REQUEST_TIME_FLOAT': 1461619728.0705, 'REQUEST_TIME': 1461619728})
crawler_detect.isCrawler()
# returns True if a crawler user agent is detected
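
In a real application you would typically pass the headers of the incoming request rather than a hand-built dict. A minimal sketch, assuming a Django view (Django's request.META is a CGI/WSGI-style mapping shaped like the dict above; the view helper name is illustrative, not part of the library):

from crawlerdetect import CrawlerDetect

def is_bot_request(request):
    # request.META is Django's CGI/WSGI-style header mapping
    # (HTTP_USER_AGENT, HTTP_ACCEPT, ...), the same shape as the dict above.
    crawler_detect = CrawlerDetect(headers=request.META)
    return crawler_detect.isCrawler()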

Output the name of the bot that matched (if any)

from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect()
crawler_detect.isCrawler('Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)')
# returns True if a crawler user agent is detected
crawler_detect.getMatches()
# Sosospider
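
The two calls can be combined into a small helper that returns the matched crawler name, or None for regular traffic (the helper function is illustrative, not part of the library):

from crawlerdetect import CrawlerDetect

crawler_detect = CrawlerDetect()

def crawler_name(user_agent):
    # Returns the matched crawler name (e.g. 'Sosospider'), or None.
    if crawler_detect.isCrawler(user_agent):
        return crawler_detect.getMatches()
    return None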

Get version of the library

import crawlerdetect
crawlerdetect.__version__

Contributing

The patterns and test cases are synced from the PHP repo. If you find a bot/spider/crawler user agent that crawlerdetect fails to detect, please submit a pull request with the regex pattern and a test case to the upstream PHP repo.

Failing that, just create an issue with the user agent you have found, and we'll take it from there :)

Development

Setup

$ poetry install

Running tests

$ poetry run pytest

Update crawlers from upstream PHP repo

$ ./update_data.sh

Bump version

$ poetry run bump-my-version bump [patch|minor|major]

Laravel Package

If you would like to use this with Laravel, please see Laravel-Crawler-Detect.

Symfony Bundle

To use this library with Symfony 2/3/4, check out the CrawlerDetectBundle.

YII2 Extension

To use this library with the YII2 framework, check out yii2-crawler-detect.

ES6 Library

To use this library with Node.js or any ES6-based application, check out es6-crawler-detect.

JVM Library (written in Java)

To use this library in a JVM project (including Java, Scala, Kotlin, etc.), check out CrawlerDetect.

.NET Library

To use this library in a .NET Standard (including .NET Core) project, check out NetCrawlerDetect.

Ruby Gem

To use this library with Ruby on Rails or any Ruby-based application, check out the crawler_detect gem.

Go Module

To use this library with Go, check out the crawlerdetect module.

Parts of this class are based on the brilliant MobileDetect.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

crawlerdetect-0.2.6.tar.gz (17.4 kB)

Built Distribution

crawlerdetect-0.2.6-py3-none-any.whl (16.5 kB)

File details

Details for the file crawlerdetect-0.2.6.tar.gz.

File metadata

  • Download URL: crawlerdetect-0.2.6.tar.gz
  • Upload date:
  • Size: 17.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.7 Darwin/24.0.0

File hashes

Hashes for crawlerdetect-0.2.6.tar.gz

  • SHA256: 7486e5b4b28694cd6fe9d55bfb74d36d7b3875e0e3ec4829c3f2a92c36abac8e
  • MD5: d33ff33ebbb927f79cf064e3d336ccdd
  • BLAKE2b-256: 87cc6d8255aae5e2f9291075dba9ed60ec62f3a188693106da0baea1e7e1903d

See more details on using hashes here.
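
For example, a downloaded file can be checked against the SHA256 digest above before installing (a minimal sketch; the local file path is assumed):

import hashlib

# Expected digest, taken from the hash list above
expected = "7486e5b4b28694cd6fe9d55bfb74d36d7b3875e0e3ec4829c3f2a92c36abac8e"

with open("crawlerdetect-0.2.6.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == expected, "SHA256 mismatch - do not install this file"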

File details

Details for the file crawlerdetect-0.2.6-py3-none-any.whl.

File metadata

  • Download URL: crawlerdetect-0.2.6-py3-none-any.whl
  • Upload date:
  • Size: 16.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.7 Darwin/24.0.0

File hashes

Hashes for crawlerdetect-0.2.6-py3-none-any.whl

  • SHA256: d0cf9fc150772d102e0d645cf932f61dec8ba0982b9d32fccf6298a2ce9eaf9a
  • MD5: 2807d6317b945fa29d42366c3d10e222
  • BLAKE2b-256: 4c7672bbaa7ab4f7043eded8245b0410981fa0569d737bfc9054b4071fa8c87a

See more details on using hashes here.
