
CrawlerDetect is a Python library designed to identify bots, crawlers, and spiders by analyzing their user agents.

Project description

About CrawlerDetect

This is a Python wrapper for CrawlerDetect, the web crawler detection library. It helps to detect bots/crawlers/spiders via the user agent and other HTTP headers. It is currently able to detect thousands of bots/spiders/crawlers.

Installation

Run pip install crawlerdetect

Usage

Variant 1

from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect()
crawler_detect.isCrawler('Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)')
# True if a crawler user agent is detected

Variant 2

from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect(user_agent='Mozilla/5.0 (iPhone; CPU iPhone OS 7_1 like Mac OS X) AppleWebKit (KHTML, like Gecko) Mobile (compatible; Yahoo Ad monitoring; https://help.yahoo.com/kb/yahoo-ad-monitoring-SLN24857.html)')
crawler_detect.isCrawler()
# True if a crawler user agent is detected

Variant 3

from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect(headers={
    'DOCUMENT_ROOT': '/home/test/public_html', 'GATEWAY_INTERFACE': 'CGI/1.1',
    'HTTP_ACCEPT': '*/*', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate',
    'HTTP_CACHE_CONTROL': 'no-cache', 'HTTP_CONNECTION': 'Keep-Alive',
    'HTTP_FROM': 'googlebot(at)googlebot.com', 'HTTP_HOST': 'www.test.com',
    'HTTP_PRAGMA': 'no-cache',
    'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36',
    'PATH': '/bin:/usr/bin', 'QUERY_STRING': 'order=closingDate',
    'REDIRECT_STATUS': '200', 'REMOTE_ADDR': '127.0.0.1', 'REMOTE_PORT': '3360',
    'REQUEST_METHOD': 'GET', 'REQUEST_URI': '/?test=testing',
    'SCRIPT_FILENAME': '/home/test/public_html/index.php', 'SCRIPT_NAME': '/index.php',
    'SERVER_ADDR': '127.0.0.1', 'SERVER_ADMIN': 'webmaster@test.com',
    'SERVER_NAME': 'www.test.com', 'SERVER_PORT': '80', 'SERVER_PROTOCOL': 'HTTP/1.1',
    'SERVER_SIGNATURE': '', 'SERVER_SOFTWARE': 'Apache',
    'UNIQUE_ID': 'Vx6MENRxerBUSDEQgFLAAAAAS', 'PHP_SELF': '/index.php',
    'REQUEST_TIME_FLOAT': 1461619728.0705, 'REQUEST_TIME': 1461619728,
})
crawler_detect.isCrawler()
# True if a crawler user agent is detected
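
Since the header keys in Variant 3 follow the CGI/WSGI naming convention (HTTP_USER_AGENT and friends), one way to use the detector inside a web application is to pass it header values taken from the request's WSGI environ. The snippet below is a minimal sketch assuming a Flask app; Flask, the /some-path route, and the HTTP_* filtering are illustrative and not part of this library.

from flask import Flask, request

from crawlerdetect import CrawlerDetect

app = Flask(__name__)

@app.route('/some-path')
def some_path():
    # request.environ is a CGI/WSGI-style dict (HTTP_USER_AGENT, HTTP_ACCEPT, ...)
    # like the headers shown in Variant 3. Keep only the HTTP_* entries so that
    # WSGI internals are not passed to the detector.
    headers = {k: v for k, v in request.environ.items() if k.startswith('HTTP_')}
    crawler_detect = CrawlerDetect(headers=headers)
    if crawler_detect.isCrawler():
        return 'Hello, bot'
    return 'Hello, human'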

Output the name of the bot that matched (if any)

from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect()
crawler_detect.isCrawler('Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)')
# True if a crawler user agent is detected
crawler_detect.getMatches()
# Sosospider
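
The two calls combine naturally into a small helper, for example when logging which bots hit an endpoint. classify_user_agent below is a hypothetical convenience wrapper, not part of the library; it uses only the isCrawler() and getMatches() calls shown above.

from crawlerdetect import CrawlerDetect

def classify_user_agent(user_agent):
    # Hypothetical helper: return the matched bot name for crawler
    # user agents, or None for ordinary browsers.
    detector = CrawlerDetect()
    if detector.isCrawler(user_agent):
        return detector.getMatches()
    return None

classify_user_agent('Mozilla/5.0 (compatible; Sosospider/2.0; +http://help.soso.com/webspider.htm)')
# Sosospider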

Get version of the library

from crawlerdetect import CrawlerDetect
crawler_detect = CrawlerDetect()
crawler_detect.version

Contributing

If you find a bot/spider/crawler user agent that CrawlerDetect fails to detect, please submit a pull request that adds the regex pattern to the list in providers/crawlers.py and the failing user agent to tests/crawlers.txt.

Failing that, just create an issue with the user agent you have found, and we'll take it from there :)
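
If you do add a pattern locally, a quick check like the hypothetical test below (the user agent string is a placeholder for the one you found) confirms it is picked up before you open the pull request; run it with poetry run pytest as described in the Development section.

from crawlerdetect import CrawlerDetect

def test_new_bot_user_agent_is_detected():
    # Placeholder user agent: replace with the real string you found.
    user_agent = 'SomeNewBot/1.0 (+https://example.com/bot)'
    assert CrawlerDetect().isCrawler(user_agent)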

ES6 Library

To use this library with Node.js or any ES6-based application, check out es6-crawler-detect.

.NET Library

To use this library in a .NET Standard (including .NET Core) project, check out NetCrawlerDetect.

Nette Extension

To use this library with the Nette framework, check out NetteCrawlerDetect.

Ruby Gem

To use this library with Ruby on Rails or any Ruby-based application, check out the crawler_detect gem.

Parts of this class are based on the brilliant MobileDetect.

Development

Setup

poetry install

Running tests

poetry run pytest

Bump version

poetry run bump-my-version bump [patch|minor|major]

Download files

Download the file for your platform.

Source Distribution

crawlerdetect-0.2.2.tar.gz (17.9 kB)


Built Distribution

crawlerdetect-0.2.2-py3-none-any.whl (17.8 kB)


File details

Details for the file crawlerdetect-0.2.2.tar.gz.

File metadata

  • Download URL: crawlerdetect-0.2.2.tar.gz
  • Upload date:
  • Size: 17.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.7 Darwin/24.0.0

File hashes

Hashes for crawlerdetect-0.2.2.tar.gz
SHA256: 08f11728dbdc409b1016f1420056d01cbdf2e3550caeceb5c16a7689bbb838e8
MD5: 98d26224f0e7e7dc114096508f580da4
BLAKE2b-256: 3a1f93ea73ae43f9cb01fbe61e7846f497278b385ceed8dba7828da434b5620a


File details

Details for the file crawlerdetect-0.2.2-py3-none-any.whl.

File metadata

  • Download URL: crawlerdetect-0.2.2-py3-none-any.whl
  • Upload date:
  • Size: 17.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.7 Darwin/24.0.0

File hashes

Hashes for crawlerdetect-0.2.2-py3-none-any.whl
SHA256: 2cf221174e1316428e0023fb41e7e007f1198f18aecbf8899eeff2ebe5abd1ff
MD5: 39a506b870e9ce735eebfe3bcb5541e1
BLAKE2b-256: 4a6506a3fb91ebe06c5e316550da0b9777939cd53a2160c24df63ab2f874f465

