
SimpleCrawler

  • A web crawler that can crawl a website from the command line or from code.

Install

  • pip install SimpleCrawler

OR

  • git clone https://github.com/jackwardell/SimpleCrawler.git
  • cd SimpleCrawler
  • python3 -m venv venv
  • source venv/bin/activate
  • pip install --upgrade pip
  • pip install -r requirements.txt
  • pip install -e .
  • pytest
  • crawl https://www.example.com

Rules:

This crawler will:

  • Only crawl text/html mime-types
  • Only crawl pages that return 200 OK HTTP statuses
  • Look at /robots.txt and obey by default (but can be overridden)
  • Add User-Agent, default value = PyWebCrawler (but can be changed)
  • Ignore ?query=strings and #fragments by default (but can be changed)
  • Extract links ONLY from the href value of anchor (<a>) tags
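The link-extraction and URL-handling rules above can be sketched with the standard library alone. This is a minimal illustration of the behaviour described, not SimpleCrawler's actual implementation; the function and parameter names here (LinkExtractor, normalise, with_query, with_fragment) are made up for the example.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag, urlparse


class LinkExtractor(HTMLParser):
    """Collect the href value of every <a> tag, per the rule above."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def normalise(base_url, href, with_query=False, with_fragment=False):
    """Resolve a link against the page URL and, by default,
    drop ?query=strings and #fragments."""
    url = urljoin(base_url, href)
    if not with_fragment:
        url, _fragment = urldefrag(url)
    if not with_query:
        url = urlparse(url)._replace(query="").geturl()
    return url


parser = LinkExtractor()
parser.feed('<a href="/about?ref=nav#team">click here</a>')
print(normalise("https://www.example.com/", parser.links[0]))
# https://www.example.com/about
```

Passing with_query=True or with_fragment=True keeps the corresponding URL part, mirroring the -wq/--with-query and -wf/--with-fragment flags.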

Todo:

Use

  • Just type crawl <url> into your command line, e.g. crawl https://www.google.com
$ crawl --help
Usage: crawl [OPTIONS] URL

Options:
  -u, --user-agent TEXT
  -w, --max-workers INTEGER
  -t, --timeout INTEGER
  -h, --check-head
  -d, --disobey-robots
  -wq, --with-query
  -wf, --with-fragment
  --debug / --no-debug
  --help                     Show this message and exit.
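The -d/--disobey-robots flag toggles the robots.txt check described under Rules. The stdlib's urllib.robotparser shows the kind of check involved; this is a sketch of the behaviour, not SimpleCrawler's internals.

```python
from urllib.robotparser import RobotFileParser

# By default the crawler fetches /robots.txt and skips disallowed
# paths; -d/--disobey-robots would bypass a check like this.
rules = RobotFileParser()
rules.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rules.can_fetch("PyWebCrawler", "https://www.example.com/private/page"))
# False
print(rules.can_fetch("PyWebCrawler", "https://www.example.com/about"))
# True
```

Note that can_fetch takes the User-Agent string first, which is why the -u/--user-agent value (default PyWebCrawler) matters for robots.txt matching as well as for request headers.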

OR from code

from simple_crawler import Crawler

crawler = Crawler()
found_links = crawler.crawl('https://www.example.com/')
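Behind crawl(), each fetched response is filtered by the status and mime-type rules listed above. Roughly, the decision looks like the following; should_crawl is a hypothetical name for illustration, not part of the library's API.

```python
def should_crawl(status, content_type):
    """Follow a page only if it returned 200 OK and is text/html,
    per the Rules section (charset parameters are ignored)."""
    return status == 200 and content_type.split(";")[0].strip() == "text/html"


print(should_crawl(200, "text/html; charset=utf-8"))  # True
print(should_crawl(404, "text/html"))                 # False
print(should_crawl(200, "application/pdf"))           # False
```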


Download files

Source Distribution

  • SimpleCrawler-1.0.1.tar.gz (16.9 kB)

Built Distribution

  • SimpleCrawler-1.0.1-py3-none-any.whl (20.8 kB)
