
A library for crawling websites

Project Description

http-crawler is a library for crawling websites. It uses requests to speak HTTP.

Installation

Install with pip:

$ pip install http-crawler

Usage

The http_crawler module provides one generator function, crawl.

crawl is called with a URL, and yields instances of requests' Response class.

crawl will request the page at the given URL, and will extract all URLs from the response. It will then make a request for each of those URLs, and will repeat the process until it has requested every URL linked to from pages on the original URL’s domain. It will not extract or process URLs from any page with a different domain to the original URL.
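
For illustration, here is a simplified sketch of that crawling loop. This is not the library's actual implementation: it assumes links appear as href="..." attributes and extracts them with a naive regular expression, but it follows the same rule described above of requesting every URL it encounters while only extracting further links from pages on the original domain.

import re
from urllib.parse import urljoin, urlparse

import requests


def crawl_sketch(start_url):
    # Breadth-first crawl: request every discovered URL, but only extract
    # further links from pages on the original URL's domain.
    start_domain = urlparse(start_url).netloc
    seen = {start_url}
    queue = [start_url]
    while queue:
        url = queue.pop(0)
        rsp = requests.get(url)
        yield rsp
        if urlparse(url).netloc != start_domain:
            continue
        for href in re.findall(r'href="([^"]+)"', rsp.text):
            absolute = urljoin(url, href)
            # Skip non-HTTP links such as mailto: or javascript: URLs.
            if urlparse(absolute).scheme not in ('http', 'https'):
                continue
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)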

For instance, this is how you would use crawl to find and log any broken links on a site:

>>> from http_crawler import crawl
>>> for rsp in crawl('http://www.example.com'):
...     if rsp.status_code != 200:
...         print('Got {} at {}'.format(rsp.status_code, rsp.url))

crawl has a number of options:

  • follow_external_links (default True) If set, crawl will make a request for every URL it encounters, including ones with a different domain to the original URL. If not set, crawl will ignore all URLs that have a different domain to the original URL. In either case, crawl will not extract further URLs from a page with a different domain to the original URL.
  • ignore_fragments (default True) If set, crawl will ignore the fragment part of any URL. This means that if crawl encounters http://domain/path#anchor, it will make a request for http://domain/path. Moreover, it means that if crawl encounters http://domain/path#anchor1 and http://domain/path#anchor2, it will only make one request.
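
For example, assuming both options are passed as keyword arguments to crawl, this would restrict the crawl to pages on the original domain while ignoring URL fragments:

>>> from http_crawler import crawl
>>> for rsp in crawl('http://www.example.com',
...                  follow_external_links=False,
...                  ignore_fragments=True):
...     print(rsp.status_code, rsp.url)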

Motivation

Why another crawling library? There are certainly plenty of Python tools for crawling websites, but everything I could find was either too complex or too simple, or pulled in too many dependencies.

http-crawler is designed to be a library and not a framework, so it should be straightforward to use in applications or other libraries.

Contributing

There are a handful of enhancements on the issue tracker that would be suitable for somebody looking to contribute to Open Source for the first time.

For instructions about making Pull Requests, see GitHub’s guide.

All contributions should include tests with 100% code coverage, and should comply with PEP 8. The project uses tox for running tests and checking code quality metrics.

To run the tests:

$ tox

Release History

0.1.6 (this version)
0.1.5
0.1.4
0.1.2
0.1.1
0.1.0
0.0.1

Download Files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

File Name                            File Type   Upload Date
http-crawler-0.1.6.tar.gz (3.6 kB)   Source      Aug 19, 2017
