
scrapy-wayback


Scrapy middleware with wayback machine support for more robust scrapers.

Dependencies :globe_with_meridians:

Installation :inbox_tray:

This is a Python package hosted on PyPI, so to install it simply run the following command:

pip install scrapy-wayback

Settings

WAYBACK_MACHINE_FALLBACK_ENABLED (Optional)

Whether to fall back to the Wayback Machine after a failed request (defaults to True).

The meta field to enable/disable this per request is wayback_machine_fallback_enabled.

WAYBACK_MACHINE_PROXY_ENABLED (Optional)

Whether to proxy requests through the Wayback Machine before they hit the original server (defaults to False).

The meta field to enable/disable this per request is wayback_machine_proxy_enabled.

WAYBACK_MACHINE_PROXY_FALLTHROUGH_ENABLED (Optional)

Whether, when a request proxied to the Wayback Machine fails, it should continue on to the original URL as normal (defaults to True). Note that this setting has no effect unless the Wayback Machine proxy is enabled.

The meta field to enable/disable this per request is wayback_machine_proxy_fallthrough_enabled.
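Each of the settings above has a matching per-request meta key, so individual requests can override the global configuration. A minimal sketch (the URL and the spider context are purely illustrative):

```python
# Per-request overrides for the Wayback Machine middleware, using the
# meta keys documented above.
meta = {
    "wayback_machine_fallback_enabled": False,  # no fallback for this request
    "wayback_machine_proxy_enabled": True,      # but do serve it from the archive
}

# Inside a spider you would attach it to the request, e.g.:
# yield scrapy.Request("https://example.com/page", meta=meta)
```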

Usage example :eyes:

To use this plugin, simply add the following to your settings:

DOWNLOADER_MIDDLEWARES = {
    "waybackmiddleware.middleware.WaybackMachineDownloaderMiddleware": 630
}

This will immediately allow you to begin using the Wayback Machine as a fallback whenever one of your requests fails. To use it as a proxy instead, add the following to your settings:

WAYBACK_MACHINE_PROXY_ENABLED = True

This will make every request hit the Wayback Machine for a response first, before hitting the original server. If you want to avoid hitting the original server entirely, also put the following in your settings:

WAYBACK_MACHINE_PROXY_FALLTHROUGH_ENABLED = False

This will ensure that your scraper never hits the original servers, only what the Wayback Machine has recorded.
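Putting the pieces together, an archive-only configuration would look like this in settings.py (these are exactly the values shown above):

```python
# settings.py sketch: route every request through the Wayback Machine
# and never fall through to the live site.
DOWNLOADER_MIDDLEWARES = {
    "waybackmiddleware.middleware.WaybackMachineDownloaderMiddleware": 630,
}
WAYBACK_MACHINE_PROXY_ENABLED = True               # archive first
WAYBACK_MACHINE_PROXY_FALLTHROUGH_ENABLED = False  # never hit the origin
```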

Whenever you receive a response from the Wayback Machine middleware, it will be a WaybackMachineResponse. This subclasses scrapy.http.HtmlResponse, so you can use it like a normal response; however, it has some extra goodies:

def parse(self, response):
    # Walk backwards through the archived snapshots of this page.
    while response is not None:
        print(f"Response {response.request.url} at {response.timestamp.isoformat()}")
        response = response.earlier_response()

This lets you walk through the history one snapshot at a time to reach earlier captures of the page. If you are interested in the response that the Wayback Machine middleware recovered, use the original_response attribute.
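For example, a parse callback could inspect the recovered response like this (a sketch; the getattr guard covers responses that did not come through the middleware):

```python
def parse(self, response):
    # A WaybackMachineResponse carries the response the middleware
    # recovered in its original_response attribute; a plain response
    # from the live site will not have it.
    recovered = getattr(response, "original_response", None)
    if recovered is not None:
        print(f"Middleware recovered {recovered.url} ({recovered.status})")
```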

License :memo:

The project is available under the MIT License.

