Scrapy Middleware that allows a Scrapy Spider to filter requests.

Project description

Scrapy-link-filter

Spider Middleware that allows a Scrapy Spider to filter requests. Similar functionality already exists in the CrawlSpider, via its Rules, and in the RobotsTxtMiddleware, but with a twist: this middleware allows defining rules dynamically, per request, or as spider arguments instead of project settings.

Install

This project requires Python 3.6+ and pip. Using a virtual environment is strongly encouraged.

$ pip install git+https://github.com/croqaz/scrapy-link-filter

Usage

For the middleware to be enabled as a Spider Middleware, it must be added to the project's settings.py:

SPIDER_MIDDLEWARES = {
    # maybe other Spider Middlewares ...
    # can go after DepthMiddleware: 900
    'scrapy_link_filter.middleware.LinkFilterMiddleware': 950,
}

Or it can be enabled as a Downloader Middleware, in the project's settings.py:

DOWNLOADER_MIDDLEWARES = {
    # maybe other Downloader Middlewares ...
    # can go before RobotsTxtMiddleware: 100
    'scrapy_link_filter.middleware.LinkFilterMiddleware': 50,
}

The rules must be defined either on the spider instance, as a spider.extract_rules dict, or per request, in request.meta['extract_rules']. Internally, the extract_rules dict is converted into a LinkExtractor, which is used to match the requests.
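
As an illustration of that conversion (a sketch based on the description above; the exact internals may differ), the extract_rules keys map directly onto LinkExtractor keyword arguments, and a request URL can be checked with LinkExtractor.matches():

from scrapy.linkextractors import LinkExtractor

# The extract_rules dict unpacks into LinkExtractor keyword arguments.
rules = {"allow_domains": "example.com", "allow": "/en/items/"}
extractor = LinkExtractor(**rules)

# matches() tests a single URL against the allow/deny rules.
extractor.matches("https://example.com/en/items/1")  # True
extractor.matches("https://example.com/fr/items/1")  # False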

Note that the URL matching is case-sensitive by default, which works in most cases. To enable case-insensitive matching, specify a "(?i)" inline flag at the beginning of each "allow" or "deny" rule that needs to be case-insensitive.
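
For example, this rule (an illustrative pattern) matches /en/items/, /EN/ITEMS/, or any other casing:

extract_rules = {"allow": "(?i)/en/items/"}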

Example of a specific allow filter, on a spider instance:

from scrapy.spiders import Spider

class MySpider(Spider):
    extract_rules = {"allow_domains": "example.com", "allow": "/en/items/"}

Or a specific deny filter, inside a request meta:

request.meta['extract_rules'] = {
    "deny_domains": ["whatever.com", "ignore.me"],
    "deny": ["/privacy-policy/?$", "/about-?(us)?$"]
}
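
In practice, per-request rules are attached where the requests are created, e.g. in a spider callback. A minimal sketch (the spider name, URLs, and selector are placeholders):

import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com/"]

    def parse(self, response):
        for href in response.css("a::attr(href)").getall():
            yield scrapy.Request(
                response.urljoin(href),
                callback=self.parse,
                # requests matching these deny rules are dropped by the middleware
                meta={"extract_rules": {
                    "deny_domains": ["ignore.me"],
                    "deny": ["/privacy-policy/?$"],
                }},
            )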

The possible fields are:

  • allow_domains and deny_domains - one or more domains to specifically limit the crawl to, or to specifically reject
  • allow and deny - one or more sub-strings or patterns to specifically allow, or reject

All fields can be defined as a string, list, set, or tuple.
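
For example, these rule sets are all equivalent (illustrative values):

spider.extract_rules = {"allow": "/en/"}     # string
spider.extract_rules = {"allow": ["/en/"]}   # list
spider.extract_rules = {"allow": ("/en/",)}  # tuple
spider.extract_rules = {"allow": {"/en/"}}   # set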


License

BSD 3-Clause © Cristi Constantin.

Download files

Download the file for your platform.

Files for scrapy-link-filter, version 0.2.0:

  • scrapy_link_filter-0.2.0-py3-none-any.whl (6.2 kB) - Wheel, Python py3
  • scrapy-link-filter-0.2.0.tar.gz (5.4 kB) - Source
