Scrapy Middleware that allows a Scrapy Spider to filter requests.


Spider Middleware that allows a Scrapy Spider to filter requests. Similar functionality already exists in the CrawlSpider (via its Rules) and in the RobotsTxtMiddleware, but with a twist: this middleware allows defining rules dynamically, per request, or as spider arguments, instead of project settings.


This project requires Python 3.6+ and pip. Using a virtual environment is strongly encouraged.

$ pip install git+


For the middleware to be enabled as a Spider Middleware, it must be added in the project settings:

    SPIDER_MIDDLEWARES = {
        # maybe other Spider Middlewares ...
        # can go after DepthMiddleware: 900
        'scrapy_link_filter.middleware.LinkFilterMiddleware': 950,
    }

Or, it can be enabled as a Downloader Middleware, in the project settings:

    DOWNLOADER_MIDDLEWARES = {
        # maybe other Downloader Middlewares ...
        # can go before RobotsTxtMiddleware: 100
        'scrapy_link_filter.middleware.LinkFilterMiddleware': 50,
    }

The rules must be defined either in the spider instance, in a spider.extract_rules dict, or per request, in request.meta['extract_rules']. Internally, the extract_rules dict is converted into a LinkExtractor, which is used to match the requests.
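The filtering that the resulting LinkExtractor performs can be sketched roughly as follows. This is a simplified stand-in using only the standard library, not the actual Scrapy implementation; the `matches` helper and its exact semantics are illustrative assumptions:

```python
import re
from urllib.parse import urlparse

def matches(url, extract_rules):
    """Rough sketch of allow/deny matching; the real middleware
    delegates this to scrapy's LinkExtractor."""
    host = urlparse(url).netloc
    # each field accepts a single string or an iterable of strings
    as_list = lambda v: [v] if isinstance(v, str) else list(v)
    allow_domains = as_list(extract_rules.get("allow_domains", []))
    deny_domains = as_list(extract_rules.get("deny_domains", []))
    allow = as_list(extract_rules.get("allow", []))
    deny = as_list(extract_rules.get("deny", []))
    # domain filters: limit to allowed domains, reject denied ones
    if allow_domains and not any(host == d or host.endswith("." + d)
                                 for d in allow_domains):
        return False
    if any(host == d or host.endswith("." + d) for d in deny_domains):
        return False
    # pattern filters: deny wins, then allow must match (if given)
    if any(re.search(p, url) for p in deny):
        return False
    if allow and not any(re.search(p, url) for p in allow):
        return False
    return True
```

For example, `matches("https://example.com/en/items/1", {"allow": "/en/items/"})` is `True`, while a URL hitting a `deny` pattern is rejected.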

Note that the URL matching is case-sensitive by default, which works in most cases. To enable case-insensitive matching, you can specify a "(?i)" inline flag at the beginning of each "allow" or "deny" rule that needs to be case-insensitive.
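For instance, with a hypothetical "/en/items/" rule, the standard `re` module illustrates what the inline flag changes:

```python
import re

# Without the inline flag, matching is case-sensitive:
assert re.search(r"/en/items/", "https://example.com/EN/Items/1") is None

# "(?i)" at the beginning of the pattern makes it case-insensitive:
assert re.search(r"(?i)/en/items/", "https://example.com/EN/Items/1") is not None
```

So a case-insensitive rule would be written as `{"allow": "(?i)/en/items/"}`.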

Example of a specific allow filter, on a spider instance:

from scrapy.spiders import Spider

class MySpider(Spider):
    extract_rules = {"allow_domains": "", "allow": "/en/items/"}

Or a specific deny filter, inside a request meta:

request.meta['extract_rules'] = {
    "deny_domains": ["", ""],
    "deny": ["/privacy-policy/?$", "/about-?(us)?$"],
}

The possible fields are:

  • allow_domains and deny_domains - one or more domains to specifically limit to, or specifically reject
  • allow and deny - one or more sub-strings or patterns to specifically allow, or reject

All fields can be defined as string, list, set, or tuple.
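For illustration, all of the following spellings are equivalent; a hypothetical `as_list` helper (not part of the library's API) shows the kind of normalization applied before the patterns are used:

```python
def as_list(value):
    """Normalize a rule field (str, list, set, or tuple) into a list."""
    if isinstance(value, str):
        return [value]
    return list(value)

# All four spellings describe the same single "allow" pattern:
assert as_list("/en/items/") == ["/en/items/"]
assert as_list(["/en/items/"]) == ["/en/items/"]
assert as_list(("/en/items/",)) == ["/en/items/"]
assert as_list({"/en/items/"}) == ["/en/items/"]
```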


BSD3 © Cristi Constantin.

Files for scrapy-link-filter, version 0.2.0:

| Filename | Size | File type | Python version |
| --- | --- | --- | --- |
| scrapy_link_filter-0.2.0-py3-none-any.whl | 6.2 kB | Wheel | py3 |
| scrapy-link-filter-0.2.0.tar.gz | 5.4 kB | Source | None |
