An alternative to the built-in ItemLoader of Scrapy which focuses on maintainability of fallback parsers.

Project description


Overview

This package improves on Scrapy's built-in ItemLoader by adding features that focus on the long-term maintainability of the spider.

It lets developers track how often each parser is used during a crawl, making it possible to safely remove obsolete css/xpath fallback rules.

Motivation

By default, Scrapy's ItemLoader supports adding multiple css/xpath rules per field, giving developers a convenient way to keep up with site changes.

However, some sites change their layout more often than others, and some run A/B tests for weeks or months, forcing developers to accommodate all of those variations.

These fallback css/xpath rules quickly become obsolete and fill the project with potentially dead code, posing a threat to the spiders' long-term maintenance.

Original idea proposal: https://github.com/scrapy/scrapy/issues/3795

Usage

from scrapy_loader_upkeep import ItemLoader

class SiteItemLoader(ItemLoader):
    pass

Using it inside a spider callback would look like:

def parse(self, response):
    loader = SiteItemLoader(response=response, stats=self.crawler.stats)

Usage is identical to the stock ItemLoader, except for injecting the stats dependency, which is required to keep track of how the parser rules are used.

This only works for the following ItemLoader methods:

  • add_css()

  • replace_css()

  • add_xpath()

  • replace_xpath()
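For illustration, the stat keys these methods emit appear to follow the pattern parser/<loader class>/<field>/<selector type>/<position>, with a /missing suffix when a rule matched nothing. A hypothetical helper mirroring that format, assuming only what the example output in this README shows (it is not part of the library's API):

```python
# Hypothetical helper that mirrors the stat key format observed in the
# crawl output shown in this README; NOT part of scrapy-loader-upkeep.
def stat_key(loader_cls, field, selector_type, position, missing=False):
    key = f"parser/{loader_cls}/{field}/{selector_type}/{position}"
    return f"{key}/missing" if missing else key

print(stat_key("QuotesItemLoader", "quote", "css", 1, missing=True))
# parser/QuotesItemLoader/quote/css/1/missing
```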

Basic Spider Example

This is taken from the examples/ directory.

$ scrapy crawl quotestoscrape_simple_has_missing

This should produce output like the following in the stats:

2019-06-16 14:32:32 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{ ...
  'parser/QuotesItemLoader/author/css/1': 10,
  'parser/QuotesItemLoader/quote/css/1/missing': 10,
  'parser/QuotesItemLoader/quote/css/2': 10
  ...
}

In this example, we can see that the first css rule for the quote field never matched during the scrape, while the second rule matched on every page.
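Since the stats are plain counters, a finished crawl can be scanned for rules that only ever missed. A minimal sketch, using the counter values from the example output above and assuming stats is the kind of dict that crawler.stats.get_stats() returns:

```python
# Sketch: find fallback rules that never matched in a finished crawl.
# `stats` uses the counter values from the example output above.
stats = {
    "parser/QuotesItemLoader/author/css/1": 10,
    "parser/QuotesItemLoader/quote/css/1/missing": 10,
    "parser/QuotesItemLoader/quote/css/2": 10,
}

def dead_rules(stats):
    """Return rule keys that have a '/missing' count but no matched
    counterpart, i.e. rules that never matched anything."""
    missing = {k[: -len("/missing")] for k in stats if k.endswith("/missing")}
    matched = {k for k in stats if not k.endswith("/missing")}
    return sorted(missing - matched)

print(dead_rules(stats))  # ['parser/QuotesItemLoader/quote/css/1']
```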

New Feature

In the example above, we are limited to the positional context of the add_css(), add_xpath(), etc. calls during execution.

In many cases developers maintain a large spider with many different parsers to handle the site's varying layouts, so it makes sense to have better context on what a given parser does or is for.

A new optional name parameter is supported to provide more context around a given parser. It supports the two main styles of declaring fallback parsers:

  1. multiple calls

loader.add_css('NAME', 'h1::text', name='Name from h1')
loader.add_css('NAME', 'meta[value="title"]::attr(content)', name="Name from meta tag")

would result in something like:

{ ...
  'parser/QuotesItemLoader/NAME/css/1/Name from h1': 8,
  'parser/QuotesItemLoader/NAME/css/1/Name from h1/missing': 2,
  'parser/QuotesItemLoader/NAME/css/2/Name from meta tag': 7,
  'parser/QuotesItemLoader/NAME/css/2/Name from meta tag/missing': 3,
  ...
}
  2. grouped parsers in a single call

loader.add_css(
    'NAME',
    [
        'h1::text',
        'meta[value="title"]::attr(content)',
    ],
    name='NAMEs at the main content')
loader.add_css(
    'NAME',
    [
        'footer .name::text',
        'div.page-end span.name::text',
    ],
    name='NAMEs at the bottom of the page')

would result in something like:

{ ...
  'parser/QuotesItemLoader/NAME/css/1/NAMEs at the main content': 8,
  'parser/QuotesItemLoader/NAME/css/1/NAMEs at the main content/missing': 2,
  'parser/QuotesItemLoader/NAME/css/2/NAMEs at the main content': 7,
  'parser/QuotesItemLoader/NAME/css/2/NAMEs at the main content/missing': 3,
  'parser/QuotesItemLoader/NAME/css/3/NAMEs at the bottom of the page': 8,
  'parser/QuotesItemLoader/NAME/css/3/NAMEs at the bottom of the page/missing': 2,
  'parser/QuotesItemLoader/NAME/css/4/NAMEs at the bottom of the page': 7,
  'parser/QuotesItemLoader/NAME/css/4/NAMEs at the bottom of the page/missing': 3,
  ...
}

The latter style is useful for grouping fallback parsers that are closely related in terms of layout or arrangement in the page.
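These named counters also make it easy to compute a miss rate per rule; a rule whose miss rate stays high across crawls is a pruning candidate. A sketch using the grouped-parser numbers shown above (the helper is illustrative, not part of the library):

```python
# Sketch: compute a miss rate per parser rule from a stats dict,
# using the grouped-parser counters shown above.
stats = {
    "parser/QuotesItemLoader/NAME/css/1/NAMEs at the main content": 8,
    "parser/QuotesItemLoader/NAME/css/1/NAMEs at the main content/missing": 2,
    "parser/QuotesItemLoader/NAME/css/2/NAMEs at the main content": 7,
    "parser/QuotesItemLoader/NAME/css/2/NAMEs at the main content/missing": 3,
}

def miss_rates(stats):
    """Map each matched-rule key to missed / (matched + missed)."""
    rates = {}
    for key, matched in stats.items():
        if key.endswith("/missing"):
            continue
        missed = stats.get(key + "/missing", 0)
        rates[key] = missed / (matched + missed)
    return rates

rates = miss_rates(stats)
print(rates["parser/QuotesItemLoader/NAME/css/2/NAMEs at the main content"])
# 0.3
```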

Requirements

Python 3.6+

