
Scrapy decorator for inline requests

Project description

This module provides a decorator that allows writing Scrapy spider callbacks which perform multiple requests, without the need to write a separate callback for each request.

The code is experimental: it might not work in all cases and might even be hard to debug.
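For contrast, here is the callback-chaining style that the decorator avoids: each follow-up request needs its own callback, and partial item state is threaded through the request's meta dict. This is a self-contained sketch, so Request and Response below are tiny stand-ins for Scrapy's real classes, and the spider/extraction names are illustrative:

```python
# Minimal stand-ins for scrapy.http.Request / Response, so the
# sketch runs without Scrapy installed.
class Request:
    def __init__(self, url, meta=None, callback=None):
        self.url, self.meta, self.callback = url, meta or {}, callback

class Response:
    def __init__(self, url, meta=None):
        self.url, self.meta = url, meta or {}

class ChainedSpider:
    """Classic style: one callback per request, state passed via meta."""

    def parse_item(self, response):
        item = {'url': response.url}
        yield Request(response.url + '?info', meta={'item': item},
                      callback=self.parse_info)

    def parse_info(self, response):
        item = response.meta['item']
        item['info'] = 'info from ' + response.url
        yield Request(response.url + '?pictures', meta={'item': item},
                      callback=self.parse_pictures)

    def parse_pictures(self, response):
        item = response.meta['item']
        item['pictures'] = 'pictures from ' + response.url
        yield item
```

With three requests the logic is already spread over three methods; the decorator shown below collapses this into a single sequential callback.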


from scrapy.http import Request
from scrapy.contrib.spiders import CrawlSpider
from scrapy import log

from inline_requests import inline_requests

class MySpider(CrawlSpider):

    @inline_requests
    def parse_item(self, response):
        item = self.build_item(response)

        # scrape more information
        response = yield Request(response.url + '?info')
        item['info'] = self.extract_info(response)

        # scrape pictures
        response = yield Request(response.url + '?pictures')
        item['pictures'] = self.extract_pictures(response)

        # a request that might fail (dns error, network timeout, error 404/500, etc)
        try:
            response = yield Request(response.url + '?protected')
        except Exception as e:
            log.err(e, spider=self)
        else:
            item['protected'] = self.extract_protected_info(response)

        # finally yield the item
        yield item
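Conceptually, the decorator drives the callback as a generator: each yielded request is downloaded, and the generator is resumed with the response via send(). The trampoline below is a minimal, framework-free sketch of that resumption loop; drive, fetch, and the plain-string "requests" are illustrative assumptions, not part of the library's API:

```python
def drive(generator_fn, fetch):
    """Run a generator-based callback, feeding each yielded
    'request' back into the generator as a 'response'."""
    gen = generator_fn()
    results = []
    try:
        yielded = next(gen)
        while True:
            if isinstance(yielded, str):   # pretend strings are requests (URLs)
                response = fetch(yielded)  # stand-in for the downloader
                yielded = gen.send(response)  # resume callback with the response
            else:
                results.append(yielded)    # anything else is a finished item
                yielded = next(gen)
    except StopIteration:
        pass
    return results

# Usage: a callback that "requests" two URLs inline, like parse_item above.
def callback():
    body = yield "http://example/info"
    item = {"info": body}
    body = yield "http://example/pictures"
    item["pictures"] = body
    yield item

items = drive(callback, lambda url: "body of " + url)
```

The real implementation does the same dance asynchronously through Scrapy's engine, which is why the decorated callback must be a generator.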

Example Project

The example directory includes an example spider for StackOverflow. To run it:

cd example
scrapy crawl stackoverflow

Requirements

  • Python 2.6+

  • Scrapy 0.14+


Download files


Source Distribution

scrapy-inline-requests-0.1.2.tar.gz (2.6 kB)
