Scrapy decorator for inline requests
This module provides a decorator that lets you write Scrapy spider callbacks which perform multiple requests, without having to write a separate callback for each request.

The code is experimental; it might not work in all cases and may be hard to debug.
Example:

    from inline_requests import inline_requests
    from scrapy.contrib.spiders import CrawlSpider
    from scrapy.http import Request
    from scrapy import log

    class MySpider(CrawlSpider):

        ...

        @inline_requests
        def parse_item(self, response):
            item = self.build_item(response)

            # scrape more information
            response = yield Request(response.url + '?info')
            item['info'] = self.extract_info(response)

            # scrape pictures
            response = yield Request(response.url + '?pictures')
            item['pictures'] = self.extract_pictures(response)

            # a request that might fail (dns error, network timeout, error 404/500, etc)
            try:
                response = yield Request(response.url + '?protected')
            except Exception as e:
                log.err(e, spider=self)
            else:
                item['protected'] = self.extract_protected_info(response)

            # finally yield the item
            yield item
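To see why this works, it helps to look at the generator-resume pattern the decorator relies on: the callback yields a request, the driver downloads it, and the response is sent back into the generator as the result of the yield expression. The following is a minimal, self-contained sketch of that pattern only; it is not the library's actual implementation, and `fake_fetch`, `inline_requests_sketch`, and the string-based "request" check are stand-ins invented for illustration (the real decorator integrates with Scrapy's asynchronous downloader).

```python
def fake_fetch(url):
    """Hypothetical downloader stand-in: returns a fake 'response' string."""
    return "response for %s" % url

def inline_requests_sketch(callback):
    """Drive the generator, sending each downloaded 'response' back into it."""
    def wrapper(url):
        gen = callback(url)
        items = []
        try:
            yielded = next(gen)
            while True:
                if isinstance(yielded, str) and yielded.startswith("http"):
                    # Treat the yielded value as a request: "download" it and
                    # resume the generator with the response, which becomes
                    # the value of `response = yield ...` in the callback.
                    yielded = gen.send(fake_fetch(yielded))
                else:
                    # Anything else is treated as a scraped item.
                    items.append(yielded)
                    yielded = next(gen)
        except StopIteration:
            pass
        return items
    return wrapper

@inline_requests_sketch
def parse_item(url):
    # The yield expression pauses here until the driver sends the response back.
    response = yield url + "?info"
    yield {"info": response}

print(parse_item("http://example.com"))
```

The same mechanism (generator `send`) is what lets the decorated Scrapy callback read like sequential code while each request is still dispatched through the engine.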
Example Project
The example directory includes an example spider for StackOverflow.com:

    cd example
    scrapy crawl stackoverflow
Requirements
Python 2.6+
Scrapy 0.14+
Source Distribution
Hashes for scrapy-inline-requests-0.1.1.tar.gz
Algorithm   | Hash digest
------------|------------
SHA256      | 2b3e991b5b1644bc025b2938f5ae726a152cbdb8bd29fe540de6423aadceb978
MD5         | 95ae110521a3493a99b23f3e954b9007
BLAKE2b-256 | bccfc11635cd2a707f0947fe071336cc039b43139d64487be3950de99430817d