
Package for buildout based scrapy spider development

Project description

This package provides core components for buildout based scrapy spider development. Such scrapy spider packages can be installed, scheduled and processed by the MongoDB based s01.worker daemon using the JSON-RPC proxy located in the s01.client package. The package also provides some recipes which allow external files to be used as scrapy settings.
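As a rough illustration of the buildout idea described above (the recipe entry point and option names below are invented for this sketch and are not the package's documented API), a spider project might wire an external scrapy settings file into a part like:

```ini
[buildout]
parts = settings

[settings]
; hypothetical recipe name and options, for illustration only;
; consult the package itself for the real section layout
recipe = s01.scrapy:settings
settings-file = ${buildout:directory}/etc/scrapy-settings.py
```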

CHANGES

0.16.2 (2012-11-18)

  • make it compatible with scrapy 0.16.2. The API changed: the removed settings export is replaced by exposing the settings as a settings property on s01.scrapy.util. This is required because we reference the settings to set up a global MongoDB connection for logging
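A minimal sketch of the module-level settings holder this entry describes, assuming a shape similar to the `settings` property on `s01.scrapy.util` (function names here are illustrative, not the package's actual API):

```python
# Module-level holder so other components (e.g. a MongoDB logging
# handler) can reach the crawler settings after startup.
_settings = None

def set_settings(settings):
    """Store the crawler settings for later global access."""
    global _settings
    _settings = settings

def get_settings():
    """Return the stored settings, e.g. to set up a global MongoDB
    connection for logging."""
    if _settings is None:
        raise RuntimeError("settings not initialized yet")
    return _settings
```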

0.14.4.1 (2012-07-01)

  • implemented support for hooking up a custom logger by supporting a bootstrap recipe section

0.14.4 (2012-06-30)

  • make it compatible with scrapy 0.14.4, adjust cmdline

0.12.4 (2011-09-10)

  • bugfix: fix bad escape if x is used in a path on Windows

0.12.3 (2011-08-29)

  • implemented httpConverter converter which only allows http and https URIs

  • improved the email and uriConverter converters; they now validate and return None if an invalid format is used
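The converter behaviour described above can be sketched as follows; this is an illustrative re-implementation of the httpConverter idea (only http/https URIs pass, anything invalid yields None), not the package's actual code:

```python
from urllib.parse import urlparse

def http_converter(value):
    """Return the URI if it is a valid http/https URI, else None."""
    if not isinstance(value, str):
        return None
    parts = urlparse(value.strip())
    # Require both an http(s) scheme and a host part.
    if parts.scheme in ("http", "https") and parts.netloc:
        return value.strip()
    return None
```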

0.12.2 (2011-08-27)

  • implemented a new test recipe which is able to set up a zope.testrunner including scrapy settings

0.12.1 (2011-08-25)

  • implemented development helper scripts which can dump log and tmp data exported with our TestExporter

0.12.0 (2011-08-19)

  • nail scrapy release version to 0.12.0.2546

  • removed unused dependencies and imports

  • initial release

0.0.7 (2011-01-02)

  • alpha version released for development and testing the tool chain

  • write logging.ERROR output to sys.stderr where we can read it in a subprocess
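The stderr routing this entry describes can be sketched with the standard logging module; the handler setup and logger name below are illustrative (the changelog elsewhere mentions s01.worker as the default handler name), not the package's actual code:

```python
import logging
import sys

# Route ERROR-level records to sys.stderr so a parent process that
# spawned the spider as a subprocess can capture and parse them.
handler = logging.StreamHandler(sys.stderr)
handler.setLevel(logging.ERROR)

logger = logging.getLogger("s01.worker")
logger.addHandler(handler)
logger.setLevel(logging.ERROR)

logger.error("spider failed")  # written to stderr, readable by the parent
```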

0.0.6 (2010-12-31)

  • alpha version released for development and testing the tool chain

  • print logging.ERROR to stdout, which is required for error handling in a subprocess

0.0.5 (2010-12-29)

  • alpha version released for development and testing the tool chain

  • implemented a different scrapy item and field concept: use a field property instead of a dict based item and field. Implemented ScrapyFieldProperty and a ScrapyItemBase class. Added tests showing how the scrapy item and field work, including converter and serializer.

  • implemented new extractor which can handle the new scrapy item and field concept

  • implemented different basic ScrapyFieldProperty converter methods
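The property-based field concept from 0.0.5 can be sketched with a Python descriptor that applies a converter on assignment, in contrast to scrapy's dict-based items; the class bodies below are a minimal illustration of the idea, not the package's actual ScrapyFieldProperty/ScrapyItemBase implementation:

```python
class ScrapyFieldProperty:
    """Descriptor-based field that runs a converter when a value is set."""

    def __init__(self, converter=None):
        self.converter = converter

    def __set_name__(self, owner, name):
        self.attr = "_" + name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.attr, None)

    def __set__(self, obj, value):
        if self.converter is not None:
            value = self.converter(value)
        setattr(obj, self.attr, value)


class ScrapyItemBase:
    """Base class for property-based items (marker base in this sketch)."""


class ProductItem(ScrapyItemBase):
    # Converter strips surrounding whitespace on assignment.
    title = ScrapyFieldProperty(converter=str.strip)


item = ProductItem()
item.title = "  Widget  "
```

Because conversion happens in `__set__`, spiders assign raw extracted strings and always read back normalized values.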

0.0.4 (2010-12-22)

  • alpha version released for development and testing the tool chain

  • remove spider name from crawl recipe

0.0.3 (2010-11-29)

  • alpha version released for development and testing the tool chain

  • fix hex data parts in settings content

  • use s01.worker as default logging handler name

0.0.2 (2010-11-29)

  • alpha version released for development and testing the tool chain

  • added settings recipe

0.0.1 (2010-11-21)

  • alpha version released for development and testing the tool chain

Project details


Download files


Source Distribution

s01.scrapy-0.16.2.zip (33.4 kB)


File details

Details for the file s01.scrapy-0.16.2.zip.

File metadata

  • Download URL: s01.scrapy-0.16.2.zip
  • Size: 33.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for s01.scrapy-0.16.2.zip

  • SHA256: 39088af344868858d015f20e36f04ae6821a84ac5157afa24da6b33b3d71f6d8
  • MD5: 91f88ff2faf529a0248ffa324d1ad5a8
  • BLAKE2b-256: bf2a67ced98607a8473fb2cf7b237de31e94cb962fe409d45addeb28ca4192bf

