Put Scrapy spiders behind an HTTP API
Project description
An HTTP server that provides an API for scheduling Scrapy spiders and making requests with them.
Allows you to easily add an HTTP API to your existing Scrapy project. All Scrapy project components (e.g. middleware, pipelines, extensions) are supported out of the box. You simply run Scrapyrt in your Scrapy project directory and it starts an HTTP server, allowing you to schedule your spiders and get spider output in JSON format.
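As a minimal sketch of what a request to a running Scrapyrt instance looks like: after starting `scrapyrt` in the project directory (it listens on port 9080 by default), you schedule a spider via the `/crawl.json` endpoint with `spider_name` and `url` query parameters. The spider name `quotes` and target URL below are placeholder assumptions for illustration; substitute your own project's spider.

```python
# Build a request URL for a locally running Scrapyrt instance.
# Assumes the default host/port (localhost:9080); the spider name
# "quotes" and target URL are hypothetical placeholders.
from urllib.parse import urlencode

endpoint = "http://localhost:9080/crawl.json"
params = {
    "spider_name": "quotes",                  # name of a spider in your project
    "url": "http://quotes.toscrape.com/",     # URL the spider should crawl
}
request_url = f"{endpoint}?{urlencode(params)}"

# Fetching this URL (e.g. with urllib.request.urlopen) returns a JSON
# response containing the items the spider scraped.
print(request_url)
```

The response body is JSON, so the output can be consumed directly by other services without touching Scrapy's internals.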
Documentation
Documentation is available here: http://scrapyrt.readthedocs.org/en/latest/index.html
Support
Open source support is provided here on GitHub. Please create a question issue (i.e. an issue with the "question" label).
Commercial support is also available from Scrapinghub.
Development
Release
Use the bumpversion tool; e.g. to release a minor version, run:

```
bumpversion minor --verbose
git push origin master
git push origin <new_version_tag>
```
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
Hashes for scrapyrt-0.11.0-py2.py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5fb05b27bda1b6b270aac40bc827ed31da5b17a92864069cbcda6dc489ceb90b |
| MD5 | bc7a99a624366bfbead69979763ee8c0 |
| BLAKE2b-256 | ecf3e6010c5cc59c7acdce8589e696d3d8546985155107294f2a92ee3086da5b |