
Put Scrapy spiders behind an HTTP API

Project description

(ScrapyRT logo: https://raw.githubusercontent.com/scrapinghub/scrapyrt/master/artwork/logo.gif)

ScrapyRT (Scrapy realtime)

(CI, supported Python versions, PyPI version, license, downloads, and documentation badges)

Add an HTTP API to your Scrapy project in minutes.

You send a request to ScrapyRT with a spider name and URL, and in response you get the items collected by a spider visiting that URL.

  • All Scrapy project components (e.g. middleware, pipelines, extensions) are supported

  • You run ScrapyRT in a Scrapy project directory. It starts an HTTP server that lets you schedule spiders and get spider output as JSON.

Quickstart

1. install

> pip install scrapyrt

2. switch to your Scrapy project directory (e.g. the quotesbot project)

> cd my/project_path/is/quotesbot

3. launch ScrapyRT

> scrapyrt

4. run your spiders

> curl "localhost:9080/crawl.json?spider_name=toscrape-css&url=http://quotes.toscrape.com/"

5. run a more complex query, e.g. specify a callback for the Scrapy request and a zipcode argument for the spider

> curl --data '{"request": {"url": "http://quotes.toscrape.com/page/2/", "callback":"some_callback"}, "spider_name": "toscrape-css", "crawl_args": {"zipcode":"14000"}}' http://localhost:9080/crawl.json -v
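The same requests can be issued from Python. A minimal sketch using only the standard library, mirroring the GET call in step 4 (the endpoint and parameters come from the quickstart; a ScrapyRT instance from step 3 must be running before you uncomment the network call):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Build the same GET request as step 4: schedule the toscrape-css
# spider against a start URL and receive collected items as JSON.
params = {
    "spider_name": "toscrape-css",
    "url": "http://quotes.toscrape.com/",
}
endpoint = "http://localhost:9080/crawl.json?" + urlencode(params)
print(endpoint)

# With ScrapyRT running locally, the response body is a JSON object
# whose "items" key holds the scraped items:
# result = json.load(urlopen(endpoint))
# print(result["items"])
```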

ScrapyRT looks for a scrapy.cfg file to determine your project settings, and raises an error if it cannot find one. Note that you need to have all your project requirements installed.
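For reference, the scrapy.cfg that `scrapy startproject` generates looks roughly like this sketch (`quotesbot` stands in for your project's package name):

```ini
# scrapy.cfg -- project root marker that ScrapyRT looks for
[settings]
default = quotesbot.settings
```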

Note

  • ScrapyRT is not a replacement for Scrapyd, Scrapy Cloud, or other infrastructure for running long-running crawls

  • It is not suitable for long-running spiders; it is a good fit for spiders that fetch one response from a website and return items quickly

Documentation

Documentation is available on readthedocs.

Support

Open source support is provided on GitHub. Please create a question issue (i.e. an issue with the "question" label).

Commercial support is also available from Zyte.

License

ScrapyRT is offered under the BSD 3-Clause license.

Development

Development takes place on GitHub.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

scrapyrt-with-params-0.13.2.tar.gz (29.9 kB)

Uploaded: Source

Built Distribution

scrapyrt_with_params-0.13.2-py2.py3-none-any.whl (36.7 kB)

Uploaded: Python 2, Python 3

File details

Details for the file scrapyrt-with-params-0.13.2.tar.gz.

File metadata

  • Download URL: scrapyrt-with-params-0.13.2.tar.gz
  • Upload date:
  • Size: 29.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.3

File hashes

Hashes for scrapyrt-with-params-0.13.2.tar.gz
Algorithm Hash digest
SHA256 d1f8e8ef3dad7e70c7f9ef52fd59d6686f685ab019a2f1251e1d0beeab717212
MD5 b7c6e5fe2b1d5a88988971f1869082b4
BLAKE2b-256 f8d703ba0f557a886b917d169d8616a5e7b7dc7864ecf2ceb122f6bc0d33c4c3

See more details on using hashes here.

File details

Details for the file scrapyrt_with_params-0.13.2-py2.py3-none-any.whl.

File metadata

File hashes

Hashes for scrapyrt_with_params-0.13.2-py2.py3-none-any.whl
Algorithm Hash digest
SHA256 23ca75e89766b65a40fe588a7284aa8dc5e97f61da31de5f604cd4e1e2879dc1
MD5 f84d2ae84752d773c78ababec5e2d9c1
BLAKE2b-256 6cc822a3186e8af54954d0c050fbf865036871c7fe118341f1ff47a76fbc0a60

See more details on using hashes here.
