
Scrapy-based Web Crawler with a UI



Arachnado is a tool to crawl a specific website. It provides a Tornado-based HTTP API and a web UI for a Scrapy-based crawler.
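As an illustration of how a crawl might be started over such an HTTP API, here is a small Python sketch. The endpoint path, port, and payload shape below are assumptions made for the example, not Arachnado's documented API; check the project's source for the actual routes.

```python
import json
from urllib import request  # Python 3 stdlib; Arachnado itself targets Python 2.7

# Hypothetical endpoint -- the real API path and payload may differ.
API_URL = "http://127.0.0.1:8888/crawler/start"

def build_start_crawl_request(domain):
    """Build the (url, body) pair for a hypothetical 'start crawl' call."""
    body = json.dumps({"domain": domain}).encode("utf-8")
    return API_URL, body

def start_crawl(domain):
    """POST the request to a running Arachnado server and return its JSON reply."""
    url, body = build_start_crawl_request(domain)
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # requires a running server
        return json.load(resp)
```

`start_crawl("example.com")` only succeeds against a running server; `build_start_crawl_request` can be inspected offline.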

License is MIT.


Arachnado requires Python 2.7. To install it, use pip:

pip install arachnado

To install Arachnado with MongoDB support use this command:

pip install arachnado[mongo]


To start Arachnado, execute the arachnado command:

arachnado

and then visit the URL printed in the console (or whatever URL is configured).

To see available command-line options use

arachnado --help

Arachnado can be configured using a config file. Put it in one of the common locations (/etc/arachnado.conf, ~/.config/arachnado.conf or ~/.arachnado.conf) or pass the file name as an argument when starting the server:

arachnado --config ./my-config.conf
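For illustration, a minimal config file might look like the fragment below. The section and option names here are assumptions made for the sketch; verify them against the options Arachnado actually supports.

```ini
; my-config.conf -- hypothetical option names, check Arachnado's defaults
[arachnado]
host = 0.0.0.0
port = 8888
```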

For available options, check the default config file bundled with Arachnado.


To build Arachnado static assets, node.js and npm are required. Install all JavaScript requirements using npm by running the following command from the repo root:

npm install

then rebuild static files (we use Webpack):

npm run build

or auto-build static files on each change during development:

npm run watch


0.2 (2015-08-07)

Initial release.
