
Scrapy-based Web Crawler with a UI

Project description


Arachnado is a tool to crawl a specific website. It provides a Tornado-based HTTP API and a web UI for a Scrapy-based crawler.

License is MIT.


Arachnado requires Python 2.7. To install Arachnado use pip:

pip install arachnado

To install Arachnado with MongoDB support use this command:

pip install arachnado[mongo]


To start Arachnado run the arachnado command:

arachnado

and then visit the web UI in your browser (at whatever URL is configured).

To see the available command-line options use

arachnado --help

Arachnado can be configured using a config file. Put it in one of the common locations (/etc/arachnado.conf, ~/.config/arachnado.conf or ~/.arachnado.conf) or pass the file name as an argument when starting the server:

arachnado --config ./my-config.conf
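The config file uses plain INI syntax. A minimal sketch is shown below; the [arachnado] section name and the host/port option names are assumptions for illustration, so check the default config shipped with Arachnado for the real option names:

```ini
; my-config.conf -- hypothetical example; the section and option
; names here are assumed, not taken from the Arachnado docs
[arachnado]
host = 0.0.0.0
port = 8888
```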

For the available options, check the default config file bundled with Arachnado.


To build Arachnado's static assets, Node.js and npm are required. Install all JavaScript requirements using npm by running the following command from the repo root:

npm install

then rebuild static files (we use Webpack):

npm run build

or auto-build static files on each change during development:

npm run watch


0.2 (2015-08-07)

Initial release.

Download files


Filename (size)                             File type  Python version  Upload date
arachnado-0.2-py2-none-any.whl (156.3 kB)   Wheel      2.7             Aug 7, 2015
arachnado-0.2.tar.gz (140.0 kB)             Source     None            Aug 7, 2015
