
Scrapy-based Web Crawler with a UI

Project description


Arachnado is a tool to crawl a specific website. It provides a Tornado-based HTTP API and a web UI for a Scrapy-based crawler.
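Because the server exposes an HTTP API, crawls can also be scripted instead of started from the web UI. The sketch below is illustrative only: the /crawler/start path and the {"domain": ...} JSON payload are assumptions rather than documented API, so check your running instance for the real routes. (The example uses Python 3's urllib for brevity, although Arachnado itself targets Python 2.7.)

```python
import json
import urllib.request


def build_start_request(base_url, domain):
    """Build a POST request asking an Arachnado server to crawl ``domain``.

    The ``/crawler/start`` endpoint and the payload shape here are
    hypothetical -- adjust them to match your Arachnado version.
    """
    payload = json.dumps({"domain": domain}).encode("utf-8")
    return urllib.request.Request(
        base_url.rstrip("/") + "/crawler/start",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )


# Sending it requires a running Arachnado server, e.g.:
# response = urllib.request.urlopen(
#     build_start_request("http://localhost:8888", "example.com"))
```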

License is MIT.


Arachnado requires Python 2.7. To install it, use pip:

pip install arachnado

To install Arachnado with MongoDB support use this command:

pip install arachnado[mongo]


To start Arachnado, run the arachnado command:

arachnado

and then open the web UI in your browser at the address the server reports (or whatever URL is configured).

To see available command-line options use

arachnado --help

Arachnado can be configured using a config file. Put it in one of the common locations ('/etc/arachnado.conf', '~/.config/arachnado.conf' or '~/.arachnado.conf') or pass the file name as an argument when starting the server:

arachnado --config ./my-config.conf
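As a sketch, the config file is a standard INI-style file. The section and option names below are assumptions for illustration only; check the defaults shipped with your Arachnado version for the real ones:

```ini
; my-config.conf -- hypothetical example; section and option names
; are assumptions, not verified defaults
[arachnado]
; port the web UI and HTTP API listen on
port = 8888
```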

For the full list of available options, see arachnado --help and the default config file bundled with the package.


To build Arachnado's static assets, Node.js and npm are required. Install the JavaScript dependencies by running the following command from the repo root:

npm install

then rebuild static files (we use Webpack):

npm run build

or auto-build static files on each change during development:

npm run watch


0.2 (2015-08-07)

Initial release.
