
Yet another Python web scraping application


ScrapeMeAgain

ScrapeMeAgain is a Python 3 powered web scraper. It uses multithreading (via ThreadPoolExecutor) and multiprocessing to get the work done faster, and stores the collected data in an SQLite database.
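
The pattern described above can be sketched roughly as follows. This is only an illustration of combining thread and process pools with SQLite, not ScrapeMeAgain's actual internals; the URLs and the parse step are placeholders.

# Illustrative sketch only: threads for I/O-bound downloading, processes for
# CPU-bound parsing, SQLite for storage. Not ScrapeMeAgain's real internals.
import sqlite3
import urllib.request
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

URLS = ["https://example.com/page/1", "https://example.com/page/2"]  # placeholders


def download(url):
    # I/O bound, so a thread pool is enough.
    with urllib.request.urlopen(url, timeout=10) as response:
        return url, response.read().decode("utf-8", errors="replace")


def parse(item):
    # CPU bound, so it runs in a separate process.
    url, html = item
    return url, len(html)


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as threads:
        pages = list(threads.map(download, URLS))

    with ProcessPoolExecutor() as processes:
        records = list(processes.map(parse, pages))

    with sqlite3.connect("scraped.db") as db:
        db.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT, size INTEGER)")
        db.executemany("INSERT INTO pages VALUES (?, ?)", records)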

Installation

pip install scrapemeagain

System requirements

Tor in combination with Privoxy is used for anonymity (i.e. regular IP address changes).

Using Docker and Docker Compose is the preferred (and easier) way to use ScrapeMeAgain.

You have to manually install and set up Tor and Privoxy on your system if not using Docker. For further information about installation and configuration, refer to the official Tor and Privoxy documentation.
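
As a quick way to verify that the Tor and Privoxy chain works, you can route a request through Privoxy and check the reported IP. This is only a hedged example: it assumes Privoxy listens on its default port 8118 and forwards to Tor, and that the third-party requests library is installed; the IP-echo URL is just one of many such services.

# Sanity check that traffic is routed through Privoxy -> Tor.
# Assumes Privoxy's default port 8118; adjust to match your configuration.
import requests

PROXIES = {
    "http": "http://127.0.0.1:8118",
    "https": "http://127.0.0.1:8118",
}

# The printed address should be a Tor exit node, not your own IP.
print(requests.get("https://api.ipify.org", proxies=PROXIES, timeout=30).text)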

Usage

You have to provide your own database table description and an actual scraper class which must follow the BaseScraper interface. See examples/examplescraper for more details.
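
Purely as an illustration of the expected shape (the class, method, and column names below are hypothetical, and the SQLAlchemy-style table definition is an assumption; the real interface is defined by BaseScraper and demonstrated in examples/examplescraper):

# Hypothetical sketch only -- see examples/examplescraper for the actual
# BaseScraper interface and table description.
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Item(Base):
    # Your own table description for the collected data.
    __tablename__ = "items"

    id = Column(Integer, primary_key=True)
    url = Column(String)
    title = Column(String)


class MyScraper:  # would subclass scrapemeagain's BaseScraper
    base_url = "http://example.com"  # hypothetical attribute

    def get_item_urls(self, html):
        # Hypothetical hook: extract item URLs from a listing page.
        raise NotImplementedError

    def get_item_data(self, html):
        # Hypothetical hook: return a dict matching Item's columns.
        raise NotImplementedError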

Dockerized

With Docker it is possible to use multiple Tor IPs at the same time and, unless you abuse it, scrape data faster.

The easiest way to start is to duplicate examples/examplescraper and then update, rename, and expand your scraper and related classes as needed.

Your scraper must define config.py and main_dockerized.py. These two names are hardcoded throughout the codebase.

scrapemeagain-compose.py dynamically creates a docker-compose.yml which orchestrates scraper instances. The idea is that the first scraper (scp1) is a master scraper and hence is the host for a couple of helper services which communicate over HTTP (see dockerized/apps).

  1. Get Docker host IP
ip addr show docker0

NOTE Your Docker interface name may be different from docker0.

  2. Run examplesite on the Docker host IP
python3 examples/examplescraper/examplesite/app.py 172.17.0.1

NOTE Your Docker host IP may be different from 172.17.0.1.

  3. Start docker-compose
scrapemeagain-compose.py -s $(pwd)/examples/examplescraper -c tests.integration.fake_config | docker-compose -f - up

NOTE A special config module path is provided: -c tests.integration.fake_config. This is required only for test/demo purposes; you don't have to provide a specific config for a real/production scraper.

Local

  1. Run examplesite
python3 examples/examplescraper/examplesite/app.py
  2. Start examplescraper
python3 examples/examplescraper/main.py

NOTE You may need to update your PYTHONPATH, e.g. export PYTHONPATH=$PYTHONPATH:$(pwd)/examples.

Development

To simplify running integration tests with the latest changes:

  • replace image: dusanmadar/scrapemeagain:x.y.z with image: scp:latest in the scrapemeagain/dockerized/docker-compose.yml template

  • and make sure to rebuild the image locally before running tests, e.g.

docker build . -t scp:latest; python -m unittest discover -p test_integration.py

Legacy

The Python 2.7 version of ScrapeMeAgain, which also provides geocoding capabilities, is available under the legacy branch and is no longer maintained.
