
Scrapers and web interface.



GSICrawler is a service that extracts information from several sources, such as Twitter, Facebook and news outlets.

GSICrawler uses these services under the hood:

  • The HTTP API for the scrapers/tasks (web). This is the public-facing part, the one you will interact with as a user.
  • A frontend for celery (flower)
  • A backend that takes care of the tasks (celery)
  • A broker for the celery backend (redis)

There are several scrapers available, and each accepts a different set of parameters (e.g. a query, a maximum number of results, etc.). The results of any scraper can be returned in JSON format or stored in an elasticsearch server. Some results take a long time to process. In that case, the API returns information about the running task, so you can query the service for the result later. Please read the API specification for your scraper of interest.


# Scrape NYTimes for articles containing "terror", and store them in an elasticsearch endpoint (`http://elasticsearch:9200/crawler/news`).
$ curl -X GET --header 'Accept: application/json' ''

  "parameters": {
    "number": 5,
    "output": "elasticsearch",
    "query": "terror"
  "source": "NYTimes",
  "status": "PENDING",
  "task_id": "bf5dd994-9860-4c63-975e-d09fb85a463c"

# The task can be queried later, using its task_id
$ curl --header 'Accept: application/json' ''

{
  "results": "Check your results at: elasticsearch/crawler/_search",
  "status": "SUCCESS",
  "task_id": "bf5dd994-9860-4c63-975e-d09fb85a463c"
}


Some of the crawlers require API keys and secrets to work. You can configure the services locally with a .env file in this directory. It should look like this:
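The exact variables depend on which scrapers you enable. As a sketch: only GSICRAWLER_BROKER appears elsewhere in this README; the API-key names below are hypothetical placeholders.

```
# Broker for the celery backend (see "Scaling and distribution" below)
GSICRAWLER_BROKER=redis://redis:6379
# Hypothetical credentials for scrapers that need them
TWITTER_API_KEY=your-key-here
TWITTER_API_SECRET=your-secret-here
```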


Once the environment variables are in place, run:

docker compose up

This will start all the necessary services, with the default configuration. Additionally, it will deploy an elasticsearch instance, which can be used to store the results of the crawler.

You can test the service in your browser, using the OpenAPI dashboard: http://localhost:5000/

Scaling and distribution

For ease of deployment, the GSICrawler docker image runs three services in a single container (web, flower and celery backend). However, this behavior can be changed by using a different command (the default is `all`) and setting the appropriate environment variables:

# If results_backend is missing, GSICRAWLER_BROKER will be used
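For illustration only, a compose file splitting the services across containers might look like the following. Only GSICRAWLER_BROKER is confirmed by this README; the `web` and `worker` command values and the GSICRAWLER_RESULT_BACKEND variable are assumptions inferred from the comment above.

```
services:
  web:
    image: gsicrawler
    command: web           # assumed command name; the default is "all"
    environment:
      - GSICRAWLER_BROKER=redis://redis:6379
      - GSICRAWLER_RESULT_BACKEND=redis://redis:6379  # assumed name; falls back to the broker if missing
  worker:
    image: gsicrawler
    command: worker        # assumed command name
    environment:
      - GSICRAWLER_BROKER=redis://redis:6379
  redis:
    image: redis
```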

Developing new scrapers

As of this writing, to add a new scraper to GSICrawler you need to:

  • Develop the scraping function
  • Add a task to the gsicrawler/ file
  • Add the task to the controller (gsicrawler/controllers/).
  • Add the new endpoint to the API (gsicrawler-api.yaml).
  • If you are using environment variables (e.g. for an API key), add them to your .env file.
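The pieces listed above can be sketched as follows. The function names, signatures, and return shapes are hypothetical, not GSICrawler's real API; the celery decorator is omitted so the sketch stays self-contained.

```python
def scrape_example(query, number):
    """Scraping function: fetch up to `number` items matching `query`.

    A real scraper would call the target site's API or parse its HTML;
    this placeholder just returns documents shaped roughly like the
    JSON output shown earlier in this README.
    """
    return [{"source": "Example", "query": query, "rank": i}
            for i in range(number)]

def example_task(query, number, output):
    """Task wrapper: run the scraper and route its results.

    In GSICrawler this would be registered as a celery task and exposed
    through the controller and the gsicrawler-api.yaml spec.
    """
    results = scrape_example(query, number)
    if output == "elasticsearch":
        # A real task would bulk-index `results` here and return a
        # pointer to the index instead of the documents themselves.
        return {"results": "stored in elasticsearch"}
    return {"results": results}
```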

If you are also deploying this with CI/CD and/or Kubernetes:


Elasticsearch may crash on startup and complain about vm.max_map_count being too low. This will fix it temporarily, until the next boot:

sudo sysctl -w vm.max_map_count=262144 

If you want to make this permanent, set the value in your /etc/sysctl.conf.
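That is, append this line to /etc/sysctl.conf:

```
vm.max_map_count=262144
```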

Project details

Source distribution: gsicrawler-0.2.0.tar.gz (5.6 kB)