as-scraper


Python library for scraping inside Airflow.

Installation

The as-scraper library uses Geckodriver (Firefox) for scraping with the Selenium library. To use it, you need an Airflow image that includes the Geckodriver dependency.

We provide the as-airflow Docker image, which ships Airflow with Geckodriver preinstalled.

To use this library, follow these steps:

1. Download the docker-compose.yaml file from the Airflow docs.

Airflow provides the docker-compose.yaml file you need for this library.

You can copy the docker-compose.yaml file directly from the Airflow documentation or run the following command to download it:

curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.3.4/docker-compose.yaml'

2. Modify the docker-compose.yaml file to use the as-airflow image.

There are two ways of configuring the required Docker image for this library.

Option a. Create a Dockerfile that extends from the almiavicas/as-airflow image.

To do this, open the docker-compose.yaml file, comment out the image line, and uncomment the build line:

...
version: '3'
x-airflow-common:
  &airflow-common
  # In order to add custom dependencies or upgrade provider packages you can use your extended image.
  # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
  # and uncomment the "build" line below, then run `docker-compose build` to build the images.
  # image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.3.4}
  build: .
  ...

Then create your Dockerfile with the following contents:

FROM almiavicas/as-airflow:2.2.3

RUN pip install --no-cache-dir as-scraper
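Then build the extended image before starting the stack, as the comment in docker-compose.yaml notes:

docker-compose build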

Option b. Modify the docker-compose.yaml to install the library.

To do this, go to the docker-compose.yaml file and make the following changes:

...
version: '3'
x-airflow-common:
  &airflow-common
  # In order to add custom dependencies or upgrade provider packages you can use your extended image.
  # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
  # and uncomment the "build" line below, then run `docker-compose build` to build the images.
  image: ${AIRFLOW_IMAGE_NAME:-almiavicas/as-airflow:2.2.3}
  # build: .
  environment:
    ...
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-as-scraper}

And that's it! You can now start using the as-scraper library.

Usage

If you are starting a new Airflow project, you need to run the following command to create the volume directories before starting your containers:

mkdir dags/ logs/ plugins/
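On Linux you may also need to export your host user id, so that files created in these volumes are owned by you rather than root, as described in the Airflow docker-compose docs:

echo -e "AIRFLOW_UID=$(id -u)" > .env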

You can now run docker-compose up and you'll have your Airflow environment up & running.

Creating a simple scraper

Let's say that we want to scrape yellowpages.com. Our target data will be the popular cities that we can find at the sitemap URL.

Our output data will have two columns: the name of the city and the url linked to that city. For example, for Houston we would want the following output:

name     url
Houston  https://www.yellowpages.com/houston-tx

Declaring our Scraper Class

So first we create a scraper that extends the Scraper class, and define the COLUMNS variable as ['name', 'url'].

Create the plugins/scrapers/yellowpages.py file and type the following code into it:

from as_scraper.base.scraper import Scraper


class YellowPagesScraper(Scraper):
    COLUMNS = ['name', 'url']

Deciding whether to load JavaScript or not

Now, there are two execution options when running scrapers. We can either load JavaScript, which uses the Selenium library, or skip JavaScript and use the requests library for plain HTTP requests.
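For illustration, here is a rough sketch of what the non-JavaScript variant might look like. It assumes that when LOAD_JAVASCRIPT is False the library passes the fetched page source as the html argument (with driver left as None), and it parses with BeautifulSoup, which is not a dependency this README confirms; treat both details as assumptions:

from typing import Optional

import pandas as pd
from bs4 import BeautifulSoup  # assumption: bs4 must be installed separately
from selenium.webdriver import Firefox

from as_scraper.base.scraper import Scraper


class YellowPagesHtmlScraper(Scraper):
    COLUMNS = ['name', 'url']
    LOAD_JAVASCRIPT = False  # plain HTTP requests, no Selenium

    def scrape_handler(self, url: str, html: Optional[str] = None, driver: Optional[Firefox] = None, **kwargs) -> pd.DataFrame:
        # Assumption: `html` holds the raw page source when LOAD_JAVASCRIPT is False.
        soup = BeautifulSoup(html, "html.parser")
        rows = [
            {"name": a.get_text(strip=True), "url": a.get("href")}
            for a in soup.select(".row-content .row section a")
        ]
        return pd.DataFrame(rows, columns=self.COLUMNS)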

For this example, let's go ahead and use the Selenium library. To configure this, simply add the following variable to your scraper:

from as_scraper.base.scraper import Scraper


class YellowPagesScraper(Scraper):
    COLUMNS = ['name', 'url']
    LOAD_JAVASCRIPT = True

Defining the scrape_handler

And the magic comes in the next step. We will define the scrape_handler method in our class, which is responsible for scraping a given URL and extracting the data from it.

All scrapers must define the scrape_handler method.

from typing import Optional
from selenium.webdriver import Firefox
from selenium.webdriver.common.by import By
import pandas as pd
from as_scraper.base.scraper import Scraper


class YellowPagesScraper(Scraper):
    COLUMNS = ['name', 'url']
    LOAD_JAVASCRIPT = True

    def scrape_handler(self, url: str, html: Optional[str] = None, driver: Optional[Firefox] = None, **kwargs) -> pd.DataFrame:
        rows = []
        # Since LOAD_JAVASCRIPT is True, a Firefox driver is passed in,
        # presumably with the target url already loaded by the library.
        # Drill down to the container that holds the popular-cities links.
        div_tag = driver.find_element(By.CLASS_NAME, "row-content")
        div_tag = div_tag.find_element(By.CLASS_NAME, "row")
        section_tags = div_tag.find_elements(By.TAG_NAME, "section")
        # Each section lists cities as anchor tags; collect one row per city.
        for section_tag in section_tags:
            a_tags = section_tag.find_elements(By.TAG_NAME, "a")
            for a_tag in a_tags:
                city_name = a_tag.text
                city_url = a_tag.get_attribute("href")
                rows.append({"name": city_name, "url": city_url})
        df = pd.DataFrame(rows, columns=self.COLUMNS)
        return df
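Before wiring the scraper into Airflow, you can sanity-check the handler against a local Geckodriver session. This is a minimal sketch, assuming Geckodriver is on your PATH and that the Scraper base class can be instantiated without arguments (the README doesn't confirm the constructor signature):

from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options

from plugins.scrapers.yellowpages import YellowPagesScraper

options = Options()
options.add_argument("--headless")  # no browser window needed
driver = Firefox(options=options)
try:
    url = "https://www.yellowpages.com/sitemap"
    driver.get(url)  # the library normally handles navigation itself
    scraper = YellowPagesScraper()  # assumption: no-arg constructor
    print(scraper.scrape_handler(url, driver=driver).head())
finally:
    driver.quit()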

Creating the DAG

Now we want to create a DAG that will trigger the scraper. For that we will use the ScraperToLogsOperator.

As we mentioned before, the target URL for our scraper is https://www.yellowpages.com/sitemap. In the DAG definition file we will define the url that we want to scrape.

There are other ways of specifying URLs based on a discovery strategy, but for this example it's not required.

Create the dags/yellowpages.py file and copy the following content into it:

from datetime import datetime, timedelta
from airflow.models import DAG
from plugins.scrapers.yellowpages import YellowPagesScraper
from as_scraper.operators import ScraperToLogsOperator


with DAG(
    dag_id="yellow_pages",
    catchup=False,
    default_args={
        'depends_on_past': False,
        'email': ['airflow@example.com'],
        'email_on_failure': False,
        'email_on_retry': False,
        'retries': 1,
        'retry_delay': timedelta(minutes=5),
    },
    description="A simple Scraper DAG",
    schedule_interval=timedelta(days=1),
    start_date=datetime(2022, 8, 4),
) as dag:
    t1 = ScraperToLogsOperator(
        scraper_cls=YellowPagesScraper,
        urls=['https://www.yellowpages.com/sitemap'],
        task_id='scrape',
        save_errors=True,
    )
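With both files in place, you can smoke-test the DAG from inside the Airflow scheduler container using the standard Airflow CLI (the date is just an example logical date):

airflow dags test yellow_pages 2022-08-04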
