Project description
Recursive-Scroll-Scraper
A Python library for automating scrolling and downloading web pages via Selenium. The scrolling and downloading functionality itself is provided by the tq-scroll-scrape package.
Recursive-Scroll-Scraper adds the ability to download a paginated site: starting at the root page, it extracts the next page URL, downloads that page, and repeats until the last page is reached.
sample_app.py demonstrates this use case using the Trulia real estate listings site.
Usage
Using ChromeDriver
Download ChromeDriver from https://chromedriver.chromium.org/downloads. Choose the version that matches the Chrome browser running on your system.
Using GeckoDriver for Firefox
Download GeckoDriver for Firefox from https://github.com/mozilla/geckodriver/releases.
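Either way, the RecursiveScrollScrape constructor shown later takes the path to the driver executable. If you want to confirm the driver works before wiring it into the package, a minimal sanity check with plain Selenium 4 looks like this (the path is a placeholder for wherever you saved the executable; for Firefox, use webdriver.Firefox with selenium.webdriver.firefox.service.Service analogously):

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Placeholder path; point this at the chromedriver you downloaded.
driver = webdriver.Chrome(service=Service("/path/to/chromedriver"))
driver.get("https://www.trulia.com")
print(driver.title)  # prints the page title if the driver is working
driver.quit()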
Install Package
Install the package by running pip install tq-recursive-scroll-scrape.
Use the Package
Here is sample code demonstrating how to crawl a paginated site.
Create the RecursiveScrollScrape instance
from tq_recursive_scroll_scrape.recursive_scroll_and_scrape import RecursiveScrollScrape
root_url = "https://www.trulia.com"
first_url = f"{root_url}/WA/Renton"
driver_path = "PATH TO DRIVER EXECUTABLE"
scroll_scraper = RecursiveScrollScrape(driver_path)
Define the Logic to Get the Next Page Links
Provide a callback containing the logic to get the next page links. Since this function is called recursively, be sure to provide a terminating condition to avoid infinite loops.
from bs4 import BeautifulSoup
from typing import Optional
def get_next_url(content: str) -> Optional[str]:
    soup = BeautifulSoup(content, "html.parser")
    links = [a for a in soup.find_all("a")
             if a.get("aria-label")
             and "Next" in a.get("aria-label")]
    # Terminate the recursion once the last page is reached.
    if not links:
        return None
    next_url = f"{root_url}{links[0].get('href')}"
    return next_url
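The Next-link check above already terminates when no link is found, but a defensive variant can also cap the total number of pages in case the site's markup changes unexpectedly. The following is a minimal sketch, not part of the package; get_next_url_capped, MAX_PAGES, and pages_seen are names introduced here for illustration.

from bs4 import BeautifulSoup
from typing import Optional

MAX_PAGES = 50  # hypothetical safety cap, not part of the package
pages_seen = 0

def get_next_url_capped(content: str) -> Optional[str]:
    global pages_seen
    pages_seen += 1
    # Stop after MAX_PAGES even if a Next link is still present.
    if pages_seen >= MAX_PAGES:
        return None
    soup = BeautifulSoup(content, "html.parser")
    links = [a for a in soup.find_all("a")
             if a.get("aria-label") and "Next" in a.get("aria-label")]
    if not links:
        return None
    # root_url is defined earlier in this walkthrough.
    return f"{root_url}{links[0].get('href')}"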
Optional Post-Download Callback
Provide an optional callback containing the logic to perform after each page download, such as saving the content to disk.
def on_after_download(content: str):
    with open("some_file.html", "w", encoding="utf-8") as file:
        file.write(content)
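Note that the callback above reopens some_file.html in write mode on every call, so each page overwrites the last and only the final page survives. A variant that keeps every page, sketched here with a hypothetical counter (on_after_download_numbered and page_number are illustrative names), writes each download to its own file:

from itertools import count

page_number = count(1)  # yields 1, 2, 3, ... for unique filenames

def on_after_download_numbered(content: str):
    # Save each downloaded page as page_1.html, page_2.html, and so on.
    with open(f"page_{next(page_number)}.html", "w", encoding="utf-8") as file:
        file.write(content)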
Start the Download
scroll_scraper.download(first_url, on_after_download, get_next_url)
Scroll and Download Options
Refer to the tq-scroll-scrape documentation for details on controlling scroll and download options. For example, the default wait time between scrolls is two seconds but can be changed. Likewise, the entire page is scrolled at once by default, but it can instead be scrolled by a specific number of pixels.
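As a rough illustration of what that could look like, and only an assumption about the call shape: the keyword names below (sleep_after_scroll_seconds, scroll_by) are placeholders, and whether download() forwards such options at all is something to verify against the tq-scroll-scrape documentation.

# Hypothetical keyword arguments; the real names and where they are
# accepted are defined by tq-scroll-scrape, so check its docs first.
scroll_scraper.download(
    first_url,
    on_after_download,
    get_next_url,
    sleep_after_scroll_seconds=5,  # assumed option: wait 5s between scrolls
    scroll_by=500,                 # assumed option: scroll 500px at a time
)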
File details
Details for the file tq_recursive_scroll_scrape-3.0-py3-none-any.whl.
File metadata
- Download URL: tq_recursive_scroll_scrape-3.0-py3-none-any.whl
- Upload date:
- Size: 4.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/34.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.9 tqdm/4.63.1 importlib-metadata/4.11.3 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.9.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | 19fd310026ea5ba74db963158747a20254375c2ee0e44d72d78bce5d9ba85e0c
MD5 | 85e0a843217cfe679bf32eb78d998579
BLAKE2b-256 | a4e3c80731d05bda4eecd8ad7b605371ad25af10d2fc468a4b5721125ab32c81