# Basic Web Scraper

A basic web scraper built with Selenium and BeautifulSoup (bs4).
This package can be used for simple automated web surfing and scraping. Subclass the included BasicSpider class to add custom behavior that suits your requirements:
```python
from basic_web_scraper.BasicSpider import BasicSpider


class CustomSpider(BasicSpider):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def custom_operation(self, threshold):
        """
        Scroll to predefined threshold.
        If past threshold, scroll back up.
        """
        if self.get_page_y_offset() < threshold:
            self.mousewheel_vscroll(number_of_scrolls=2)
        else:
            y_difference = self.get_page_y_offset() - threshold
            self.smooth_vscroll_up_by(y_difference)
```
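The threshold logic above can be checked without launching Firefox by stubbing out the scrolling methods. The sketch below reuses the method names from the example (`get_page_y_offset`, `mousewheel_vscroll`, `smooth_vscroll_up_by`); the stub class itself is only a testing aid, not part of the package:

```python
class StubSpider:
    """Stand-in for BasicSpider that records scroll calls instead of
    driving a real browser. A testing sketch only, not package code."""

    def __init__(self, y_offset):
        self._y_offset = y_offset
        self.calls = []

    def get_page_y_offset(self):
        return self._y_offset

    def mousewheel_vscroll(self, number_of_scrolls):
        self.calls.append(("scroll_down", number_of_scrolls))

    def smooth_vscroll_up_by(self, amount):
        self.calls.append(("scroll_up", amount))

    def custom_operation(self, threshold):
        # Same logic as CustomSpider.custom_operation above.
        if self.get_page_y_offset() < threshold:
            self.mousewheel_vscroll(number_of_scrolls=2)
        else:
            self.smooth_vscroll_up_by(self.get_page_y_offset() - threshold)


spider = StubSpider(y_offset=300)
spider.custom_operation(threshold=500)  # below threshold -> scrolls down
print(spider.calls)                     # [('scroll_down', 2)]
```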
For the package to work, you must include a `geckodriver.exe` in your local project directory; otherwise a `GeckoNotFoundException` will be raised.

Note: this package currently only supports the Firefox geckodriver, which can be downloaded from the Mozilla geckodriver releases page on GitHub.
Use BasicSpider as the superclass for your own project's spider. It lets you do basic things like go to a URL, scroll down the page in different ways, refresh the page, and so on. It acts as an interface to selenium.webdriver to make setting up a project easier.
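A typical session chains those basics together. The sketch below assumes methods named `goto` and `refresh` exist on the spider (guessed from the behaviors listed above; only `mousewheel_vscroll` appears in the example code), and uses a recording stub so the flow can be exercised without Firefox:

```python
def browse(spider, url):
    """Visit a page, scroll down a bit, then refresh.

    `spider` is expected to expose goto/mousewheel_vscroll/refresh;
    these method names are assumptions based on the behaviors the
    README lists, not confirmed API.
    """
    spider.goto(url)
    spider.mousewheel_vscroll(number_of_scrolls=3)
    spider.refresh()


class RecordingSpider:
    """Tiny stub that logs calls, standing in for a real BasicSpider."""

    def __init__(self):
        self.log = []

    def goto(self, url):
        self.log.append(("goto", url))

    def mousewheel_vscroll(self, number_of_scrolls):
        self.log.append(("vscroll", number_of_scrolls))

    def refresh(self):
        self.log.append(("refresh",))


browse(RecordingSpider(), "https://example.com")
```

With a real BasicSpider subclass in place of the stub, the same `browse` call would drive the actual browser.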
More docs will be added in the future.