Automatically Scrape Websites
Project description
SpiderWebCrawler is a library that automates common web-scraping tasks: finding elements by class or ID, finding elements by XPath, locating tables, paragraphs, headers, footers, headings, and images, extracting image sources, and downloading the images themselves.
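SpiderWebCrawler's own API is not documented on this page, so the sketch below does not use its real function names. It only illustrates the kinds of tasks listed above (finding elements by class and collecting image sources), using just the standard library's `html.parser` so it runs anywhere.

```python
# Illustrative sketch only -- these are NOT SpiderWebCrawler's functions.
# Shows the tasks the description lists: find elements by class, and
# collect the src attribute of every <img> tag.
from html.parser import HTMLParser

class SimpleScraper(HTMLParser):
    """Collects tag names matching a class, plus all image sources."""
    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self.matches = []        # tag names whose class attribute matches
        self.image_sources = []  # src values of all <img> tags

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        classes = attrs.get("class", "").split()
        if self.target_class in classes:
            self.matches.append(tag)
        if tag == "img" and "src" in attrs:
            self.image_sources.append(attrs["src"])

html_doc = """
<html><body>
  <p class="intro">Hello</p>
  <div class="intro main">World</div>
  <img src="logo.png">
</body></html>
"""

scraper = SimpleScraper("intro")
scraper.feed(html_doc)
print(scraper.matches)        # ['p', 'div']
print(scraper.image_sources)  # ['logo.png']
```

For XPath queries (also mentioned above), a third-party parser such as `lxml` would typically be used, since `html.parser` has no XPath support.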
Version 1.0.1 (03/11/2021)
- Created SpiderWebCrawler
Download files
Download the file for your platform.
Source Distribution
SpiderWebCrawler-0.0.1.tar.gz (1.6 MB)
Built Distribution
SpiderWebCrawler-0.0.1-py3-none-any.whl
Hashes for SpiderWebCrawler-0.0.1-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 06c6cd58b952ec423f198de7c71274e6e260465383d3e66f198ae2aea75b45a7
MD5 | 87bf4fc9d7221e15ca81918edae5d9e1
BLAKE2b-256 | a0fc53cf2076aec5f8b327d82f4f2244682c3579b6ddacbcbd2a39e2ab96e851
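After downloading, the file can be checked against the published SHA256 digest above. A minimal sketch using the standard library's `hashlib` (the wheel filename is taken from the hashes heading above):

```python
# Verify a downloaded file against a published SHA256 digest.
import hashlib

def sha256_of(path, chunk_size=8192):
    """Stream the file in chunks and return its hex SHA256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "06c6cd58b952ec423f198de7c71274e6e260465383d3e66f198ae2aea75b45a7"

# Uncomment after downloading the wheel:
# if sha256_of("SpiderWebCrawler-0.0.1-py3-none-any.whl") == EXPECTED:
#     print("hash matches")
```

Streaming in chunks keeps memory use constant regardless of file size, which matters for larger distributions.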