dude uncomplicated data extraction (For Pyto on iOS)
dude_pyto is a very simple framework for writing web scrapers using Python decorators. Its design, inspired by Flask, aims to make it easy to build a web scraper in just a few lines of code. dude_pyto has an easy-to-learn syntax.
🚨 dude_pyto is currently in Pre-Alpha. Please expect breaking changes.
Special Version for Pyto
This branch makes Braveblock an optional dependency for use with Pyto on iOS.
Pyto, and other similar iOS apps, do not support compiling code after the app has been approved, so the Rust-based Braveblock code will not be downloaded through Pyto.
Please visit roniemartinez/dude for the original repository.
Installation
To install, simply run the following from the terminal.
pip install pydude
playwright install # Install playwright binaries for Chrome, Firefox and Webkit.
Minimal web scraper
The simplest web scraper will look like this:
from dude_pyto import select


@select(css="a")
def get_link(element):
    return {"url": element.get_attribute("href")}
The example above gets all the hyperlink elements on a page and calls the handler function get_link() for each element.
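If you prefer to start the scraper from inside the script itself (for example when running it in Pyto, as an alternative to the command-line usage shown in the next section), upstream dude exposes a run() entry point. The following is a minimal sketch assuming dude_pyto keeps that function; the URL is the demo page used in the sample output below.

from dude_pyto import select


@select(css="a")
def get_link(element):
    return {"url": element.get_attribute("href")}


if __name__ == "__main__":
    import dude_pyto

    # run() is assumed to mirror upstream dude's entry point; replace the URL with your target page.
    dude_pyto.run(urls=["https://dude.ron.sh/"])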
How to run the scraper
You can run your scraper from the terminal/shell/command line by supplying the URLs, the output filename of your choice, and the paths to your Python scripts to the dude_pyto scrape command.
dude_pyto scrape --url "<url>" --output data.json path/to/script.py
The output in data.json should contain the actual URLs and the metadata fields, which are prefixed with an underscore.
[
    {
        "_page_number": 1,
        "_page_url": "https://dude.ron.sh/",
        "_group_id": 4502003824,
        "_group_index": 0,
        "_element_index": 0,
        "url": "/url-1.html"
    },
    {
        "_page_number": 1,
        "_page_url": "https://dude.ron.sh/",
        "_group_id": 4502003824,
        "_group_index": 0,
        "_element_index": 1,
        "url": "/url-2.html"
    },
    {
        "_page_number": 1,
        "_page_url": "https://dude.ron.sh/",
        "_group_id": 4502003824,
        "_group_index": 0,
        "_element_index": 2,
        "url": "/url-3.html"
    }
]
Changing the output option to --output data.csv should write the same records in CSV format, with one column per field.
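Beyond the built-in JSON and CSV writers, results can also be routed to a custom format or storage (see the Features list below). The snippet here is an illustrative sketch only: it assumes dude_pyto re-exports upstream dude's save() decorator and that save handlers receive the collected data plus the requested output name and return True on success; the "jsonl" format name and the file handling are made up for the example.

import json

from dude_pyto import save


# Hypothetical custom storage handler, assuming upstream dude's @save decorator.
@save("jsonl")
def save_jsonl(data, output) -> bool:
    # "data" is the list of scraped records; "output" is the requested output name (may be None).
    with open(output or "data.jsonl", "w") as f:
        for record in data:
            f.write(json.dumps(record) + "\n")
    return True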
Features
- Simple Flask-inspired design - build a scraper with decorators.
- Uses Playwright API - run your scraper in Chrome, Firefox and Webkit and leverage Playwright's powerful selector engine supporting CSS, XPath, text, regex, etc.
- Data grouping - group related results.
- URL pattern matching - run functions on matched URLs.
- Priority - reorder functions based on priority.
- Setup function - enable setup steps (clicking dialogs or login); see the sketch after this list.
- Navigate function - enable navigation steps to move to other pages.
- Custom storage - option to save data to other formats or database.
- Async support - write async handlers.
- Option to use other parser backends aside from Playwright.
  - BeautifulSoup4 - pip install pydude[bs4]
  - Parsel - pip install pydude[parsel]
  - lxml - pip install pydude[lxml]
  - Pyppeteer - pip install pydude[pyppeteer]
  - Selenium - pip install pydude[selenium]
- Option to follow all links indefinitely (Crawler/Spider).
- Events - attach functions to startup, pre-setup, post-setup and shutdown events.
- Option to save data on every page.
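As a quick illustration of the setup, navigate and async features above, here is a hedged sketch. The setup=True and navigate=True arguments and the single-argument handler signatures are assumptions based on upstream dude's API; the selectors (#accept-cookies, a.next) are placeholders for whatever page you are scraping, and async handlers are shown throughout since Playwright element calls are awaited in async mode.

from dude_pyto import select


# Assumed setup handler (setup=True): runs before scraping, e.g. to dismiss a cookie dialog.
@select(css="#accept-cookies", setup=True)
async def dismiss_dialog(element):
    await element.click()


# Assumed navigate handler (navigate=True): moves to the next page once scraping is done.
@select(css="a.next", navigate=True)
async def next_page(element):
    await element.click()


# Async data handler: element methods are awaited with the async Playwright backend.
@select(css="a")
async def get_link(element):
    return {"url": await element.get_attribute("href")}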
Supported Parser Backends
By default, dude_pyto uses Playwright, but it gives you the option to use parser backends that you are already familiar with: BeautifulSoup4, Parsel, lxml, Pyppeteer, and Selenium.
Here is a summary of the features supported by each parser backend; a short example follows the table.
| Parser Backend | Supports Sync? | Supports Async? | CSS | XPath | Text | Regex | Setup Handler | Navigate Handler |
|---|---|---|---|---|---|---|---|---|
| Playwright | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| BeautifulSoup4 | ✅ | ✅ | ✅ | 🚫 | 🚫 | 🚫 | 🚫 | 🚫 |
| Parsel | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🚫 | 🚫 |
| lxml | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🚫 | 🚫 |
| Pyppeteer | 🚫 | ✅ | ✅ | ✅ | ✅ | 🚫 | ✅ | ✅ |
| Selenium | ✅ | ✅ | ✅ | ✅ | ✅ | 🚫 | ✅ | ✅ |
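For example, the minimal scraper could be adapted to the BeautifulSoup4 backend. This is a hedged sketch: it assumes dude_pyto keeps upstream dude's parser argument to run() and that, with the bs4 backend, handlers receive a BeautifulSoup Tag.

from dude_pyto import select


@select(css="a")
def get_link(tag):
    # With the bs4 backend the element is assumed to be a BeautifulSoup Tag,
    # so attributes are read with item access instead of get_attribute().
    return {"url": tag["href"]}


if __name__ == "__main__":
    import dude_pyto

    # parser="bs4" is assumed to mirror upstream dude's run(parser=...) argument.
    dude_pyto.run(urls=["https://dude.ron.sh/"], parser="bs4")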
Using the Docker image
Pull the Docker image using the following command.
docker pull roniemartinez/dude
Assuming that script.py exists in the current directory, run Dude using the following command.
docker run -it --rm -v "$PWD":/code roniemartinez/dude dude scrape --url <url> script.py
Documentation
Read the complete documentation at https://roniemartinez.github.io/dude/. All the advanced and useful features are documented there.
Requirements
- ✅ Any dude should know how to work with selectors (CSS or XPath).
- ✅ Familiarity with any backends that you love (see Supported Parser Backends).
- ✅ Python decorators... you'll live, dude!
Why name this project "dude"?
- ✅ A recursive acronym looks nice.
- ✅ Adding "uncomplicated" (like ufw) into the name says it is a very simple framework.
- ✅ Puns! I also think that if you want to do web scraping, there's probably some random dude around the corner who can make it very easy for you to start with it.
Author
Ronie Martinez
Contributors ✨
Thanks goes to these wonderful people (emoji key):
Ronie Martinez
This project follows the all-contributors specification. Contributions of any kind welcome!