dude uncomplicated data extraction (For Pyto on iOS)


dude_pyto is a very simple framework for writing web scrapers using Python decorators. The design, inspired by Flask, makes it possible to build a web scraper in just a few lines of code, and the syntax is easy to learn.

🚨 dude_pyto is currently in Pre-Alpha. Please expect breaking changes.

Special Version for Pyto

This branch makes Braveblock an optional dependency for use with Pyto on iOS.

Pyto and other similar iOS apps do not support compiling code after the app has been approved, so the Rust-based Braveblock package is not downloaded when installing through Pyto.

Please visit roniemartinez/dude for the original repository.

Installation

To install, simply run the following from the terminal.

pip install pydude-pyto
playwright install  # Install Playwright binaries for Chromium, Firefox and WebKit.

Minimal web scraper

The simplest web scraper will look like this:

from dude_pyto import select


@select(css="a")
def get_link(element):
    return {"url": element.get_attribute("href")}

The example above gets all the hyperlink elements on a page and calls the handler function get_link() for each element.
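You can also start the scrape from inside the script itself, which is convenient on Pyto where there is no separate shell. The upstream dude project exposes a run() function that accepts the URLs to scrape; the snippet below is a minimal sketch that assumes dude_pyto keeps the same entry point, using the demo URL from the examples further down.

if __name__ == "__main__":
    import dude_pyto

    # Start scraping directly; assumes dude_pyto mirrors upstream dude's run().
    dude_pyto.run(urls=["https://dude.ron.sh/"])

Append this to the scraper above and run it with python script.py (or directly from Pyto).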

How to run the scraper

You can run your scraper from the terminal/shell/command line by supplying the URLs, the output filename of your choice, and the paths to your Python scripts to the dude_pyto scrape command.

dude_pyto scrape --url "<url>" --output data.json path/to/script.py

The output in data.json should contain the actual URLs plus the scraping metadata, whose keys are prefixed with an underscore.

[
  {
    "_page_number": 1,
    "_page_url": "https://dude.ron.sh/",
    "_group_id": 4502003824,
    "_group_index": 0,
    "_element_index": 0,
    "url": "/url-1.html"
  },
  {
    "_page_number": 1,
    "_page_url": "https://dude.ron.sh/",
    "_group_id": 4502003824,
    "_group_index": 0,
    "_element_index": 1,
    "url": "/url-2.html"
  },
  {
    "_page_number": 1,
    "_page_url": "https://dude.ron.sh/",
    "_group_id": 4502003824,
    "_group_index": 0,
    "_element_index": 2,
    "url": "/url-3.html"
  }
]

Changing the output to --output data.csv should result in the following CSV content.

data.csv:

_page_number,_page_url,_group_id,_group_index,_element_index,url
1,https://dude.ron.sh/,4502003824,0,0,/url-1.html
1,https://dude.ron.sh/,4502003824,0,1,/url-2.html
1,https://dude.ron.sh/,4502003824,0,2,/url-3.html

Features

  • Simple Flask-inspired design - build a scraper with decorators.
  • Uses Playwright API - run your scraper in Chrome, Firefox and Webkit and leverage Playwright's powerful selector engine supporting CSS, XPath, text, regex, etc.
  • Data grouping - group related results.
  • URL pattern matching - run functions on matched URLs.
  • Priority - reorder functions based on priority.
  • Setup function - enable setup steps such as clicking dialogs or logging in.
  • Navigate function - enable navigation steps to move to other pages (both handlers are illustrated in the sketch after this list).
  • Custom storage - option to save data to other formats or databases.
  • Async support - write async handlers.
  • Option to use other parser backends aside from Playwright.
  • Option to follow all links indefinitely (Crawler/Spider).
  • Events - attach functions to startup, pre-setup, post-setup and shutdown events.
  • Option to save data on every page.
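
The sketch below ties a few of these features together: a setup handler that dismisses a dialog, grouped results, and a navigate handler that follows pagination. It assumes dude_pyto keeps upstream dude's decorator parameters (setup=, navigate=, group_css=), and the selectors (#accept-cookies, .result, a.title, a.next) are hypothetical placeholders for your target site.

from dude_pyto import select


# Setup handler: runs before scraping, e.g. to dismiss a cookie dialog.
@select(css="#accept-cookies", setup=True)
def accept_cookies(element, page):
    element.click()


# Grouping: fields scraped from the same .result container end up in one record.
@select(css="a.title", group_css=".result")
def result_title(element):
    return {"title": element.text_content()}


@select(css="a.title", group_css=".result")
def result_url(element):
    return {"url": element.get_attribute("href")}


# Navigate handler: clicking the element moves the scraper to the next page.
@select(css="a.next", navigate=True)
def next_page(element, page):
    with page.expect_navigation():
        element.click()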

Supported Parser Backends

By default, dude_pyto uses Playwright but gives you an option to use parser backends that you are familiar with. It is possible to use parser backends like BeautifulSoup4, Parsel, lxml, Pyppeteer, and Selenium.

Here is the summary of features supported by each parser backend.

| Parser Backend | Supports Sync? | Supports Async? | CSS | XPath | Text | Regex | Setup Handler | Navigate Handler |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Playwright | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| BeautifulSoup4 | ✅ | ✅ | ✅ | 🚫 | 🚫 | 🚫 | 🚫 | 🚫 |
| Parsel | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🚫 | 🚫 |
| lxml | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🚫 | 🚫 |
| Pyppeteer | 🚫 | ✅ | ✅ | ✅ | ✅ | 🚫 | ✅ | ✅ |
| Selenium | ✅ | ✅ | ✅ | ✅ | ✅ | 🚫 | ✅ | ✅ |
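
For example, upstream dude switches backends with a --parser option on the scrape command, and with the BeautifulSoup4 backend the handler receives a soup Tag, so attributes are read by subscription instead of get_attribute(). The command and handler below are a sketch assuming dude_pyto exposes the same option.

dude_pyto scrape --url "<url>" --parser bs4 --output data.json path/to/script.py

from dude_pyto import select


@select(css="a")
def get_link(element):
    # With the bs4 backend, element behaves like a BeautifulSoup Tag.
    return {"url": element["href"]}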

Using the Docker image

Pull the docker image using the following command.

docker pull roniemartinez/dude

Assuming that script.py exists in the current directory, run Dude using the following command.

docker run -it --rm -v "$PWD":/code roniemartinez/dude dude scrape --url <url> script.py
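
For instance, combining the options from the earlier examples (the demo URL and a JSON output file, both just placeholders for your own values), the full command looks like this; the results are written to the mounted directory.

docker run -it --rm -v "$PWD":/code roniemartinez/dude dude scrape --url "https://dude.ron.sh/" --output data.json script.py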

Documentation

Read the complete documentation at https://roniemartinez.github.io/dude/. All the advanced and useful features are documented there.

Requirements

  • ✅ Any dude should know how to work with selectors (CSS or XPath).
  • ✅ Familiarity with any backends that you love (see Supported Parser Backends).
  • ✅ Python decorators... you'll live, dude!

Why name this project "dude"?

  • ✅ A recursive acronym looks nice.
  • ✅ Adding "uncomplicated" (like ufw) into the name says it is a very simple framework.
  • ✅ Puns! I also think that if you want to do web scraping, there's probably some random dude around the corner who can make it very easy for you to start with it. 😊

Author

Ronie Martinez

Contributors ✨

Thanks goes to these wonderful people (emoji key):


Ronie Martinez

🚧 💻 📖 🚇

This project follows the all-contributors specification. Contributions of any kind welcome!

