scrape cli

It's a command-line tool to extract HTML elements using an XPath query or CSS3 selector.

It's based on the great and simple scraping tool written by Jeroen Janssens.

Installation

You can install scrape-cli with pipx or pip:

Using pipx (recommended for CLI tools)

pipx install scrape-cli

Using pip

pip install scrape-cli

Or install from source:

git clone https://github.com/aborruso/scrape-cli
cd scrape-cli
pip install -e .

Requirements

  • Python >=3.6
  • requests
  • lxml
  • cssselect

How does it work?

Using the Test HTML File

In the resources directory you'll find a test.html file that you can use to test various scraping scenarios. Here are some examples:

  1. Extract all table data:
# CSS
scrape -e "table.data-table td" resources/test.html
# XPath
scrape -e "//table[contains(@class, 'data-table')]//td" resources/test.html
  2. Get all list items:
# CSS
scrape -e "ul.items-list li" resources/test.html
# XPath
scrape -e "//ul[contains(@class, 'items-list')]/li" resources/test.html
  3. Extract specific attributes:
# CSS
scrape -e "a.external-link" -a href resources/test.html
# XPath
scrape -e "//a[contains(@class, 'external-link')]/@href" resources/test.html
  4. Check if an element exists:
# CSS
scrape -e "#main-title" --check-existence resources/test.html
# XPath
scrape -e "//h1[@id='main-title']" --check-existence resources/test.html
  5. Extract nested elements:
# CSS
scrape -e ".nested-elements p" resources/test.html
# XPath
scrape -e "//div[contains(@class, 'nested-elements')]//p" resources/test.html
  6. Get elements with specific attributes:
# CSS
scrape -e "[data-test]" resources/test.html
# XPath
scrape -e "//*[@data-test]" resources/test.html
  7. Additional XPath examples:
# Get all links with href attribute
scrape -e "//a[@href]" resources/test.html

# Get checked input elements
scrape -e "//input[@checked]" resources/test.html

# Get elements with multiple classes
scrape -e "//div[contains(@class, 'class1') and contains(@class, 'class2')]" resources/test.html

# Get text content of specific element
scrape -e "//h1[@id='main-title']/text()" resources/test.html
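
The examples above all run through scrape itself. As a quick cross-check of an XPath expression you can also run it through Python's standard library directly. This is only a sketch, not how scrape works internally (scrape is built on lxml and cssselect); it assumes python3 is on your PATH, and ElementTree only handles well-formed markup and a limited XPath subset:

```shell
# Build a tiny, well-formed stand-in for resources/test.html:
cat > /tmp/mini.html <<'EOF'
<html><body>
<ul class="items-list"><li>alpha</li><li>beta</li></ul>
</body></html>
EOF

# Apply the same kind of XPath with the stdlib-only ElementTree parser:
out=$(python3 - <<'EOF'
import xml.etree.ElementTree as ET
root = ET.parse('/tmp/mini.html').getroot()
print([li.text for li in root.findall(".//ul[@class='items-list']/li")])
EOF
)
echo "$out"
```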

General Usage Examples

A CSS selector query like this

curl -L 'https://en.wikipedia.org/wiki/List_of_sovereign_states' -s \
| scrape -be 'table.wikitable > tbody > tr > td > b > a'

Note: when combining the -b and -e options, they must be given in the order -be (body flag first, then expression). Because -e takes an argument, -eb would parse the b as the expression itself rather than as the body flag.
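
The reason is mechanical: in a combined short-option group, -e consumes whatever follows it as its argument. A minimal getopts sketch (an illustration of how POSIX-style short options combine, not scrape's actual parser) reproduces the behavior:

```shell
# -b is a flag, -e takes an argument. In "-eb", getopts hands the "b"
# to -e as its argument, so the body flag is never seen and the real
# expression is left over as a positional argument.
parse() {
  local body="" expr="" OPTIND=1
  while getopts "be:" opt; do
    case "$opt" in
      b) body=1 ;;
      e) expr="$OPTARG" ;;
    esac
  done
  echo "body=${body} expr=${expr}"
}

parse -be 'td a'   # body=1 expr=td a
parse -eb 'td a'   # body=  expr=b   ('td a' is ignored)
```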

or an XPath query like this one (note the double quotes inside the expression, so the single-quoted shell string is not terminated early):

curl -L 'https://en.wikipedia.org/wiki/List_of_sovereign_states' -s \
| scrape -be '//table[contains(@class, "wikitable")]/tbody/tr/td/b/a'

gives you back:

<html>
 <head>
 </head>
 <body>
  <a href="/wiki/Afghanistan" title="Afghanistan">
   Afghanistan
  </a>
  <a href="/wiki/Albania" title="Albania">
   Albania
  </a>
  <a href="/wiki/Algeria" title="Algeria">
   Algeria
  </a>
  <a href="/wiki/Andorra" title="Andorra">
   Andorra
  </a>
  <a href="/wiki/Angola" title="Angola">
   Angola
  </a>
  <a href="/wiki/Antigua_and_Barbuda" title="Antigua and Barbuda">
   Antigua and Barbuda
  </a>
  <a href="/wiki/Argentina" title="Argentina">
   Argentina
  </a>
  <a href="/wiki/Armenia" title="Armenia">
   Armenia
  </a>
...
...
 </body>
</html>

Some notes on the commands:

  • -e to set the query
  • -b to add <html>, <head> and <body> tags to the HTML output.
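
The effect of -b can be sketched in plain shell (illustrative only; this hypothetical wrap_body helper is not part of scrape, and scrape's real output is also pretty-printed):

```shell
# Emulate the wrapping that -b performs: the matched fragments are
# placed inside html/head/body tags so the result is a complete document.
wrap_body() {
  printf '<html>\n <head>\n </head>\n <body>\n'
  cat
  printf ' </body>\n</html>\n'
}

echo '<a href="/wiki/Albania" title="Albania">Albania</a>' | wrap_body
```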

Linux 64 bit precompiled binary

If you are looking for precompiled executables for Linux, please refer to the Releases page on GitHub where you can find the latest precompiled binary file.

I built the scrape-linux-x86_64 binary with PyInstaller, using this command: pyinstaller --onefile scrape.py.

Once built, it is a standalone executable that you can run in any 64-bit Linux environment.

License

MIT
