
scraped

Tools for scraping.

To install: pip install scraped

Showcase of main functionalities

Note that when installed with pip, scraped comes with a command-line tool of the same name. Run this in your terminal:

scraped -h

Output:

usage: tools.py [-h] {markdown-of-site,download-site,scrape-multiple-sites} ...

...

These tools are written in Python, so you can also use them by importing them directly:

from scraped import markdown_of_site, download_site, scrape_multiple_sites

download_site downloads one page (the default, depth=1) or several pages (if you specify a larger depth) starting from a target URL, saving them to files in a folder of your (optional) choice.

scrape_multiple_sites can be used to download several sites.
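A minimal sketch of how you might call it. The exact signature of scrape_multiple_sites isn't documented above, so the call shown in the comment assumes it accepts an iterable of start URLs; the network-touching call itself is left commented out:

```python
# URLs to scrape (the second is a placeholder domain).
urls = [
    "https://i2mint.github.io/dol/",
    "http://www.example.com",
]

# Assumed signature -- an iterable of start URLs (not run here; needs network):
# from scraped import scrape_multiple_sites
# scrape_multiple_sites(urls)
```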

markdown_of_site uses download_site (by default, saving to a temporary folder), then aggregates all the pages into a single markdown string, which it can save for you if you ask (by specifying a save_filepath).

Below you'll find more details on these functionalities.

You'll find more useful functions in the code, but the three mentioned here are the "top" ones I use most often.

markdown_of_site

Download a site and convert it to markdown.

This can be quite useful when you want to perform some NLP analysis on a site, feed some information to an AI model, or simply want to read the site offline. Markdown offers a happy medium between readability and simplicity, and is supported by many tools and platforms.

Args:

  • url: The URL of the site to download.
  • depth: The maximum depth to follow links.
  • filter_urls: A function to filter URLs to download.
  • save_filepath: The file path where the combined Markdown will be saved.
  • verbosity: The verbosity level.
  • dir_to_save_page_slurps: The directory to save the downloaded pages.
  • extra_kwargs: Extra keyword arguments to pass to the Scrapy spider.

Returns:

  • The Markdown string of the site (if save_filepath is None), otherwise the save_filepath.

>>> markdown_of_site(
...     "https://i2mint.github.io/dol/",
...     depth=2,
...     save_filepath='~/dol_documentation.md'
... )  # doctest: +SKIP
'~/dol_documentation.md'

If you don't specify a save_filepath, the function will return the Markdown string, which you can then analyze directly, and/or store as you wish.

>>> markdown_string = markdown_of_site("https://i2mint.github.io/dol/")  # doctest: +SKIP
>>> print(f"{type(markdown_string).__name__} of length {len(markdown_string)}")  # doctest: +SKIP
str of length 626439

download_site

download_site('http://www.example.com')

will download just the page the URL points to, storing it in the default rootdir, which, on unix/mac for example, is ~/.config/scraped/data, but can be configured through a SCRAPED_DFLT_ROOTDIR environment variable.
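The default-rootdir resolution described above can be sketched as follows. This is illustrative only: the helper name is mine, the fallback path mirrors the unix/mac default mentioned, and scraped's own resolution logic may differ in detail:

```python
import os


def default_rootdir() -> str:
    """Resolve the folder downloads are saved to, as described above:
    the SCRAPED_DFLT_ROOTDIR environment variable wins; otherwise fall
    back to the unix/mac default. (Hypothetical helper for illustration.)"""
    return os.environ.get(
        "SCRAPED_DFLT_ROOTDIR",
        os.path.expanduser("~/.config/scraped/data"),
    )


# Setting the environment variable overrides the default:
os.environ["SCRAPED_DFLT_ROOTDIR"] = "/tmp/my_scrapes"
print(default_rootdir())  # -> /tmp/my_scrapes
```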

The depth argument lets you download more content, following links from the start URL:

download_site('http://www.example.com', depth=3)

There are more arguments:

  • start_url: The URL to start downloading from.
  • url_to_filepath: The function to convert URLs to local filepaths.
  • depth: The maximum depth to follow links.
  • filter_urls: A function to filter URLs to download.
  • mk_missing_dirs: Whether to create missing directories.
  • verbosity: The verbosity level.
  • rootdir: The root directory to save the downloaded files.
  • extra_kwargs: Extra keyword arguments to pass to the Scrapy spider.
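The filter_urls argument can keep a crawl from wandering off the part of a site you care about. A minimal sketch, where the predicate name is my own invention and the download_site call (which needs network access) is left commented out:

```python
def only_docs_pages(url: str) -> bool:
    """Keep the crawl inside the /dol/ documentation tree.
    (Hypothetical predicate for illustration.)"""
    return url.startswith("https://i2mint.github.io/dol/")


print(only_docs_pages("https://i2mint.github.io/dol/intro.html"))  # True
print(only_docs_pages("https://github.com/i2mint"))  # False

# from scraped import download_site
# download_site(
#     "https://i2mint.github.io/dol/",
#     depth=2,
#     filter_urls=only_docs_pages,
# )
```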

Download files

Download the file for your platform.

Source Distribution

scraped-0.0.8.tar.gz (12.6 kB)


Built Distribution


scraped-0.0.8-py3-none-any.whl (13.1 kB)


File details

Details for the file scraped-0.0.8.tar.gz.

File metadata

  • Download URL: scraped-0.0.8.tar.gz
  • Size: 12.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.13

File hashes

Hashes for scraped-0.0.8.tar.gz:

  • SHA256: e580384d46d0235cd7133987d581c50423854eaac507c28d40153b513440dd9b
  • MD5: 41f95aa3800ee6a20519a3b875a95e69
  • BLAKE2b-256: 79bbd63047dfca5f8bc008f54808e8004adf8ba1e9b6dc2e7d7a4b69a13f309b


File details

Details for the file scraped-0.0.8-py3-none-any.whl.

File metadata

  • Download URL: scraped-0.0.8-py3-none-any.whl
  • Size: 13.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.13

File hashes

Hashes for scraped-0.0.8-py3-none-any.whl:

  • SHA256: 31df649d012c534722d6bfb8bd31d3e260fc6053f815c505474c5a9d96be2597
  • MD5: 2485a3bc09fa78d7f9c65d50e2a00178
  • BLAKE2b-256: ae52f8a90b2d84fccd0b9dc975397d502cb376219da2eb444e4d3ec26677fd57

