scraped

Tools for scraping.

To install: pip install scraped

Showcase of main functionalities

Note that when installed with pip, scraped comes with a command-line tool of the same name. Run this in your terminal:

scraped -h

Output:

usage: tools.py [-h] {markdown-of-site,download-site,scrape-multiple-sites} ...

...

These tools are written in Python, so you can also use them directly by importing them:

from scraped import markdown_of_site, download_site, scrape_multiple_sites

download_site downloads one page (the default, depth=1) or several pages (if you specify a larger depth) of a target URL, saving them as files in a folder of your choice (optional).

scrape_multiple_sites can be used to download several sites.

markdown_of_site uses download_site (by default, saving to a temporary folder), then aggregates all the pages into a single Markdown string, which it can also save for you if you ask (by specifying a save_filepath).

Below you'll find more details on these functionalities.

You'll find more useful functions in the code, but the three I mention here are the "top" ones I use most often.

markdown_of_site

Download a site and convert it to markdown.

This can be quite useful when you want to perform some NLP analysis on a site, feed some information to an AI model, or simply want to read the site offline. Markdown offers a happy medium between readability and simplicity, and is supported by many tools and platforms.

Args:

  • url: The URL of the site to download.
  • depth: The maximum depth to follow links.
  • filter_urls: A function to filter URLs to download.
  • save_filepath: The file path where the combined Markdown will be saved.
  • verbosity: The verbosity level.
  • dir_to_save_page_slurps: The directory to save the downloaded pages.
  • extra_kwargs: Extra keyword arguments to pass to the Scrapy spider.

Returns:

  • The Markdown string of the site (if save_filepath is None), otherwise the save_filepath.
>>> markdown_of_site(
...     "https://i2mint.github.io/dol/",
...     depth=2,
...     save_filepath='~/dol_documentation.md'
... )  # doctest: +SKIP
'~/dol_documentation.md'

If you don't specify a save_filepath, the function will return the Markdown string, which you can then analyze directly, and/or store as you wish.

>>> markdown_string = markdown_of_site("https://i2mint.github.io/dol/")  # doctest: +SKIP
>>> print(f"{type(markdown_string).__name__} of length {len(markdown_string)}")  # doctest: +SKIP
str of length 626439
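The filter_urls argument is a function to filter the URLs to download — presumably a predicate that returns True for URLs that should be followed. Here is a minimal sketch of such a predicate (the name and logic are illustrative, not part of the library):

```python
# Hypothetical predicate for the filter_urls argument: keep only
# URLs under the dol documentation root, skipping external links.
def under_dol_docs(url: str) -> bool:
    return url.startswith("https://i2mint.github.io/dol/")

# markdown_of_site("https://i2mint.github.io/dol/",
#                  depth=2, filter_urls=under_dol_docs)  # doctest: +SKIP
```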

download_site

download_site('http://www.example.com')

will just download the page the URL points to, storing it under the default rootdir (on Unix/macOS, for example, ~/.config/scraped/data), which can be configured through the SCRAPED_DFLT_ROOTDIR environment variable.
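For example, to point the default save location at a folder of your own for a session (this assumes, per the above, that scraped reads the variable when you call it; the folder name is illustrative):

```python
import os

# Point scraped's default rootdir at a custom folder via the
# SCRAPED_DFLT_ROOTDIR environment variable mentioned above.
os.environ["SCRAPED_DFLT_ROOTDIR"] = os.path.expanduser("~/my_scrapes")
```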

The depth argument lets you download more content, following links from the starting URL:

download_site('http://www.example.com', depth=3)

There are more arguments:

  • start_url: The URL to start downloading from.
  • url_to_filepath: The function to convert URLs to local filepaths.
  • depth: The maximum depth to follow links.
  • filter_urls: A function to filter URLs to download.
  • mk_missing_dirs: Whether to create missing directories.
  • verbosity: The verbosity level.
  • rootdir: The root directory to save the downloaded files.
  • extra_kwargs: Extra keyword arguments to pass to the Scrapy spider.
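To illustrate the kind of function url_to_filepath expects, here is a hypothetical URL-to-relative-path mapping (the library ships its own default; this sketch only shows the shape of the interface):

```python
import os
from urllib.parse import urlparse

# Hypothetical url_to_filepath function: map a URL to a relative
# local filepath (host directory + URL path, using index.html for
# bare directory URLs).
def my_url_to_filepath(url: str) -> str:
    parts = urlparse(url)
    path = parts.path or "/"
    if path.endswith("/"):
        path += "index.html"
    return os.path.join(parts.netloc, path.lstrip("/"))

# download_site('http://www.example.com', url_to_filepath=my_url_to_filepath)
```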
