Quickly extract links from HTML
Fast Link Extractor
Project under active development
A Python 3.7+ package to extract links from a webpage. Asynchronous functions allow the code to run quickly when extracting links from many subdirectories.
A use case for this tool is to extract download links for use with wget or fsspec.
Main base-level functions
- link_extractor(): extracts links from a given URL
- filter_with_regex(): filters the output with a regular expression
- prepend_with_baseurl(): prepends the original URL to each output link
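To illustrate what the filtering and prepending steps do, here is a minimal standard-library sketch. The helper names and signatures below are assumptions for illustration, not the package's actual API:

```python
import re
from urllib.parse import urljoin

def filter_links(links, regex):
    """Keep only links matching the regular expression (illustrative helper)."""
    pattern = re.compile(regex)
    return [link for link in links if pattern.search(link)]

def prepend_base_url(base_url, links):
    """Resolve each link against the original base URL (illustrative helper)."""
    return [urljoin(base_url, link) for link in links]

links = ["data_2020.nc", "data_2021.nc", "readme.txt"]
nc_files = filter_links(links, r"\.nc$")
full_urls = prepend_base_url("https://example.com/data/", nc_files)
# full_urls == ["https://example.com/data/data_2020.nc",
#               "https://example.com/data/data_2021.nc"]
```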
Installation
PyPi
pip install fast-link-extractor
Example
Simply import the package and call link_extractor(). This returns a list of extracted links.
import fast_link_extractor as fle
# url to extract links from
base_url = "https://www.ncei.noaa.gov/data/sea-surface-temperature-optimum-interpolation/v2.1/access/avhrr/"
# extract all links from sub directories ending with .nc
# this may take ~10 seconds, there are a lot of sub-directories
links = fle.link_extractor(base_url,
search_subs=True,
regex='.nc$')
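Under the hood, extracting links from a page amounts to parsing the href attributes out of anchor tags in the HTML. A rough, self-contained sketch of that step, assuming nothing about the package's actual implementation:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every anchor tag encountered."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value is not None:
                    self.links.append(value)

html = '<html><body><a href="avhrr/">avhrr/</a><a href="file.nc">file.nc</a></body></html>'
parser = LinkCollector()
parser.feed(html)
# parser.links == ["avhrr/", "file.nc"]
```

In a directory listing like the NOAA example above, links ending in "/" point to subdirectories, which is what makes concurrent (async) crawling of many subdirectories worthwhile.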
ToDo
- tests: expand the test suite
- documentation: set up project documentation