
Trawl web pages for files to download


Given the url of an html web page, this Python package asynchronously downloads all non-web files linked to from that page, e.g. audio files or Excel documents. Optionally, all web pages linked to from the original web page can be trawled for files as well.


Requirements

Python 3 must be installed and in your system PATH. That is, it must be a recognised command for the command line interface. Enter python --version in your command line to see whether you have Python 3 installed.

The Python package manager pip is also required. Check that you have it by running pip --version. It is automatically installed with recent versions of Python, but it can also be installed manually; see the official installation instructions.
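For example, both checks can be run from the command line (the version numbers shown are just illustrative):

$ python --version
Python 3.8.2

$ pip --version
pip 20.0.2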

Installing the web_trawler package

Run the following command in your command line interface (excluding the $, which just represents the prompt):

$ pip install web_trawler --upgrade

The package has no external dependencies. For testing, pytest is required.
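If you have a copy of the source code, the tests can typically be run like this (a sketch, assuming the standard pytest workflow):

$ pip install pytest
$ pytest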

The source code for web_trawler is available on


Command line

Once installed, web_trawler can be used like this, with <url> replaced by the address of the web page you want to trawl:

$ web_trawler <url>

Run this command to see how web_trawler finds links and inspects their http headers for more information. A number of logging events will be output to the console. If the page links to any files, they will be downloaded to the directory download/ relative to where you ran the command.

The url argument is required. In addition, the following optional arguments are supported:

--target TARGET
 Give a path for where you would like the files to be downloaded. The default path is “download”.
--add_links_from_linked_pages
 Set web_trawler to trawl pages linked to from the original web page as well (only goes one step, and only for links within the domain of the original web page).
--interactive
 Short version is “-i”. Asks the user whether or not to trawl each linked page (has no effect unless the --add_links_from_linked_pages flag is set).
-I
 Asks the user whether or not to download each of the files found.
--quiet
 Suppresses output information about which links are being processed and which files are being downloaded.
--processes PROCESSES
 Manually set how many processes will be spawned. The default is to spawn one less than the number of processors detected (so as not to stall the system). For each process, up to 10 threads are spawned.
--whitelist WHITELIST
 Space-separated file endings to whitelist. Allows use of wildcards, e.g. “xls*” to capture all the Excel file extension variants, like xlsx, xlsb, xlsm and xls. A given blacklist takes precedence over the whitelist.
--blacklist BLACKLIST
 Space-separated file endings to blacklist. Works just like the whitelist, only it excludes files with the given file endings.
--no_of_files_limit LIMIT
 Set a maximum number of files you are willing to download, in case web_trawler finds more than expected.
--mb_per_file_limit LIMIT
 Set a maximum file size (in MB) you are willing to download. Warnings are logged to the console for each file excluded.

Each argument has a shorthand consisting of its first letter, e.g. -t, -a and -q.
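For instance, the following two commands are equivalent (<url> again stands in for the page to trawl):

$ web_trawler <url> --target "data" --quiet
$ web_trawler <url> -t "data" -q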

A realistic example of use

If we’d like to download, say, all zip and Excel files up to 100 MB from a web page on the World Input-Output Database site into a local directory called “data”, we’d need to use the arguments -t (for target), -w (for whitelist) and -m (for mb_per_file_limit), with <url> standing in for the address of that page:

$ web_trawler <url> -t "data" -w "zip xls*" -m 100

Notice the use of a wildcard in the whitelist. The web page links to files with two different Excel-associated file endings; the wildcard ensures that both are captured.

If you test this command, downloads of a number of large files will start. Press ctrl-c to interrupt the process (ctrl-z merely suspends it; it does not force quit).

Make sure to clean up any downloaded files you don’t want. They should be in a folder relative to where you ran the command. If you didn’t specify a target, they are downloaded to a directory called “download”.

Use within Python

The following code does much the same as the command line example above, again with <url> standing in for the address of the page to trawl:

import web_trawler

web_trawler.trawl("<url>", target="data", whitelist="zip xls*",
                  add_links_from_linked_pages=True, mb_per_file_limit=0)

The function trawl does the same thing as web_trawler as run from the command line, but with the arguments passed to it directly in Python.

Several of the intermediary functions used in web_trawler can also be accessed through Python, e.g. to get a list with information about all links on a web page, or just the links to files, filtered with a blacklist or whitelist. Here’s a brief description of each of them:

get_links: Takes only one argument, a url, and returns a list of Link namedtuples, described below. This list is unfiltered; all http links that respond to an http request are included.
get_file_links: Runs get_links and returns a filtered list of Link namedtuples for files only, with the whitelist and/or blacklist applied if specified. The arguments have self-explanatory names. The whitelist and blacklist can be provided as a space-separated string or as a list.

Both get_links and get_file_links return lists of namedtuples with the following fields:

href: the link url
title: the content of the <a> tag containing the link
mb: the file size in megabytes, calculated from the http header content-length
type: the http header content-type, unmodified
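As a brief sketch of how this might look in practice (with <url> again a placeholder):

import web_trawler

# Fetch links to Excel-like files only, using a whitelist wildcard.
links = web_trawler.get_file_links("<url>", whitelist="xls*")

for link in links:
    # Each item is a Link namedtuple with the fields listed above.
    print(link.title, link.href, link.mb, link.type)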

Use in Matlab

In Matlab, functions from pip-installed Python packages can be called using the py prefix, with optional arguments specified using the pyargs function:

>> py.web_trawler.get_file_links('<url>', pyargs('whitelist', 'xls* doc*'))

Stdout isn’t displayed in Matlab, which is why the get_file_links function was chosen for this example: it returns something. To use the full functionality of web_trawler, you could run the function trawl instead. As long as there are no errors, nothing will show up in the Command Window, but files will nevertheless be downloaded, relative to your Current Folder in Matlab.
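A full trawl mirroring the earlier command line example might look like this (a sketch, with the same placeholder URL):

>> py.web_trawler.trawl('<url>', pyargs('target', 'data', 'whitelist', 'zip xls*'))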

