Download URLs using a compressed disk cache and a random throttling interval.
Project description
Each Downloader maintains an sqlite3-based disk cache that uses zlib compression. A network request is made only if the cached copy of the resource is at least as old as the stale_after value provided by the programmer.
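As an illustration of the idea only (not the package's actual internals, whose schema may differ), such a cache can be built from a single sqlite3 table holding a zlib-compressed body and a fetch timestamp, serving the cached copy while it is younger than stale_after:

```python
import sqlite3
import time
import zlib

class DiskCache:
    # Sketch of a zlib-compressed sqlite3 cache with a stale_after policy.

    def __init__(self, path="cache.sqlite3"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache "
            "(url TEXT PRIMARY KEY, fetched_at REAL, body BLOB)"
        )

    def get(self, url, stale_after):
        # Serve the cached body only while its age is below stale_after seconds.
        row = self.db.execute(
            "SELECT fetched_at, body FROM cache WHERE url = ?", (url,)
        ).fetchone()
        if row and time.time() - row[0] < stale_after:
            return zlib.decompress(row[1])
        return None

    def put(self, url, body):
        # body is expected to be bytes; it is stored compressed.
        self.db.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
            (url, time.time(), zlib.compress(body)),
        )
        self.db.commit()
```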
Between network requests a throttling interval must elapse. This interval is chosen at random, but always lies within the throttle_bounds defined by the programmer.
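In other words, before each network request the Downloader waits for a random duration drawn from those bounds. A minimal sketch of that behaviour (the name throttle_bounds comes from the description above; everything else is illustrative):

```python
import random
import time

def throttle(throttle_bounds=(5.0, 15.0)):
    # Sleep for a random interval within the configured bounds, so that
    # consecutive requests are irregularly spaced.
    low, high = throttle_bounds
    time.sleep(random.uniform(low, high))
```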
HTML resources can be parsed using lxml, in which case an lxml ElementTree is returned instead of a file object, with its links rewritten to absolute URLs so they can be followed easily. Parsing is lenient, so invalid HTML does not cause a failure.
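lxml's HTML parser handles broken markup gracefully, and its make_links_absolute method performs this kind of link rewriting; the snippet below shows the general approach using lxml directly (it is not the Downloader API itself):

```python
import lxml.html

def parse_html(body, base_url):
    # lxml.html tolerates invalid markup instead of raising on it.
    root = lxml.html.fromstring(body)
    # Rewrite relative href/src attributes into absolute URLs so the
    # links can be followed directly.
    root.make_links_absolute(base_url)
    return root
```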
The programmer can also supply a function that decides whether the server has banned the client (for example, by examining the returned resource). If it reports a ban, an exception is raised.
Downloader’s features make it ideal for writing scrapers, as it can keep its network footprint small (due to the cache) and irregular (due to the random throttling interval).
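Put together, a scraping loop could look roughly like the sketch below. The class name Downloader, the stale_after and throttle_bounds parameters, and the ban-checking callback come from the description above; the method name, keyword names, and values are assumptions made purely for illustration, so consult the module's pydoc (see below) for the actual interface:

```python
from downloader import Downloader

def looks_banned(resource):
    # Hypothetical ban check: inspect the returned resource and decide
    # whether the server has banned us; Downloader raises if it has.
    return b"Access denied" in resource

d = Downloader(
    stale_after=24 * 60 * 60,    # re-fetch resources older than one day
    throttle_bounds=(5, 30),     # wait 5-30 seconds between network requests
    is_banned=looks_banned,      # assumed keyword for the ban-check function
)

page = d.download("https://example.com/", parse_html=True)  # assumed signature
for href in page.xpath("//a/@href"):  # links have been rewritten to absolute
    print(href)
```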
To install, simply run:
python setup.py install
For documentation, after installing, run:
python -m pydoc downloader
Download files
Source Distribution
File details
Details for the file downloader-py3-1.0.1.tar.gz.
File metadata
- Download URL: downloader-py3-1.0.1.tar.gz
- Upload date:
- Size: 7.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
File hashes
Algorithm | Hash digest
---|---
SHA256 | 599f18024a47075901ccc4408dc6d68d99f62ef915d62de2d528f3201a239b63
MD5 | d731b1024d98c17074e127bb4c5a7e17
BLAKE2b-256 | 2ce53101550fbb279c9751d50470c8f6f0cab00c5afb1f8af4031ff7e89e2f98