
A high-level cross-protocol url-grabber

Project description

A high-level cross-protocol url-grabber.

Using urlgrabber, data can be fetched in three basic ways (a short
sketch follows the list):

urlgrab(url)    copy the file to the local filesystem
urlopen(url)    open the remote file and return a file object
                (like urllib2.urlopen)
urlread(url)    return the contents of the file as a string
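
All three entry points can be imported directly from the top-level
package. A minimal sketch, assuming a placeholder URL and local
filename (neither comes from the project description):

    from urlgrabber import urlgrab, urlopen, urlread

    url = 'http://example.com/some/file.txt'  # placeholder URL

    # urlgrab: copy the remote file to the local filesystem;
    # the return value is the path of the local copy.
    local_path = urlgrab(url, filename='file.txt')

    # urlopen: open the remote file and get back a file-like
    # object, as with urllib2.urlopen.
    fo = urlopen(url)
    first_line = fo.readline()
    fo.close()

    # urlread: read the whole file into a string in one call.
    contents = urlread(url)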

When using these functions (or methods), urlgrabber supports the
following features (several of them are combined in the sketch after
this list):

* identical behavior for http://, ftp://, and file:// urls
* http keepalive - faster downloads of many files by using
  only a single connection
* byte ranges - fetch only a portion of the file
* reget - for a urlgrab, resume a partial download
* progress meters - the ability to report download progress
  automatically, even when using urlopen!
* throttling - restrict bandwidth usage
* retries - automatically retry a download if it fails; the
  number of retries and failure types are configurable
* authenticated server access for http and ftp
* proxy support - support for authenticated http and ftp proxies
* mirror groups - treat a list of mirrors as a single source,
  automatically switching mirrors if there is a failure
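
Most of these features are enabled through keyword arguments. A
sketch under the assumption that the keyword names (retry, reget,
throttle, progress_obj) and the MirrorGroup interface match the
urlgrabber documentation; the mirror URLs and file paths are
placeholders:

    from urlgrabber.grabber import URLGrabber
    from urlgrabber.mirror import MirrorGroup
    from urlgrabber.progress import TextMeter

    # One grabber combining retries, reget, throttling, and a
    # text-mode progress meter.
    g = URLGrabber(retry=3,                  # retry up to 3 times on failure
                   reget='simple',           # resume partial downloads
                   throttle=100000,          # cap bandwidth at ~100 kB/s
                   progress_obj=TextMeter()) # report progress on stdout

    # A mirror group treats the list as a single source: on failure,
    # the same relative path is tried on the next mirror.
    mg = MirrorGroup(g, ['http://mirror1.example.com/pub/',   # placeholder
                         'http://mirror2.example.com/pub/'])  # mirrors
    mg.urlgrab('some/file.tar.gz', filename='file.tar.gz')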

Release history

* 3.10.2 (this version)
* 3.9.1
* 3.1.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Filename                    Size     File type  Python version  Upload date
urlgrabber-3.10.2.tar.gz    84.6 kB  Source     None            Feb 8, 2017
