scrapelib
scrapelib is a library for making requests to less-than-reliable websites; as of 0.7 it is implemented as a wrapper around requests.

scrapelib originated as part of the Open States project, which scrapes the websites of all 50 state legislatures, and was therefore designed with features desirable when dealing with sites that have intermittent errors or require rate-limiting.
Advantages of using scrapelib over alternatives like httplib2 or simply using requests as-is:

- All of the power of the superb requests library
- HTTP, HTTPS, and FTP requests via an identical API
- Support for simple caching with pluggable cache backends
- Request throttling
- Configurable retries for non-permanent site failures
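The "pluggable cache backends" idea above amounts to a small duck-typed interface: any object with `get`/`set` methods can store responses keyed by request. The sketch below illustrates that pattern in plain Python; it is not scrapelib's actual implementation, and the names `MemoryCache`, `CachingFetcher`, and `fake_fetch` are invented for illustration.

```python
import hashlib

class MemoryCache:
    """Minimal cache backend: any object with get/set could be plugged in instead."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

class CachingFetcher:
    """Checks the cache before fetching; stores each response after a miss."""
    def __init__(self, fetch, cache):
        self.fetch = fetch
        self.cache = cache

    def get(self, url):
        key = hashlib.sha256(url.encode()).hexdigest()
        cached = self.cache.get(key)
        if cached is not None:
            return cached          # cache hit: no network request
        response = self.fetch(url)
        self.cache.set(key, response)
        return response

hits = {'n': 0}
def fake_fetch(url):
    """Stand-in for a real HTTP request so the example is self-contained."""
    hits['n'] += 1
    return f'body of {url}'

fetcher = CachingFetcher(fake_fetch, MemoryCache())
fetcher.get('http://example.com')
fetcher.get('http://example.com')  # second call is served from the cache
```

Because the backend is just an object with `get`/`set`, swapping in-memory storage for an on-disk or database-backed store requires no change to the fetching code.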
Written by James Turk <dev@jamesturk.net>; thanks to Michael Stephens for the initial urllib2/httplib2 version.
See https://github.com/jamesturk/scrapelib/graphs/contributors for contributors.
Requirements
python >=3.7
requests >= 2.0
Example Usage
Documentation: http://scrapelib.readthedocs.org/en/latest/
import scrapelib
s = scrapelib.Scraper(requests_per_minute=10)

# Grab Google front page
s.get('http://google.com')

# Will be throttled to 10 HTTP requests per minute
while True:
    s.get('http://example.com')
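The retry behavior the library advertises for "non-permanent site failures" can be understood as a retry loop with backoff around each request. The sketch below shows that general pattern in plain Python, not scrapelib's internal code; `get_with_retries` and `flaky_fetch` are illustrative names, and the parameter names are assumptions mirroring the idea of configurable attempts and wait time.

```python
import time

def get_with_retries(fetch, url, retry_attempts=3, retry_wait_seconds=0.01):
    """Retry transient failures with exponential backoff (illustrative sketch)."""
    for attempt in range(retry_attempts + 1):
        try:
            return fetch(url)
        except IOError:
            if attempt == retry_attempts:
                raise                      # permanent failure: give up
            time.sleep(retry_wait_seconds * (2 ** attempt))

# Simulated flaky endpoint: fails twice, then succeeds.
calls = {'n': 0}
def flaky_fetch(url):
    calls['n'] += 1
    if calls['n'] < 3:
        raise IOError('intermittent failure')
    return '200 OK'

result = get_with_retries(flaky_fetch, 'http://example.com')
```

Here the first two attempts raise, the loop sleeps briefly between them, and the third attempt succeeds, so the caller never sees the intermittent errors.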
Hashes for scrapelib-2.0.0-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | fb3f9fb85833c929454fad0bda65dd8e29a6583ebef6442354eba3d7aa52f8bb |
| MD5 | 100508101e5be4a3729407a1232663ed |
| BLAKE2b-256 | db05eb87679d73ed737a8bccb48e28dafc1ea3ea783efa1ccd23715ce9316a0c |