Asynchronous requests in Python without thinking about it.
Simple-requests allows you to get the performance benefit of asynchronous requests, without needing to use any asynchronous coding paradigms.
from simple_requests import Requests

# Creates a session and thread pool
requests = Requests()

# Sends one simple request; the response is returned synchronously.
login_response = requests.one('http://cat-videos.net/login?user=fanatic&password=c4tl0v3r')

# Cookies are maintained in this instance of Requests, so subsequent requests
# will still be logged-in.
profile_urls = [
    'http://cat-videos.net/profile/mookie',
    'http://cat-videos.net/profile/kenneth',
    'http://cat-videos.net/profile/itchy' ]

# Asynchronously send all the requests for profile pages
for profile_response in requests.swarm(profile_urls):
    # Asynchronously send requests for each link found on the profile pages.
    # These requests take precedence over those in the outer loop to minimize overall waiting.
    # Order doesn't matter this time either, so turn that off for a performance gain.
    for friends_response in requests.swarm(profile_response.links, maintainOrder = False):
        # Do something intelligent with the responses, like using
        # regex to parse the HTML (see http://stackoverflow.com/a/1732454)
        friends_response.html
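The `maintainOrder` flag above trades result ordering for throughput. As a rough standard-library analogy (this is not simple-requests itself, just a sketch of the same trade-off using `concurrent.futures`): ordered iteration behaves like `Executor.map`, while unordered iteration behaves like `as_completed`, which yields results as soon as they finish.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(delay):
    # Stand-in for an HTTP request; sleeps to simulate varying latency
    time.sleep(delay)
    return delay

delays = [0.3, 0.1, 0.2]

with ThreadPoolExecutor(max_workers=3) as pool:
    # Ordered: results come back in submission order (like maintainOrder=True),
    # so the slow first request delays everything behind it.
    ordered = list(pool.map(fetch, delays))

    # Unordered: results come back as they finish (like maintainOrder=False),
    # so fast responses are handled immediately.
    futures = [pool.submit(fetch, d) for d in delays]
    unordered = [f.result() for f in as_completed(futures)]

print(ordered)    # [0.3, 0.1, 0.2]
print(unordered)  # typically fastest-first, e.g. [0.1, 0.2, 0.3]
```

Dropping the ordering guarantee means the consumer loop never waits on a slow response just to preserve sequence, which is the performance gain the comment above refers to.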
1.1.1 (June 27, 2014)
- API Changes
- bundleParam parameter added to Requests.one, Requests.swarm, Requests.each
1.1.0 (May 01, 2014)
- API Changes
- defaultTimeout parameter added to Requests.__init__
- Bug Fixes
- No more errors / warnings on exit
- Fixes due to API changes in gevent 1.0
- Fixed a couple documentation errors
- Added a patch class with monkey patches for urllib3 (to reduce the likelihood of too many open connections/files at once) and httplib (to disregard servers that incorrectly report the Content-Length)
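The patch class above relies on monkey patching: replacing an attribute of a live module or object at runtime so all callers pick up the new behavior. A minimal illustration of the technique, using a hypothetical stand-in function rather than the actual urllib3/httplib patches:

```python
import types

def parse_content_length(header_value):
    # Naive "library" version: trusts the server's reported value
    return int(header_value)

# Stand-in for an imported library module
library = types.SimpleNamespace(parse_content_length=parse_content_length)

_original = library.parse_content_length

def patched_parse_content_length(header_value):
    # Patched version: tolerates malformed values instead of raising,
    # analogous in spirit to disregarding a misreported Content-Length
    try:
        return _original(header_value)
    except (TypeError, ValueError):
        return None

# The monkey patch itself: swap the attribute on the live object
library.parse_content_length = patched_parse_content_length

print(library.parse_content_length("1024"))  # 1024
print(library.parse_content_length("oops"))  # None
```

Because every caller looks the function up through the module attribute, the replacement takes effect globally without modifying the library's source.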