
A simple interface for fetching URL resources in parallel without threads

Project description

This module provides an easy-to-use interface for running multiple cURL URL fetches in parallel in Python, without threads.

To test it, go to the command line, cd to this folder and run

./test.py

This should run 100 searches through Google’s API, printing the results. To see how much of a performance difference parallel requests make, try altering the default of 10 parallel requests using the optional script argument, and timing how long each run takes:

time ./test.py 1
time ./test.py 20

The first run allows only one request at a time, serializing the calls; I see this taking around 100 seconds. The second has 20 requests in flight at a time, and takes 11 seconds! Be warned though: it’s possible to overwhelm your target if you fire too many requests at once. You may end up with your IP banned from accessing that server, or hit other API limits.

The class is designed to make it easy to run multiple curl requests in parallel, rather than waiting for each one to finish before starting the next. Under the hood it uses curl’s multi interface (pycurl’s CurlMulti, the equivalent of curl_multi_exec), but since I find that interface painfully confusing, I wanted one that corresponded to the tasks I actually wanted to run.
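For comparison, here is roughly what driving pycurl’s multi interface directly looks like. This is a minimal sketch for a single URL, not code from this module:

    import pycurl
    from io import BytesIO

    # A minimal sketch of the raw multi interface for one fetch.
    buffer = BytesIO()
    handle = pycurl.Curl()
    handle.setopt(pycurl.URL, 'http://example.com')
    handle.setopt(pycurl.WRITEFUNCTION, buffer.write)

    multi = pycurl.CurlMulti()
    multi.add_handle(handle)

    # perform() must be pumped in a loop, interleaved with select(),
    # until curl reports that no transfers remain in flight.
    num_handles = 1
    while num_handles:
        while True:
            ret, num_handles = multi.perform()
            if ret != pycurl.E_CALL_MULTI_PERFORM:
                break
        if num_handles:
            multi.select(1.0)

    multi.remove_handle(handle)
    print(buffer.getvalue()[:80])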

To use it, easy_install pycurl, import pyparallelcurl, and then create a ParallelCurl object:

parallelcurl = ParallelCurl(10)

The first argument to the constructor is the maximum number of outstanding fetches to allow before blocking to wait for one to finish. You can change this later using setmaxrequests(). The second, optional argument is an array of curl options in the format used by curl_setopt_array().
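For example, a sketch of constructing the object with options; the exact options format in this Python port is assumed here to be a dictionary keyed by pycurl option constants, mirroring curl_setopt_array():

    import pycurl
    from pyparallelcurl import ParallelCurl

    # Assumed format: a dict keyed by pycurl option constants.
    curl_options = {
        pycurl.SSL_VERIFYPEER: False,
        pycurl.SSL_VERIFYHOST: False,
        pycurl.USERAGENT: 'Parallel Curl test script',
    }

    parallelcurl = ParallelCurl(10, curl_options)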

Next, start a URL fetch:

parallelcurl.startRequest('http://example.com', on_request_done, {'somekey': 'somevalue'})

The first argument is the address that should be fetched. The second is the callback function that will be run once the request is done. The third is a ‘cookie’ that can contain arbitrary data to be passed to the callback.

This startRequest call will return immediately, as long as fewer than the maximum number of requests are outstanding. Once the request is done, the callback function will be called, e.g.:

on_request_done(content, 'http://example.com', ch, {'somekey': 'somevalue'})

The callback should take four arguments. The first is a string containing the content found at the URL. The second is the original URL requested. The third is the curl handle of the request, which can be queried to get the results. The fourth is the arbitrary user-defined ‘cookie’ value that you associated with this request.
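As a sketch, a callback that prints the status code and size of each response might look like this (it assumes the handle passed in is still open, so ch.getinfo() can be called on it; getinfo() and HTTP_CODE are standard pycurl):

    import pycurl

    def on_request_done(content, url, ch, cookie):
        # ch is the pycurl handle; it can be queried for per-request
        # results, e.g. the HTTP status code of the response.
        status = ch.getinfo(pycurl.HTTP_CODE)
        print('%s: status %d, %d bytes, cookie=%s'
              % (url, status, len(content), cookie))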

Since you may have requests outstanding at the end of your script, you MUST call

parallelcurl.finishallrequests()

before you exit. If you don’t, the final requests may be left unprocessed! This is also called in the destructor of the class, but it’s definitely best practice to call it explicitly.
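Putting it all together, a complete script follows this shape (a sketch; the URLs and search terms are just placeholders):

    import pycurl
    from pyparallelcurl import ParallelCurl

    def on_request_done(content, url, ch, term):
        status = ch.getinfo(pycurl.HTTP_CODE)
        print('%s (%s): status %d, %d bytes'
              % (term, url, status, len(content)))

    parallelcurl = ParallelCurl(10)

    # Placeholder URLs: any list of fetchable addresses works here.
    for term in ['apple', 'banana', 'cherry']:
        parallelcurl.startRequest('http://example.com/?q=' + term,
                                  on_request_done, term)

    # Block until every outstanding request has completed.
    parallelcurl.finishallrequests()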

By Pete Warden <pete@petewarden.com>, freely reusable, see http://petewarden.typepad.com for more

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution


pyparallelcurl-0.0.4-py2.6.egg (10.7 kB)

Uploaded: Egg

File details

Details for the file pyparallelcurl-0.0.4-py2.6.egg.

File metadata

File hashes

Hashes for pyparallelcurl-0.0.4-py2.6.egg
SHA256: 92653b2830d893966fdf366935624be89a94382b63aad3c8b5d5cf377102e7e6
MD5: 547135676f01e83338fb36dd78765360
BLAKE2b-256: 982178811ee75e6856f2b4fbce2c7e3c31ce8ca488390e22b03da8370892556d

