
Parallel function mapping using mpi4py


Mpimap

This package is a wrapper for mpi4py that makes it easy to run functions in parallel, either on a single computer or across multiple nodes of an HPC cluster.

The mpimap methods also work when no MPI environment is present, or when only a single processor is specified.

Setup

Once imported, create an instance of the Mpimap class:

mpi = mpimap.Mpimap()

To have each MPI process print its information, use:

mpi.info()

At this point, every MPI process still executes all lines of the script being run (or commands sent to the interpreter). To put all worker processes into a "listening" state, in which they only accept commands sent from the head process, use:

mpi.start()

From this point on, commands in the running script (or sent to the interpreter) are processed only by the head process. To determine the status of each worker process after it has been started, use:

mpi.status()

To kill all worker nodes once finished, use:

mpi.stop()

Functions

To run code on each of the worker nodes once they are "listening" for jobs, wrap the code in a function that takes no arguments and use:

mpi.run(func)

Mpimap includes a map() function that behaves like the built-in version included with Python:

output = mpi.map(func, args)

This sends a copy of the function to all worker nodes and then queues the args list, dispatching values to whichever nodes are not currently running a job. The output preserves the input order.
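The equivalence with the serial pattern can be sketched in pure Python (no MPI involved; serial_map is an illustrative name, not part of mpimap):

```python
def serial_map(func, args):
    # Serial reference behaviour: apply func to each argument in turn.
    # mpi.map() spreads the same work across worker processes, but the
    # returned list is still ordered by input position, not by which
    # job happened to finish first.
    return [func(a) for a in args]

results = serial_map(lambda n: n * n, [3, 1, 2])
# results align index-for-index with the inputs: [9, 1, 4]
```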

Mpimap also includes the function gmap(). This is a special form of map() intended for running groups of jobs, where a job returning a "failed" state causes all remaining jobs in that group to be cancelled:

output = mpi.gmap(func, args, groupind=0, failstate=None)

For this function, args is a list of lists. The groupind argument determines which entry in each argument list identifies the job's group. The failstate argument is the value checked to determine whether a job succeeded or failed.
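The described cancellation semantics can be illustrated serially in pure Python. This is a sketch only: the name serial_gmap and the choice to report cancelled jobs as failstate are assumptions, not the library's behaviour, and a real parallel run may start some group members before a failure is detected:

```python
def serial_gmap(func, args, groupind=0, failstate=None):
    # Serial illustration of the gmap() semantics described above
    # (not the library's implementation): once any job in a group
    # returns the fail state, remaining jobs in that group are
    # skipped and recorded as the fail state.
    failed_groups = set()
    output = []
    for job in args:
        group = job[groupind]
        if group in failed_groups:
            output.append(failstate)  # cancelled without running
            continue
        result = func(job)
        if result == failstate:
            failed_groups.add(group)
        output.append(result)
    return output


def check(job):
    # Example job: treat negative values as failures (return None).
    group, value = job
    return value if value >= 0 else None

jobs = [('a', 1), ('a', -1), ('a', 2), ('b', 3)]
out = serial_gmap(check, jobs, groupind=0, failstate=None)
# ('a', 2) is cancelled because ('a', -1) failed: [1, None, None, 3]
```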

An additional static function, gmatrix, is included. It can be used to generate a list of all possible combinations of two lists, including an input ID number, and to append constants to every combination:

x = [1, 2, 3]
y = [10, 20]
constants = ('a', 'b')
out = mpi.gmatrix(x, y, *constants)

This example would return:

>>> out
[('0', 1, 10, 'a', 'b'),
('1', 1, 20, 'a', 'b'),
('2', 2, 10, 'a', 'b'),
('3', 2, 20, 'a', 'b'),
('4', 3, 10, 'a', 'b'),
('5', 3, 20, 'a', 'b')]
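The same table can be reproduced with itertools.product. The following stand-in (gmatrix_like is a hypothetical name, and the string IDs are inferred from the example output above) shows what gmatrix generates:

```python
from itertools import product

def gmatrix_like(xs, ys, *constants):
    # Enumerate every (x, y) combination, prefix a running ID
    # (a string, matching the example output above) and append
    # the constants to every row.
    return [(str(i),) + combo + constants
            for i, combo in enumerate(product(xs, ys))]

out = gmatrix_like([1, 2, 3], [10, 20], 'a', 'b')
# First row: ('0', 1, 10, 'a', 'b'); six rows in total.
```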

Example

To test the provided functions and compare the processing times of the built-in map() and mpimap.map(), run the example Python script:

mpirun -n <Number of processors you want to use> python example.py

The full working example is given here:

import mpimap
import time


def func_cheap(*args):
    """Do nothing."""
    return


def func_expensive(n):
    """Basic factorising problem."""
    factors = set()
    for i in range(1, n):
        # Skip values already found as factors
        if i in factors:
            continue
        # Record both members of each factor pair
        if n % i == 0:
            factors.add(i)
            factors.add(n // i)

    return sorted(factors)


# Build mpi
mpi = mpimap.Mpimap(sleep=0, debug=False)
mpi.info()
mpi.start()

# Run function on all nodes
mpi.run(func_cheap)

# Set up function and arguments
args = list(range(5000, 10000))

# Not in parallel
t0 = time.time()
res = list(map(func_expensive, args))
dt = time.time() - t0
print('\nNon Parallel: {}'.format(dt))

# Parallel
t0 = time.time()
res = mpi.map(func_expensive, args)
dt = time.time() - t0
print('\nParallel: {}\n'.format(dt))

mpi.stop()
mpi.exit()

Download files


Filename                       Size    File type  Python version
mpimap-1.0.4-py2-none-any.whl  5.2 kB  Wheel      py2
mpimap-1.0.4.tar.gz            5.0 kB  Source     None
