
Parallel function mapping using mpi4py

Project description


This package is a wrapper for mpi4py that allows functions to be run easily in parallel on a single computer or on multiple nodes of an HPC cluster.

The mpimap methods also function when no MPI environment is available, or when only a single processor is specified.


Once imported, create an instance of the Mpimap class:

mpi = mpimap.Mpimap()

To have each MPI process print its information, use:

At this point, all MPI processes still execute every line of the script being run, or every command sent to the interpreter. To put all worker nodes into a "listening" state, where they only accept commands sent from the head process, use:


From this point, commands in the running script, or commands sent to the interpreter, will only be processed by the head process. To determine the status of each worker process after it has been started, use:


To kill all worker nodes once finished, use:



To run code on each of the worker nodes once they have begun "listening" for jobs, put the code in a function that takes no arguments and use:
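This head/worker "listening" pattern can be illustrated with plain Python threads. This is not mpimap's API (mpimap communicates over MPI); it only shows the idea of a worker blocking until the head sends it work or a kill signal:

```python
import threading
import queue

def worker(cmd_q, out_q):
    """Stay in a "listening" state: block until the head sends a
    (function, argument) command; a None sentinel ends the loop."""
    while True:
        cmd = cmd_q.get()          # blocks until the head sends something
        if cmd is None:            # kill signal from the head
            break
        func, arg = cmd
        out_q.put(func(arg))

cmd_q, out_q = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(cmd_q, out_q))
t.start()

cmd_q.put((abs, -3))               # only the head issues work
cmd_q.put(None)                    # release the worker
t.join()
result = out_q.get()
print(result)                      # -> 3
```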

Mpimap includes a map() function which behaves like the builtin version included with Python:

output = mpi.map(func, args)

This will send a copy of the function to all worker nodes, then queue the args list, sending values to each node not currently running a job. The output preserves the input order.
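The order-preserving dispatch described above can be sketched serially. This is a toy stand-in, not mpimap's internal code: the shuffle simulates workers finishing jobs at different times, while results are still slotted back by their original index:

```python
import random

def ordered_map(func, args):
    """Dispatch jobs in arbitrary order (as free workers pick them up),
    but place each result back at its original position."""
    jobs = list(enumerate(args))
    random.shuffle(jobs)               # simulate out-of-order completion
    results = [None] * len(jobs)
    for i, arg in jobs:
        results[i] = func(arg)         # slot result by original index
    return results

print(ordered_map(lambda x: x * x, [1, 2, 3, 4]))  # -> [1, 4, 9, 16]
```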

Mpimap also includes the function gmap(). This is a special version of map() intended for running groups of jobs, where a job returning a "failed" state results in all remaining jobs within that group being cancelled:

output = mpi.gmap(func, args, groupind=0, failstate=None)

For this function, args is a list of lists. The argument groupind determines which entry in each argument list identifies the group a job belongs to. The argument failstate is the value each result is checked against to determine whether the job failed.
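A serial sketch of this cancellation rule (this is an illustration of the behaviour described above, not mpimap's implementation; it assumes each entry of args is passed to the function whole, and that cancelled jobs are recorded as failstate):

```python
def gmap_sketch(func, args, groupind=0, failstate=None):
    """Once any job in a group returns `failstate`, skip the group's
    remaining jobs and record them as `failstate`."""
    failed_groups = set()
    results = []
    for job in args:
        group = job[groupind]
        if group in failed_groups:
            results.append(failstate)   # cancelled without running
            continue
        out = func(job)
        if out == failstate:
            failed_groups.add(group)    # cancel the rest of this group
        results.append(out)
    return results

# Jobs whose second value is 0 "fail"; the group id is the first value.
check = lambda job: job[1] if job[1] != 0 else None
jobs = [['a', 1], ['a', 0], ['a', 5], ['b', 2]]
print(gmap_sketch(check, jobs))  # -> [1, None, None, 2]
```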

An additional static function is included, called gmatrix. This can be used to generate a list of all possible combinations of two lists, including an input id number, and to append constants to every combination:

x = [1, 2, 3]
y = [10, 20]
constants = ('a', 'b')
out = mpi.gmatrix(x, y, *constants)

This example would return:

>>> out
[('0', 1, 10, 'a', 'b'),
('1', 1, 20, 'a', 'b'),
('2', 2, 10, 'a', 'b'),
('3', 2, 20, 'a', 'b'),
('4', 3, 10, 'a', 'b'),
('5', 3, 20, 'a', 'b')]
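Based on the output shown above, gmatrix's behaviour can be reproduced with itertools.product (a sketch, not the package's own code):

```python
from itertools import product

def gmatrix_sketch(a, b, *constants):
    """Every (a, b) combination, prefixed with a string id and
    suffixed with the constants, matching the example output."""
    return [(str(i),) + pair + constants
            for i, pair in enumerate(product(a, b))]

out = gmatrix_sketch([1, 2, 3], [10, 20], 'a', 'b')
```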


To test the provided functions and compare the processing time of the builtin map() with mpi.map(), run the example Python script:

mpirun -n <Number of processors you want to use> python

The full working example is given here:

import mpimap
import time

def func_cheap(*args):
	"""Do nothing"""

def func_expensive(n):
	"""Basic factorising problem"""
	factors = set([])
	for i in xrange(n - 1):
		i = i + 1
		# Skip factors already found
		if i in factors:
			continue
		# Find factors
		if n % i == 0:
			factors.add(i)
			factors.add(n / i)

	return sorted(factors)

# Build mpi
mpi = mpimap.Mpimap(sleep=0, debug=False)

# Run function on all nodes

# Set up function and arguments
args = range(5000, 10000)

# Not in parallel
t0 = time.time()
res = map(func_expensive, args)
dt = time.time() - t0
print '\nNon Parallel: {}'.format(dt)

# Parallel
t0 = time.time()
res = mpi.map(func_expensive, args)
dt = time.time() - t0
print '\nParallel: {}\n'.format(dt)



Files for mpimap, version 1.0.4:

mpimap-1.0.4-py2-none-any.whl (5.2 kB), wheel, Python 2
mpimap-1.0.4.tar.gz (5.0 kB), source distribution
