Indago

Numerical optimization framework

Indago is a Python 3 module for numerical optimization.

Installation

For the easiest installation, use

pip3 install indago

or if you wish to update your existing Indago installation

pip3 install indago --upgrade

Dependencies

The following packages should be installed using apt:

  • python3
  • python3-pip
  • python3-tk

sudo apt install python3 python3-pip python3-tk

After installing these packages with the above command, the additional Python packages should be installed using pip from requirements.txt:

pip install -r requirements.txt

Optimization problem setup

The setup of the optimization problem in Indago is the same regardless of which optimization algorithm is used.

import numpy as np

def evaluation(x):
    obj = np.sum(x ** 2)  # This is the minimization objective
    constr1 = x[0] - x[1]  # This is the constraint x_0 - x_1 <= 0
    constr2 = -np.sum(x)  # This is the constraint sum x_i >= 0 (i.e. -sum x_i <= 0)
    return obj, constr1, constr2

from indago.pso import PSO
algorithm = PSO()  # Or any other algorithm imported from Indago

# Optimization variables settings
algorithm.dimensions = 10
algorithm.lb = -10 # 1d np.array of lower bound values (if a scalar value is given, it will automatically be transformed to a 1d np.array of size dimensions, filled with the value)
algorithm.ub = 10 + np.arange(algorithm.dimensions) # 1d np.array of upper bound values (a scalar is expanded in the same way as for lb)

# Objectives and constraints settings
algorithm.objectives = 1  # Number of objectives (optional parameter, default is 1)
algorithm.objective_labels = ['Squared sum minimization']  # Labels for objectives (optional parameter, automatically generated labels if not set)
algorithm.constraints = 2  # Number of constraints (optional parameter, default is 0)
algorithm.constraint_labels = ['Constraint 1', 'Constraint 2']  # Labels for constraints (optional parameter, automatically generated labels if not set)
algorithm.evaluation_function = evaluation  # Set the evaluation function

# Running the optimization
result = algorithm.optimize()
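
The optimize() call returns a result object holding the best solution found; its fitness and design vector can be read off directly:

min_f = result.f  # fitness at minimum, scalar number
x_min = result.X  # design vector at minimum, 1d np.array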

Algorithms

Indago is a Python module for numerical optimization of a real-valued fitness function over a real parameter domain. It was developed at the Department for Fluid Mechanics and Computational Engineering of the University of Rijeka, Faculty of Engineering, by Stefan Ivić, Siniša Družeta, and others.

Indago is developed for in-house research and teaching purposes and is not officially supported in any way; it is not properly documented and probably needs more testing. But hey, we use it, it works for us, and it's free! Anyway, proceed with caution, as you would with any other beta-level software.

As of now, Indago consists of four stochastic optimizers, namely Particle Swarm Optimization (PSO), Fireworks Algorithm (FWA), Squirrel Search Algorithm (SSA) and Differential Evolution (DE). They are all available through the same API, which was designed to be as accessible as possible. Indago relies heavily on NumPy, so the inputs and outputs of the optimizers are mostly NumPy arrays. Besides NumPy and a few other dependencies here and there (a few SciPy functions and the rich module for verbose output), Indago is pure Python. Indago optimizers also include some of our original research improvements, so feel free to try those as well. And don't forget to cite. :)

Particle Swarm Optimization

Using Indago is easy. Let us use PSO as an example. First, we need to import NumPy and Indago PSO, and then initialize an optimizer object:

import numpy as np
from indago.pso import PSO
pso = PSO() 

Then, we must provide a goal function which needs to be minimized, say:

def goalfun(x):  # must take a 1d np.array
    return np.sum(x**2)  # must return a scalar number
pso.evaluation_function = goalfun

Now we can define optimizer inputs:

pso.method = 'Vanilla' # we will use Standard PSO; the other available option is 'TVAC' [1]; default method='Vanilla'
pso.dimensions = 20 # number of variables in the design vector (x)
pso.lb = np.ones(pso.dimensions) * -1 # 1d np.array of lower bound values (if scalar value is given, it will automatically be transformed to 1d np.array of size dimensions, filled with the value)
pso.ub = np.ones(pso.dimensions) * 1 # 1d np.array of upper bound values (if scalar value is given, it will automatically be transformed to 1d np.array of size dimensions, filled with the value)
pso.iterations = 1000 # default iterations=100*dimensions
pso.maximum_evaluations = 5000 # optional maximum allowed number of function evaluations; when surpassed, optimization is stopped (if reached before pso.iterations are exhausted)
pso.target_fitness = 10**-3 # optional fitness threshold; when reached, optimization is stopped (if it didn't already stop due to exhausted pso.iterations or pso.maximum_evaluations)

We also need to provide the optimization method parameters:

pso.params['swarm_size'] = 15 # number of PSO particles; default swarm_size=dimensions
pso.params['inertia'] = 0.8 # PSO parameter known as inertia weight w (should range from 0.5 to 1.0); other available options are 'LDIW' (w linearly decreasing from 1.0 to 0.4) and 'anakatabatic'; default inertia=0.72
pso.params['cognitive_rate'] = 1.0 # PSO parameter also known as c1 (should range from 0.0 to 2.0); default cognitive_rate=1.0
pso.params['social_rate'] = 1.0 # PSO parameter also known as c2 (should range from 0.0 to 2.0); default social_rate=1.0

If we want to use our novel adaptive inertia weight technique [2], we invoke it by:

pso.params['inertia'] = 'anakatabatic'

Then we also need to specify the anakatabatic model:

pso.params['akb_model'] = 'Languid' # [3,4], other options are 'FlyingStork', 'MessyTie', 'RightwardPeaks', 'OrigamiSnake' [2]

If we want, we can enable reporting during the optimization process by providing the verbosity level argument:

pso.verbose = 1 # the available options are 0, 1 and 2; default verbose=0

Finally, we can start the optimization and get the results:

result = pso.optimize()
min_f = result.f # fitness at minimum, scalar number
x_min = result.X # design vector at minimum, 1d np.array

And that's it!
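
Putting it all together, a minimal complete PSO run (a sketch using only the settings described above, with defaults everywhere else) looks like this:

import numpy as np
from indago.pso import PSO

def goalfun(x):  # must take a 1d np.array
    return np.sum(x**2)  # must return a scalar number

pso = PSO()
pso.evaluation_function = goalfun
pso.dimensions = 20
pso.lb = -1  # scalar bounds are broadcast to all dimensions
pso.ub = 1
result = pso.optimize()
print(result.f, result.X)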

Fireworks Algorithm

If we want to use FWA [5], we just have to import it instead of PSO:

from indago.fwa import FWA
fwa = FWA()

Now we can proceed in the same manner as with PSO. For FWA, the only method available is basic FWA:

fwa.method = 'Vanilla' # default

In FWA we have to set the following method parameters:

fwa.params['n'] = 20 # default n=dimensions
fwa.params['m1'] = 10 # default m1=dimensions/2
fwa.params['m2'] = 10 # default m2=dimensions/2
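
Apart from these method parameters, FWA is set up and run through the same API as PSO; a minimal sketch:

fwa.dimensions = 20
fwa.lb = -1
fwa.ub = 1
fwa.evaluation_function = goalfun  # e.g. the goal function from the PSO example
result = fwa.optimize()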

Squirrel Search Algorithm

We can also try our luck with SSA [6]. We initialize it like this:

from indago.ssa import SSA
ssa = SSA()

In SSA, the only available method is 'Vanilla' (which is set as default), and there is only one mandatory method parameter:

ssa.params['acorn_tree_attraction'] = 0.6 # ranges from 0.0 to 1.0; default acorn_tree_attraction=0.5

Optionally, we can define a few other SSA parameters:

ssa.params['predator_presence_probability'] = 0.1 # default
ssa.params['gliding_constant'] = 1.9 # default
ssa.params['gliding_distance_limits'] = [0.5, 1.11] # default

Differential Evolution

Lastly, if we want to use DE [7], we initialize it in the same way as with the other methods:

from indago.de import DE
de = DE()

There are two DE methods implemented, namely 'SHADE' and 'LSHADE' (SHADE with linear population size reduction [7]). Say we want to use 'LSHADE':

de.method = 'LSHADE' # default method='SHADE'

Both DE methods use the following parameters:

de.params['initial_population_size'] = 200 # default initial_population_size=dimensions*18
de.params['external_archive_size_factor'] = 2.6 # default
de.params['historical_memory_size'] = 4 # default historical_memory_size=6
de.params['p_mutation'] = 0.2 # default p_mutation=0.11

Multiple objectives and constraints handling

The optimization algorithms implemented in Indago can handle nonlinear constraints defined as c(x) <= 0. Constraint handling is enabled by a multi-level comparison which is able to rank candidates on multiple constraints. Multi-objective minimization problems can also be treated in Indago by defining a weighted-sum fitness, thus reducing the problem to a single-objective one.

The following example prepares PSO optimizer for an evaluation which returns two objectives and two constraints:

pso.objectives = 2
pso.objective_labels = ['Route length', 'Passing time']
pso.objective_weights = [0.4, 0.6]
pso.constraints = 2
pso.constraint_labels = ['Obstacles intersection length', 'Curvature limit']

The evaluation function needs to be modified accordingly:

def evaluate(x):
    # Toy objectives and constraints, for illustration only
    o1 = np.sum(x ** 2)  # minimization objective 1
    o2 = np.sum(np.abs(x))  # minimization objective 2
    c1 = x[0] - x[1]  # constraint defined as c1 <= 0
    c2 = -np.sum(x)  # constraint defined as c2 <= 0
    return o1, o2, c1, c2

Stopping criteria

Three criteria can be enabled for stopping the Indago optimization:

  • Stop if maximum number of iterations (optimizer.iterations) is reached,
  • Stop if maximum number of evaluations (optimizer.maximum_evaluations) is reached and
  • Stop if target fitness (optimizer.target_fitness) is reached.

The optimization stops when any of the specified criteria is reached. The maximum number of iterations is a mandatory stopping condition; if not set, it is automatically calculated as iterations = 100 * dimensions. The maximum number of evaluations and target fitness criteria are used only if they are specified.
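
For example, all three criteria can be combined on a single optimizer:

optimizer.iterations = 1000  # mandatory; auto-calculated as 100 * dimensions if not set
optimizer.maximum_evaluations = 5000  # optional
optimizer.target_fitness = 10**-3  # optional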

Optimization monitoring

Three different modes of optimization monitoring can be used, by specifying the parameter optimizer.monitoring:

  • 'none' - no output is displayed (this is the default behaviour),
  • 'basic' - convergence can be monitored through one line of output per iteration and
  • 'dashboard' - a dashboard shows live values of the most important parameters for tracking optimization convergence.
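
For example, to follow the convergence with one line of output per iteration:

optimizer.monitoring = 'basic'  # default monitoring='none'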

Parallel evaluation

Indago is able to evaluate a group of candidates (e.g. the swarm in PSO) in parallel. This is especially useful for computationally expensive engineering problems whose evaluation relies on simulations such as CFD or FEM.

Indago utilizes the multiprocessing module for parallelization and it can be enabled by specifying the number_of_processes parameter available for each optimizer:

pso = PSO()
pso.number_of_processes = 4 # use 'maximum' for employing all available processors/cores

Note that it scales well only on relatively slow goal functions. Also keep in mind that Python multiprocessing sometimes does not work when initiated from imported code, so you need to have the optimization run call wrapped in if __name__ == '__main__':, as sketched below.
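
A minimal sketch of a safely parallelized run, with the optimization call guarded as described above:

import numpy as np
from indago.pso import PSO

def goalfun(x):
    return np.sum(x**2)

if __name__ == '__main__':
    pso = PSO()
    pso.evaluation_function = goalfun
    pso.dimensions = 20
    pso.lb, pso.ub = -1, 1
    pso.number_of_processes = 4  # or 'maximum'
    result = pso.optimize()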

When dealing with simulations, one mostly needs to specify input files and a directory in which the simulation runs. If execution is parallel, these file/directory names need to be unique to avoid possible conflicts in simulation files. In order to facilitate this, Indago offers the option of passing a unique string to the evaluation function, which enables execution of simulations without the possibility of conflicts.

To enable passing of the unique string to the evaluation function, set forward_unique_str to True:

pso.forward_unique_str = True

Additionally, the evaluation function needs another argument through which the unique string is received:

def evaluation(X, unique_str=None):
    # Prepare a simulation case in a new file and/or a new directory whose names are based on unique_str.
    # Run simulation and extract results
    return objective
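
For instance, the unique string might be used to build a per-evaluation working directory; the directory handling below is plain Python, shown only as an illustration:

import os

def evaluation(X, unique_str=None):
    # Assumes forward_unique_str = True, so unique_str is provided by Indago.
    # Each evaluation gets its own directory, so parallel runs cannot collide.
    workdir = os.path.join('runs', unique_str)
    os.makedirs(workdir, exist_ok=True)
    # ... write simulation input files into workdir, run the simulation, extract results ...
    objective = 0.0  # placeholder for the value extracted from the simulation results
    return objective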

Results and convergence plot

Some intermediate optimization results are stored in optimizer.results, which can be explored/analyzed after the optimization is finished.

There is also a utility function available for visualizing optimization convergence, which plots the convergence of all defined objectives and constraints:

pso.results.plot_convergence()

CEC 2014

Among other stuff, Indago also includes the CEC 2014 test suite [8], comprising 30 test functions for benchmarking real-parameter optimization methods. You can use it by importing it like this:

from indago.benchmarks import CEC2014

Then, you have to initialize it for a specific dimensionality of the test functions:

test = CEC2014(20) # initialization of 20-dimensional functions; you can also use 10, 50 and 100

Now you can use specific test functions (test.F1, test.F2, ... up to test.F30); they all take a 1d np.array of size 10/20/50/100 and return a scalar number. Alternatively, you can iterate through the built-in list of them all:

test_results = []
for f in test.functions:
    optimizer.evaluation_function = f
    test_results.append(optimizer.optimize().f)
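
Note that the optimizer's setup has to match the suite: the dimensionality must equal the one used to initialize CEC2014, and the CEC 2014 functions are defined on the search range [-100, 100] in every dimension [8]. A setup sketch:

optimizer.dimensions = 20  # must match CEC2014(20)
optimizer.lb = -100
optimizer.ub = 100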

Have fun!

References:

  1. Ratnaweera, A., Halgamuge, S. K., & Watson, H. C. (2004). Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on evolutionary computation, 8(3), 240-255.

  2. Družeta, S., & Ivić, S. (2020). Anakatabatic Inertia: Particle-wise Adaptive Inertia for PSO, arXiv:2008.00979 [cs.NE].

  3. Družeta, S., & Ivić, S. (2017). Examination of benefits of personal fitness improvement dependent inertia for Particle Swarm Optimization. Soft Computing, 21(12), 3387-3400.

  4. Družeta, S., Ivić, S., Grbčić, L., & Lučin, I. (2019). Introducing languid particle dynamics to a selection of PSO variants. Egyptian Informatics Journal, 21(2), 119-129.

  5. Tan, Y., & Zhu, Y. (2010, June). Fireworks algorithm for optimization. In International conference in swarm intelligence (pp. 355-364). Springer, Berlin, Heidelberg.

  6. Jain, M., Singh, V., & Rani, A. (2019). A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm and evolutionary computation, 44, 148-175.

  7. Tanabe, R., & Fukunaga, A. S. (2014). Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), pp. 1658–1665, Beijing, China.

  8. Liang, J. J., Qu, B. Y., & Suganthan, P. N. (2013). Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou China and Technical Report, Nanyang Technological University, Singapore, 635.
