An implementation of the Differential Evolution algorithm.

# DifferentialEvolution

This module is an implementation of the Differential Evolution (DE) algorithm. Proposed by Price and Storn in a series of papers [1, 2, 3], Differential Evolution is a long-established evolutionary algorithm that aims to optimize functions on a continuous domain.

Differential Evolution requires neither qualitative nor quantitative information about the objective function (such as continuity or differentiability). The algorithm is useful and recommended for black-box functions, or for functions with many local optima that might trick other algorithms.

The idea behind the algorithm is to construct and improve populations (sets of points in the domain) iteration after iteration, so that those points move toward the optima.
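Concretely, in the DE/rand/1/bin scheme this module implements (see below), each generation perturbs every individual by a scaled difference of two other individuals, mixes coordinates through binomial crossover, and keeps the trial point only if it does not worsen the objective. A minimal sketch of one generation (the function name, defaults, and population layout here are illustrative, not the module's internals):

```python
import numpy as np

def de_rand_1_bin_step(pop, f, F=0.75, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin on a population of shape (N, n)."""
    rng = rng or np.random.default_rng()
    N, n = pop.shape
    new_pop = pop.copy()
    for i in range(N):
        # pick three distinct individuals, all different from i
        r1, r2, r3 = rng.choice([j for j in range(N) if j != i], size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])  # differential mutation
        mask = rng.random(n) < CR                   # binomial crossover
        mask[rng.integers(n)] = True                # keep at least one mutant coordinate
        trial = np.where(mask, mutant, pop[i])
        if f(trial) <= f(pop[i]):                   # greedy selection
            new_pop[i] = trial
    return new_pop
```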

This implementation assumes that the domain is R^n and that the objective function is real-valued. There are several versions of the DE algorithm; this implementation uses DE/rand/1/bin (see [4]). The following code illustrates the basic usage (also see toy.py):

```python
import numpy as np
from DE import DifferentialEvolution as DE

'''
Simplest possible use
'''

# define a function from R^n to R to minimize
def f(x):
    return np.dot(x, x)

# set the dimension of the search space
n = 2
# init a DifferentialEvolution instance
optimizer = DE(f, n, iterations=1000, seed=1)
# run the optimization
optimizer.run()
# then you can save your results
optimizer.write_results('results.json')
# or just show them on the screen
print(optimizer.get_results())
```

This writes the following to results.json:

{ "x": [ -0.00018860256872277426, 0.10069513518651194 ], "fx": 0.010139545821158842, "count_general_enhances": 0.1735, "count_best_enhances": 0.242 }

Where:

- **x:** the optimum found;
- **fx:** the value of f(x);
- **count_general_enhances:** the mean number of enhancements per individual per generation. It is a good metric to know whether the algorithm is improving the solution; if this value is low, try changing the crossover probability or the scaling factor (see the sketch below);
- **count_best_enhances:** the mean number of enhancements of the best individual per generation. Again, if this value is low, try changing the crossover probability or the scaling factor.
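If those enhancement rates stay low, one way to proceed is to re-run with different values of the crossover probability and scaling factor (both parameters are described in the overview below). A minimal sketch, continuing the example above with illustrative values:

```python
# retry with a different crossover probability and scaling factor
optimizer = DE(f, n, crossover_p=0.9, scaling_factor=0.5, iterations=1000, seed=1)
optimizer.run()
print(optimizer.get_results())
```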

## Parallelizing

Since DE is a stochastic algorithm, one should run it at least a few times to be confident in the results. To run the algorithm several times in parallel, use:

```python
import numpy as np
from DE import DifferentialEvolution as DE

# define a function from R^n to R to minimize
def f(x):
    return np.dot(x, x)

# set the dimension of the search space
n = 2
# init a DifferentialEvolution instance with 10 trials
optimizer = DE(f, n, iterations=1000, trials=10)
# run the optimization using 5 processes
optimizer.run(processes=5)
# then you can save your results
optimizer.write_results('results.json')
```

It is also possible to provide one seed for each trial:

```python
optimizer = DE(f, n, iterations=1000, trials=10, seed=range(1, 11))
```

This returns:

{ "x": [ -1.633857962321789e-18, 4.967348489700612e-19 ], "fx": 2.916237351223618e-36, "mean_general_enhances": 0.2606, "mean_best_enhances": 0.3434, "general_enhances_at_the_best": 0.28925, "best_enhances_at_the_best": 0.382, "seed_of_the_best": 6 }

Where:

- **x:** the optimum found over all trials;
- **fx:** the value of f(x);
- **mean_general_enhances:** the mean number of enhancements per individual per generation over all trials;
- **mean_best_enhances:** the mean number of enhancements of the best individual per generation over all trials;
- **general_enhances_at_the_best:** the mean number of enhancements per individual per generation on the best trial;
- **best_enhances_at_the_best:** the mean number of enhancements of the best individual per generation on the best trial;
- **seed_of_the_best (only if seed is provided):** the seed from which the optimum over all trials was reached.
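Since the results are written as plain JSON, they can be inspected or post-processed with the standard library; a minimal sketch:

```python
import json

# load the saved results and inspect the best point found
with open('results.json') as fh:
    results = json.load(fh)
print(results['x'], results['fx'])
```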

## An overview of the parameters

The following code shows the full set of parameters (also see full.py):

```python
optimizer = DE(
    f,
    n,
    N=2*n,
    crossover_p=False,
    scaling_factor=0.75,
    populate_method='cube',
    populate_data=(0, 1),
    iterations=100,
    base_change=False,
    get_history=False,
    seed=False,
    trials=1
)
optimizer.run(processes=1)
```

The parameters are:

- **f:** the objective function; it must take a numpy array of size **n** as input and return a real number;
- **n:** the dimension of the search space;
- **N (optional, default is 2n, or 4 if n < 2):** population size (see [4]);
- **crossover_p (optional, default is the optimal value from [5]):** crossover probability (see [4]);
- **scaling_factor (optional, default is 0.75):** scaling factor (see [4]);
- **populate_method (optional, default is 'cube'):** method to draw the initial population; must be 'cube', 'sphere' or 'given':
  - if 'cube', the initial population will be drawn independently and uniformly on the cube [low, high]^n, where **populate_data=(low, high)**;
  - if 'sphere', the initial population will be drawn independently and uniformly on the sphere with center **loc** and radius **radius**, where **populate_data=(loc, radius)**;
  - if 'given', the initial population will be given by the user through the parameter **populate_data**; it must be an ndarray of shape (n, N) where each individual is a column (see the sketch at the end of this section);
- **populate_data (optional, default is (0, 1)):** the data for the chosen populate_method, as explained above;
- **iterations (recommended, default is 100):** the number of iterations until stopping;
- **base_change (optional, default is False):** False if the user does not want to change basis during the algorithm, or an integer **k** such that after every **k** iterations the algorithm performs a change of basis (see [5]);
- **get_history (optional, default is False):** True if the user wants to output the full history of the iterations, returning the state of the algorithm at each iteration;
- **seed (optional, default is False):** an integer if trials == 1, or a list of **k** integers if trials == k;
- **trials (optional, default is 1):** the number of trials; each one will execute the algorithm again with a different seed.

And for the *run()* method:

- **processes (optional, default is 1):** the number of parallel processes to use.
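As an example of the 'given' option, here is a minimal sketch of supplying a user-defined initial population (the values and bounds are illustrative; only the (n, N) shape convention comes from the description above):

```python
import numpy as np
from DE import DifferentialEvolution as DE

# define a function from R^n to R to minimize
def f(x):
    return np.dot(x, x)

n, N = 2, 8
# each individual is a column, so the initial population has shape (n, N)
initial_population = np.random.default_rng(1).uniform(-1.0, 1.0, size=(n, N))

optimizer = DE(
    f, n, N=N,
    populate_method='given',
    populate_data=initial_population,
    iterations=100
)
optimizer.run()
print(optimizer.get_results())
```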

## References

[1] Rainer Storn. *On the usage of differential evolution for function optimization.* In Proceedings of North American Fuzzy Information Processing, pages 519–523. IEEE, 1996.

[2] Rainer Storn and Kenneth Price. *Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces.* Journal of global optimization, 11(4):341–359, 1997.

[3] Kenneth V Price. *Differential evolution: a fast and simple numerical optimizer.* In Proceedings of North American Fuzzy Information Processing, pages 524–527. IEEE, 1996.

[4] Karol R Opara and Jaroslaw Arabas. *Differential evolution: A survey of theoretical analyses.* Swarm and evolutionary computation, 44:546–558, 2019.

[5] (wait)
