Package mystic :: Module differential_evolution

This module contains a collection of optimization routines based on Storn and Price's differential evolution algorithm. The core solver algorithm was adapted from Phillips's DETest.py. An alternate solver is provided that follows the logic in Price, Storn, and Lampinen -- in that both a current generation and a trial generation are maintained, and all vectors for creating difference vectors and mutations draw from the current generation, which remains invariant until the end of the iteration.

A minimal interface that mimics a scipy.optimize interface has also been implemented, and functionality from the mystic solver API has been added with reasonable defaults. DifferentialEvolutionSolver2 is used by the minimal interface, unless 'invariant_current' is set to False.

Minimal function interface to optimization routines:

   diffev      -- Differential Evolution (DE) solver

The corresponding solvers built on mystic's AbstractSolver are:

   DifferentialEvolutionSolver  -- a DE solver
   DifferentialEvolutionSolver2 -- Storn & Price's DE solver

Mystic solver behavior activated in diffev:

   - EvaluationMonitor = Sow()
   - StepMonitor = Sow()
   - enable_signal_handler()
   - strategy = Best1Exp
   - termination = ChangeOverGeneration(ftol,gtol), if gtol provided
         ''      = VTR(ftol), otherwise
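The two termination criteria named above can be sketched as simple predicates over the per-generation best-cost history. This is a hedged illustration only -- the VTR and ChangeOverGeneration closures below are stand-ins, not mystic's actual termination classes:

```python
# Illustrative stand-ins for mystic's termination conditions (not its API).

def VTR(ftol):
    """Terminate when the best cost drops below the tolerance ftol."""
    def check(best_costs):
        return best_costs[-1] < ftol
    return check

def ChangeOverGeneration(ftol, gtol):
    """Terminate when the best cost has improved by no more than ftol
    over the last gtol generations."""
    def check(best_costs):
        if len(best_costs) <= gtol:
            return False
        return abs(best_costs[-gtol - 1] - best_costs[-1]) <= ftol
    return check

# history of the best cost seen at each generation
history = [10.0, 2.0, 1.0, 0.999, 0.999, 0.999]
print(VTR(5e-3)(history))                      # False: cost is still above ftol
print(ChangeOverGeneration(5e-3, 3)(history))  # True: stalled for 3 generations
```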

Storn & Price's DE solver has also been implemented to use the "map" interface. Mystic enables the user to override the standard python map function with their own 'map' function, or one of the map functions provided by the pathos package, for distributed and high-performance computing.
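The "map" idea can be sketched as follows: the solver scores the whole trial population through a user-replaceable map function, so a parallel pool's map can be dropped in without changing the evaluation code. The function names here are hypothetical and only illustrate the pattern, not mystic's actual API:

```python
# Sketch of a pluggable "map" interface for population evaluation.

def evaluate_population(cost, population, map=map):
    """Score every candidate with the supplied map function.
    The default is the serial builtin map."""
    return list(map(cost, population))

def sphere(x):
    return sum(xi * xi for xi in x)

population = [[0.0, 0.0], [1.0, 2.0], [3.0, 4.0]]

# serial evaluation with the builtin map
print(evaluate_population(sphere, population))  # [0.0, 5.0, 25.0]

# a parallel pool's map has the same call shape, e.g. (assuming pathos
# is installed):
#   from pathos.pools import ProcessPool
#   evaluate_population(sphere, population, map=ProcessPool().map)
```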


Practical advice for how to configure the Differential Evolution Solver for your own objective function can be found on R. Storn's web page, and is reproduced here:

   First try the following classical settings for the solver configuration:
   Choose a crossover strategy (e.g. Rand1Bin), set the number of parents
   NP to 10 times the number of parameters, select ScalingFactor=0.8, and
   CrossProbability=0.9.

   It has been found recently that selecting ScalingFactor from the interval
   [0.5, 1.0] randomly for each generation or for each difference vector,
   a technique called dither, improves convergence behaviour significantly,
   especially for noisy objective functions.
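The dither technique above amounts to redrawing ScalingFactor uniformly from [0.5, 1.0], either once per generation or once per difference vector. A minimal sketch (the helper names are hypothetical; mystic configures scaling differently):

```python
# Sketch of "dither": a fresh ScalingFactor per generation.

import random

def scale_per_generation(rng=random):
    """One ScalingFactor, drawn from [0.5, 1.0], shared by every
    mutation in the current generation."""
    return rng.uniform(0.5, 1.0)

def mutate(base, a, b, F):
    """Classic DE difference-vector mutation: base + F * (a - b)."""
    return [x + F * (ai - bi) for x, ai, bi in zip(base, a, b)]

random.seed(0)
F = scale_per_generation()
trial = mutate([1.0, 1.0], [2.0, 0.0], [0.0, 2.0], F)
print(0.5 <= F <= 1.0)  # True: dithered F stays in [0.5, 1.0]
```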

   It has also been found that setting CrossProbability to a low value,
   e.g. CrossProbability=0.2 helps optimizing separable functions since
   it fosters the search along the coordinate axes. On the contrary,
   this choice is not effective if parameter dependence is encountered,
   something which frequently occurs in real-world optimization
   problems rather than artificial test functions. So for parameter
   dependence the choice of CrossProbability=0.9 is more appropriate.

   Another interesting empirical finding is that raising NP above, say, 40
   does not substantially improve the convergence, independent of the
   number of parameters. It is worthwhile to experiment with these suggestions.
   Make sure that you initialize your parameter vectors by exploiting
   their full numerical range, i.e. if a parameter is allowed to exhibit
   values in the range [-100, 100] it's a good idea to pick the initial
   values from this range instead of unnecessarily restricting diversity.
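The initialization advice above -- draw each initial parameter uniformly across its full allowed range rather than clustering near a point -- can be sketched in a few lines (illustrative only; mystic's solvers have their own initialization methods):

```python
# Sketch: seed the population across the full numerical range of each
# parameter, given (min, max) bounds pairs.

import random

def init_population(bounds, npop, rng=random):
    """npop vectors, each coordinate uniform over its (min, max) pair."""
    return [[rng.uniform(lo, hi) for (lo, hi) in bounds]
            for _ in range(npop)]

random.seed(42)
pop = init_population([(-100, 100), (-100, 100)], npop=20)
print(all(-100 <= x <= 100 for vec in pop for x in vec))  # True
```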

   Keep in mind that different problems often require different settings
   for NP, ScalingFactor and CrossProbability (see Ref 1, 2). If you
   experience misconvergence, you typically can increase the value for NP,
   but often you only have to adjust ScalingFactor to be a little lower or
   higher than 0.8. If you increase NP and simultaneously lower ScalingFactor
   a little, convergence is more likely to occur but generally takes longer,
   i.e. DE is getting more robust (a convergence speed/robustness tradeoff).

   If you still get misconvergence you might want to instead try a different
   crossover strategy. The most commonly used are Rand1Bin, Rand1Exp,
   Best1Bin, and Best1Exp. The crossover strategy is not so important a
   choice, although K. Price claims that binomial (Bin) is never worse than
   exponential (Exp).
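The Bin/Exp distinction above can be made concrete: binomial crossover tests each coordinate independently against CrossProbability, while exponential crossover copies a contiguous run of coordinates that ends at the first failed test. A hedged sketch of the two families (not mystic's strategy code):

```python
# Sketch of binomial vs. exponential DE crossover.

import random

def bin_crossover(target, mutant, CR, rng=random):
    """Binomial: each coordinate independently taken from the mutant
    with probability CR; one coordinate always comes from the mutant."""
    j = rng.randrange(len(target))
    return [m if (i == j or rng.random() < CR) else t
            for i, (t, m) in enumerate(zip(target, mutant))]

def exp_crossover(target, mutant, CR, rng=random):
    """Exponential: copy a contiguous (wrapping) run of coordinates from
    the mutant, stopping at the first random draw that fails CR."""
    n = len(target)
    trial = list(target)
    i = rng.randrange(n)      # random starting coordinate
    L = 0
    while L < n:
        trial[(i + L) % n] = mutant[(i + L) % n]
        L += 1
        if rng.random() >= CR:
            break
    return trial

random.seed(1)
t, m = [0, 0, 0, 0], [1, 1, 1, 1]
print(bin_crossover(t, m, CR=0.9))
print(exp_crossover(t, m, CR=0.9))
```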

   In case of continued misconvergence, check the choice of objective function.
   There might be a better one to describe your problem. Any knowledge that
   you have about the problem should be worked into the objective function.
   A good objective function can make all the difference.
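The advice above can be pulled together into a tiny self-contained DE loop using the classical settings (Rand1Bin, NP = 10 times the number of parameters, ScalingFactor=0.8, CrossProbability=0.9). This is a pedagogical sketch, not mystic's solver -- bounds here are used only to seed the initial population:

```python
# Minimal Rand1Bin differential evolution sketch with classical settings.

import random

def diffev_sketch(cost, bounds, maxiter=200, F=0.8, CR=0.9, rng=None):
    rng = rng or random.Random(0)
    dim = len(bounds)
    NP = 10 * dim                      # classical setting: NP = 10 * dim
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    costs = [cost(x) for x in pop]
    for _ in range(maxiter):
        for i in range(NP):
            # three distinct random vectors, none equal to the target
            r1, r2, r3 = rng.sample([k for k in range(NP) if k != i], 3)
            j = rng.randrange(dim)     # one coordinate always mutated
            trial = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                     if (d == j or rng.random() < CR) else pop[i][d]
                     for d in range(dim)]
            c = cost(trial)
            if c <= costs[i]:          # greedy selection
                pop[i], costs[i] = trial, c
    best = min(range(NP), key=costs.__getitem__)
    return pop[best], costs[best]

def sphere(x):
    return sum(xi * xi for xi in x)

xopt, fopt = diffev_sketch(sphere, [(-5, 5)] * 3)
print(fopt)  # best cost found; near zero for the sphere function
```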

See `mystic.examples.test_rosenbrock` for an example of using DifferentialEvolutionSolver. DifferentialEvolutionSolver2 has the identical interface and usage.

All solvers included in this module provide the standard signal handling. For more information, see `mystic.mystic.abstract_solver`.


[1] Storn, R. and Price, K. Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization 11: 341-359, 1997.

[2] Price, K., Storn, R., and Lampinen, J. Differential Evolution: A Practical Approach to Global Optimization. Springer, 1st Edition, 2005.

Classes:

   DifferentialEvolutionSolver  -- Differential Evolution optimization.
   DifferentialEvolutionSolver2 -- Differential Evolution optimization,
       using Storn and Price's algorithm.
Function Details

diffev(func, x0, npop, args=(), bounds=None, ftol=5e-3, gtol=None, maxiter=None, maxfun=None, cross=1.0, scale=0.9, full_output=0, disp=1, retall=0, callback=None, invariant_current=True)

Minimize a function using differential evolution.


    Uses a differential evolution algorithm to find the minimum of
    a function of one or more variables. Mimics the scipy.optimize
    interface.


Inputs:

    func -- the Python function or method to be minimized.
    x0 -- the initial guess (ndarray), if desired to start from a
        set point; otherwise takes an array of (min,max) bounds,
        for when random initial points are desired
    npop -- size of the trial solution population.

Additional Inputs:

    args -- extra arguments for func.
    bounds -- list - n pairs of bounds (min,max), one pair for each parameter.
    ftol -- number - acceptable relative error in func(xopt) for convergence.
    gtol -- number - maximum number of iterations to run without improvement.
    maxiter -- number - the maximum number of iterations to perform.
    maxfun -- number - the maximum number of function evaluations.
    cross -- number - the probability of cross-parameter mutations.
    scale -- number - multiplier for impact of mutations on trial solution.
    full_output -- number - non-zero if fval and warnflag outputs are desired.
    disp -- number - non-zero to print convergence messages.
    retall -- number - non-zero to return list of solutions at each iteration.
    callback -- an optional user-supplied function to call after each
        iteration.  It is called as callback(xk), where xk is the
        current parameter vector.
    invariant_current -- set to True to call DifferentialEvolutionSolver2,
        otherwise call DifferentialEvolutionSolver

Returns: (xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

    xopt -- ndarray - minimizer of function
    fopt -- number - value of function at minimum: fopt = func(xopt)
    iter -- number - number of iterations
    funcalls -- number - number of function calls
    warnflag -- number - Integer warning flag:
        1 : 'Maximum number of function evaluations.'
        2 : 'Maximum number of iterations.'
    allvecs -- list - a list of solutions at each iteration