`cma.evolution_strategy.CMAEvolutionStrategy(interfaces.OOOptimizer)`

class documentation in `cma.evolution_strategy`
CMA-ES stochastic optimizer class with ask-and-tell interface.

es = CMAEvolutionStrategy(x0, sigma0)
es = CMAEvolutionStrategy(x0, sigma0, opts)
es = CMAEvolutionStrategy(x0, sigma0).optimize(objective_fct)
res = CMAEvolutionStrategy(x0, sigma0, opts).optimize(objective_fct).result

`x0`

- initial solution, starting point. `x0` is given as "phenotype", which means, if

      opts = {'transformation': [transform, inverse]}

  is given and `inverse is None`, the initial mean is not consistent with `x0` in that `transform(mean)` does not equal `x0` unless `transform(mean)` equals `mean`.

`sigma0`

- initial standard deviation. The problem variables should have been scaled such that a single standard deviation on all variables is useful and the optimum is expected to lie within about `x0` +- `3*sigma0`. See also option `scaling_of_variables`. Often one wants to check for solutions close to the initial point. This allows, for example, for an easier check of consistency of the objective function and its interfacing with the optimizer. In this case, a much smaller `sigma0` is advisable.

`opts`

- options, a dictionary with optional settings, see class `CMAOptions`.

The interface is inherited from the generic `OOOptimizer` class (see also there). An object instance is generated from:

es = cma.CMAEvolutionStrategy(8 * [0.5], 0.2)

The least verbose interface is via the optimize method:

es.optimize(objective_func)
res = es.result

More verbosely, the optimization is done using the methods `stop`, `ask`, and `tell`:

while not es.stop():
    solutions = es.ask()
    es.tell(solutions, [cma.ff.rosen(s) for s in solutions])
    es.disp()
es.result_pretty()

where `ask` delivers new candidate solutions and `tell` updates the `optim` instance by passing the respective function values (the objective function `cma.ff.rosen` can be replaced by any properly defined objective function, see `cma.ff` for more examples).

To change an option, for example a termination condition to continue the optimization, call:

es.opts.set({'tolfacupx': 1e4})

The class `CMAEvolutionStrategy` also provides:

(solutions, func_values) = es.ask_and_eval(objective_func)

and an entire optimization can also be written like:

while not es.stop():
    es.tell(*es.ask_and_eval(objective_func))

Besides termination criteria, in CMA-ES only the ranks of the `func_values` are relevant.
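Since only ranks matter, any strictly increasing transformation of the objective leaves the search unchanged. A minimal self-contained sketch of this rank invariance (plain NumPy, no optimizer involved):

```python
import numpy as np

# ranks of raw f-values
f_values = np.array([3.2, 0.1, 7.5, 1.4])
ranks_f = np.argsort(np.argsort(f_values))

# ranks after a strictly increasing transformation
g_values = np.log(1.0 + f_values)
ranks_g = np.argsort(np.argsort(g_values))

# identical ranking, hence identical CMA-ES updates
assert (ranks_f == ranks_g).all()
```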

`inputargs`
: passed input arguments

`inopts`
: passed options

`opts`
: actually used options; some of them can be changed any time via `opts.set`, see class `CMAOptions`

`popsize`
: population size lambda, number of candidate solutions returned by `ask()`

`logger`
: a `CMADataLogger` instance utilized by `optimize`
Super-short example, with output shown:

>>> import cma
>>> # construct an object instance in 4-D, sigma0=1:
>>> es = cma.CMAEvolutionStrategy(4 * [1], 1, {'seed':234})
... # doctest: +ELLIPSIS
(4_w,8)-aCMA-ES (mu_w=2.6,w_1=52%) in dimension 4 (seed=234...)

and optimize the ellipsoid function

>>> es.optimize(cma.ff.elli, verb_disp=1)  # doctest: +ELLIPSIS
Iterat #Fevals function value axis ratio sigma min&max std t[m:s]
    1      8 2.09...
>>> assert len(es.result) == 8
>>> assert es.result[1] < 1e-9

The optimization loop can also be written explicitly:

>>> es = cma.CMAEvolutionStrategy(4 * [1], 1)  # doctest: +ELLIPSIS
(4_w,8)-aCMA-ES (mu_w=2.6,w_1=52%) in dimension 4 (seed=...
>>> while not es.stop():
...     X = es.ask()
...     es.tell(X, [cma.ff.elli(x) for x in X])
...     es.disp()  # doctest: +ELLIPSIS
Iterat #Fevals function value axis ratio sigma min&max std t[m:s]
    1      8 ...

achieving the same result as above.

An example with lower bounds (at zero) and handling infeasible solutions:

>>> import numpy as np
>>> es = cma.CMAEvolutionStrategy(10 * [0.2], 0.5,
...         {'bounds': [0, np.inf]})  #doctest: +ELLIPSIS
(5_w,...
>>> while not es.stop():
...     fit, X = [], []
...     while len(X) < es.popsize:
...         curr_fit = None
...         while curr_fit in (None, np.NaN):
...             x = es.ask(1)[0]
...             curr_fit = cma.ff.somenan(x, cma.ff.elli)  # might return np.NaN
...         X.append(x)
...         fit.append(curr_fit)
...     es.tell(X, fit)
...     es.logger.add()
...     es.disp()  #doctest: +ELLIPSIS
Itera...
>>>
>>> assert es.result[1] < 1e-9
>>> assert es.result[2] < 9000  # by internal termination
>>> # es.logger.plot()  # will plot data
>>> # cma.s.figshow()  # display plot window

An example with user-defined transformation, in this case to realize a lower bound of 2.

>>> import warnings
>>> with warnings.catch_warnings(record=True) as warns:
...     es = cma.CMAEvolutionStrategy(5 * [3], 0.1,
...             {"transformation": [lambda x: x**2+1.2, None],
...              "verbose": -2,})
>>> warns[0].message  # doctest:+ELLIPSIS
UserWarning('in class GenoPheno: user defined transformations have not been tested thoroughly ()'...
>>> warns[1].message  # doctest:+ELLIPSIS
UserWarning('computed initial point...
>>> es.optimize(cma.ff.rosen, verb_disp=0)  #doctest: +ELLIPSIS
<cma...
>>> assert cma.ff.rosen(es.result[0]) < 1e-7 + 5.54781521192
>>> assert es.result[2] < 3300

The inverse transformation is (only) necessary if the `BoundPenalty` boundary handler is used at the same time.

The `CMAEvolutionStrategy` class also provides a default logger (cave: files are overwritten when the logger is used with the same filename prefix):

>>> es = cma.CMAEvolutionStrategy(4 * [0.2], 0.5, {'verb_disp': 0})
>>> es.logger.disp_header()  # annotate the print of disp
Iterat Nfevals function value axis ratio maxstd minstd
>>> while not es.stop():
...     X = es.ask()
...     es.tell(X, [cma.ff.sphere(x) for x in X])
...     es.logger.add()  # log current iteration
...     es.logger.disp([-1])  # display info for last iteration  #doctest: +ELLIPSIS
1 ...
>>> es.logger.disp_header()
Iterat Nfevals function value axis ratio maxstd minstd
>>> # es.logger.plot()  # will make a plot

Example implementing restarts with increasing popsize (IPOP):

>>> bestever = cma.optimization_tools.BestSolution()
>>> for lam in 10 * 2**np.arange(8):  # 10, 20, 40, 80, ..., 10 * 2**7
...     es = cma.CMAEvolutionStrategy('6 - 8 * np.random.rand(9)',  # 9-D
...                                   5,  # initial std sigma0
...                                   {'popsize': lam,  # options
...                                    'verb_append': bestever.evalsall})
...     logger = cma.CMADataLogger().register(es, append=bestever.evalsall)
...     while not es.stop():
...         X = es.ask()  # get list of new solutions
...         fit = [cma.ff.rastrigin(x) for x in X]  # evaluate each solution
...         es.tell(X, fit)  # besides for termination only the ranking in fit is used
...
...         # display some output
...         logger.add()  # add a "data point" to the log, writing in files
...         es.disp()  # uses option verb_disp with default 100
...
...     print('termination:', es.stop())
...     cma.s.pprint(es.best.__dict__)
...
...     bestever.update(es.best)
...
...     # show a plot
...     # logger.plot();
...     if bestever.f < 1e-8:  # global optimum was hit
...         break  #doctest: +ELLIPSIS
(5_w,...
>>> assert es.result[1] < 1e-8

On the Rastrigin function, the global optimum is usually located after about five restarts.

Using the `multiprocessing` module, we can evaluate the function in parallel with a simple modification of the example (however, multiprocessing does not always seem reliable):

>>> from cma.fitness_functions import elli  # cannot be an instance method
>>> from cma.fitness_transformations import EvalParallel
>>> es = cma.CMAEvolutionStrategy(22 * [0.0], 1.0, {'maxiter':10})
... # doctest:+ELLIPSIS
(6_w,13)-aCMA-ES (mu_w=...
>>> with EvalParallel(es.popsize + 1) as eval_all:
...     while not es.stop():
...         X = es.ask()
...         es.tell(X, eval_all(elli, X))
...         es.disp()
...         # es.logger.add()  # doctest:+ELLIPSIS
Iterat...

The final example shows how to resume:

>>> import pickle
>>>
>>> es = cma.CMAEvolutionStrategy(12 * [0.1],  # a new instance, 12-D
...                               0.12)  # initial std sigma0
... #doctest: +ELLIPSIS
(5_w,...
>>> es.optimize(cma.ff.rosen, iterations=100)  #doctest: +ELLIPSIS
I...
>>> pickle.dump(es, open('_saved-cma-object.pkl', 'wb'))
>>> del es  # let's start fresh
>>>
>>> es = pickle.load(open('_saved-cma-object.pkl', 'rb'))
>>> # resuming
>>> es.optimize(cma.ff.rosen, verb_disp=200)  #doctest: +ELLIPSIS
  200 ...
>>> assert es.result[2] < 15000
>>> assert cma.s.Mh.vequals_approximately(es.result[0], 12 * [1], 1e-5)
>>> assert len(es.result) == 8

The following two enhancements are implemented; the latter is turned on by default only for very small population sizes.

*Active CMA* is implemented with option `CMA_active` and conducts an update of the covariance matrix with negative weights. The negative update is implemented such that positive definiteness is guaranteed. A typical speed-up factor (in number of f-evaluations) is between 1.1 and 2.

References: Jastrebski and Arnold, Improving evolution strategies through active covariance matrix adaptation, CEC 2006. Hansen, The CMA evolution strategy: a tutorial, arXiv 2016.

*Selective mirroring* is implemented with option `CMA_mirrors` in the methods `get_mirror` and `get_selective_mirrors`. The method `ask_and_eval` (used by `fmin`) will then sample selectively mirrored vectors within the iteration (`CMA_mirrormethod==1`). Otherwise, or if `CMA_mirrormethod==2`, selective mirrors are injected for the next iteration. In selective mirroring, only the worst solutions are mirrored. With the default small number of mirrors, *pairwise selection* (where at most one of the two mirrors contributes to the update of the distribution mean) is implicitly guaranteed under selective mirroring and therefore not explicitly implemented.

References: Brockhoff et al, PPSN 2010, Auger et al, GECCO 2011.

See Also | `fmin()`, `OOOptimizer`, `CMAOptions`, `plot()`, `ask()`, `tell()`, `ask_and_eval()` |

Method | popsize | number of samples by default returned by `ask` () |

Method | stop | return the termination status as dictionary. |

Method | __init__ | see class `CMAEvolutionStrategy` |

Method | ask | get/sample new candidate solutions. |

Method | ask_geno | get new candidate solutions in genotype. |

Method | random_rescale_to_mahalanobis | change `x` like for injection, all on genotypic level |

Method | get_mirror | return pheno(self.mean - (geno(x) - self.mean)). |

Method | ask_and_eval | sample `number` solutions and evaluate them on `func` . |

Method | get_selective_mirrors | get mirror genotypic directions from worst solutions. |

Method | tell | No summary |

Method | inject | inject list of one or several genotypic solution(s). |

Method | result | return a `CMAEvolutionStrategyResult` `namedtuple` . |

Method | result_pretty | pretty print result. |

Method | repair_genotype | make sure that solutions fit to the sample distribution. |

Method | manage_plateaus | increase `sigma` by `sigma_fac` in case of a plateau. |

Method | condition_number | condition number of the statistical-model sampler. |

Method | alleviate_conditioning_in_coordinates | pass scaling from `C` to `sigma_vec` . |

Method | alleviate_conditioning | pass conditioning of `C` to linear transformation in `self.gp` . |

Method | feed_for_resume | Resume a run using the solution history. |

Method | mahalanobis_norm | return Mahalanobis norm based on the current sample distribution. |

Method | isotropic_mean_shift | normalized last mean shift, under random selection N(0,I) |

Method | disp_annotation | print annotation line for `disp` () |

Method | disp | print current state variables in a single-line. |

Method | plot | plot current state variables using `matplotlib` . |

Method | _set_x0 | Assign `self.x0` from argument `x0` . |

Method | _random_rescaling_factor_to_mahalanobis_size | self.mean + self._random_rescaling_factor_to_mahalanobis_size(y) * y is guaranteed to appear as if drawn from the sample distribution. |

Method | _prepare_injection_directions | provide genotypic directions for TPA and selective mirroring, with no specific length normalization, to be used in the coming iteration. |

Method | _updateBDfromSM | helper function for a smooth transition to sampling classes. |

def stop(self, check=True, ignore_list=()):

return the termination status as dictionary.

With `check==False`, the termination conditions are not checked
and the status might not reflect the current situation.
`stop().clear()` removes the currently active termination
conditions.

As a convenience feature, keywords in `ignore_list` are removed from the conditions.

def _set_x0(self, x0):

Assign `self.x0` from argument `x0`.

Input `x0` may be a `callable`, a string (deprecated), or a `list` or `numpy.ndarray` of the desired length.

Below, an artificial example is given where calling `x0` delivers `dimension * [5]` in the first two calls and `dimension * [0.01]` in succeeding calls. Only the initial value of 0.01 solves the Rastrigin function:

>>> import cma
>>> class X0:
...     def __init__(self, dimension):
...         self.irun = 0
...         self.dimension = dimension
...     def __call__(self):
...         self.irun += 1
...         return (self.dimension * [5] if self.irun < 3
...                 else self.dimension * [0.01])
>>> xopt, es = cma.fmin2(cma.ff.rastrigin, X0(3), 0.01,
...                      {'verbose':-9}, restarts=1)
>>> assert es.result.fbest > 1e-5
>>> xopt, es = cma.fmin2(cma.ff.rastrigin, X0(3), 0.01,
...                      {'verbose':-9}, restarts=2)
>>> assert es.result.fbest < 1e-5  # third run succeeds due to x0

def ask(self, number=None, xmean=None, sigma_fac=1, gradf=None, args=()):

get/sample new candidate solutions.

Solutions are sampled from a multivariate normal distribution and transformed to f-representation (phenotype) to be evaluated.

`number`
- number of returned solutions, by default the population size `popsize` (AKA lambda)

`xmean`
- distribution mean, phenotype

`sigma_fac`
- multiplier for internal sample width (standard deviation)

`gradf`
- gradient, `len(gradf(x)) == len(x)`; if `gradf is not None`, the third solution in the returned list is "sampled" in supposedly Newton direction `np.dot(C, gradf(xmean, *args))`

`args`
- additional arguments passed to `gradf`

Returns a list of N-dimensional candidate solutions to be evaluated.

>>> import cma
>>> es = cma.CMAEvolutionStrategy([0,0,0,0], 0.3)  #doctest: +ELLIPSIS
(4_w,...
>>> while not es.stop() and es.best.f > 1e-6:
...     X = es.ask()  # get list of new solutions
...     fit = [cma.ff.rosen(x) for x in X]  # call fct with each solution
...     es.tell(X, fit)  # feed values

See Also | `ask_and_eval` , `ask_geno` , `tell` |

def ask_geno(self, number=None, xmean=None, sigma_fac=1):

get new candidate solutions in genotype.

Solutions are sampled from a multi-variate normal distribution.

Arguments:

`number`
- number of returned solutions, by default the population size `popsize` (AKA lambda)

`xmean`
- distribution mean

`sigma_fac`
- multiplier for internal sample width (standard deviation)

`ask_geno` returns a list of N-dimensional candidate solutions in genotype representation and is called by `ask`.

Details: updates the sample distribution if needed and might change the geno-pheno transformation during this update.

See Also | `ask` , `ask_and_eval` |

def _random_rescaling_factor_to_mahalanobis_size(self, y):

def get_mirror(self, x, preserve_length=False):

return `pheno(self.mean - (geno(x) - self.mean))`.

>>> import numpy as np, cma
>>> es = cma.CMAEvolutionStrategy(np.random.randn(3), 1)  #doctest: +ELLIPSIS
(3_w,...
>>> x = np.random.randn(3)
>>> assert cma.utilities.math.Mh.vequals_approximately(es.mean - (x - es.mean),
...     es.get_mirror(x, preserve_length=True))
>>> x = es.ask(1)[0]
>>> vals = (es.get_mirror(x) - es.mean) / (x - es.mean)
>>> assert cma.utilities.math.Mh.equals_approximately(sum(vals), len(vals) * vals[0])

TODO: this implementation is yet experimental.

TODO: this implementation includes geno-pheno transformation, however in general GP-transformation should be separated from specific code.

Selectively mirrored sampling improves to a moderate extent, but over-additively in combination with active CMA, for quite understandable reasons.

The optimal number of mirrors is surprisingly small: 1, 2, 3 for maxlam=7, 13, 20, where 3, 6, 10 are the respective maximal possible numbers of mirrors, which must clearly be suboptimal.

def ask_and_eval(self, func, args=(), gradf=None, number=None, xmean=None, sigma_fac=1, evaluations=1, aggregation=np.median, kappa=1, parallel_mode=False):

sample `number` solutions and evaluate them on `func`.

Each solution `s` is resampled until
`self.is_feasible(s, func(s)) is True`.

`func`
- objective function; `func(x)` accepts a `numpy.ndarray` and returns a scalar `if not parallel_mode`, else it returns a `list` of scalars from a `list` of `numpy.ndarray`

`args`
- additional parameters for `func`

`gradf`
- gradient of objective function, `g = gradf(x, *args)` must satisfy `len(g) == len(x)`

`number`
- number of solutions to be sampled, by default population size `popsize` (AKA lambda)

`xmean`
- mean for sampling the solutions, by default `self.mean`

`sigma_fac`
- multiplier for sampling width (standard deviation), for example to get a small perturbation of solution `xmean`

`evaluations`
- number of evaluations for each sampled solution

`aggregation`
- function that aggregates `evaluations` values to a single value

`kappa`
- multiplier used for the evaluation of the solutions, in that `func(m + kappa*(x - m))` is the f-value for `x`

Returns `(X, fit)`, where

`X`
: list of solutions

`fit`
: list of respective function values

While `not self.is_feasible(x, func(x))`, new solutions are sampled. By default, `self.is_feasible == cma.feasible == lambda x, f: f not in (None, np.NaN)`. The argument to `func` can be freely modified within `func`.

Depending on the `CMA_mirrors` option, some solutions are not
sampled independently but as mirrors of other bad solutions. This
is a simple derandomization that can save 10-30% of the
evaluations in particular with small populations, for example on
the cigar function.

>>> import cma
>>> x0, sigma0 = 8 * [10], 1  # 8-D
>>> es = cma.CMAEvolutionStrategy(x0, sigma0)  #doctest: +ELLIPSIS
(5_w,...
>>> while not es.stop():
...     X, fit = es.ask_and_eval(cma.ff.elli)  # handles NaN with resampling
...     es.tell(X, fit)  # pass on fitness values
...     es.disp(20)  # print every 20-th iteration  #doctest: +ELLIPSIS
Iterat #Fevals...
>>> print('terminated on ' + str(es.stop()))  #doctest: +ELLIPSIS
terminated on ...

A single iteration step can be expressed in one line, such that an entire optimization after initialization becomes:

while not es.stop():
    es.tell(*es.ask_and_eval(cma.ff.elli))

def get_selective_mirrors(self, number=None):

get mirror genotypic directions from worst solutions.

Details:

To be called after the mean has been updated.

Takes the last `number=sp.lam_mirr` entries in the
`self.pop[self.fit.idx]` as solutions to be mirrored.

Do not take a mirror if it is suspected to stem from a previous mirror, in order not to go endlessly back and forth.

def tell(self, solutions, function_values, check_points=None, copy=False):

pass objective function values to prepare for the next iteration. This core procedure of the CMA-ES algorithm updates all state variables, in particular the two evolution paths, the distribution mean, the covariance matrix, and the step-size.

`solutions`
- list or array of candidate solution points (of type `numpy.ndarray`), most presumably delivered beforehand by method `ask()` or `ask_and_eval()`

`function_values`
- list or array of objective function values corresponding to the respective points. Besides termination decisions, only the ranking of values in `function_values` is used.

`check_points`
- if `check_points is None`, only solutions that are not generated by `ask()` are possibly clipped (recommended). `False` does not clip any solution (not recommended). If `True`, clips solutions that realize long steps (i.e. also those that are unlikely to be generated with `ask()`). `check_points` can be a list of indices to be checked in solutions.

`copy`
- `solutions` can be modified in this routine if `copy is False`

`tell()` updates the parameters of the multivariate normal search distribution, namely covariance matrix and step-size, and updates also the attributes `countiter` and `countevals`. Checking the points for consistency is quadratic in the dimension (like sampling points).

The effect of changing the solutions delivered by `ask()` depends on whether boundary handling is applied. With boundary handling, modifications are disregarded. This is necessary to apply the default boundary handling that uses unrepaired solutions, but it might change in future.

>>> import cma
>>> import numpy as np
>>> func = cma.ff.sphere  # choose objective function
>>> es = cma.CMAEvolutionStrategy(np.random.rand(2) / 3, 1.5)
... # doctest:+ELLIPSIS
(3_...
>>> while not es.stop():
...     X = es.ask()
...     es.tell(X, [func(x) for x in X])
>>> es.result  # result is a `namedtuple`  # doctest:+ELLIPSIS
CMAEvolutionStrategyResult(xbest=array([...

See Also | class `CMAEvolutionStrategy` , `ask` , `ask_and_eval` , `fmin` |

def inject(self, solutions, force=None):

inject list of one or several genotypic solution(s).

Unless `force is True`, the solutions are used as directions relative to the distribution mean to compute a new candidate solution, returned in method `ask_geno` which in turn is used in method `ask`. `inject` is to be called before `ask` or after `tell` and can be called repeatedly.

>>> import cma
>>> es = cma.CMAEvolutionStrategy(4 * [1], 2)  #doctest: +ELLIPSIS
(4_w,...
>>> while not es.stop():
...     es.inject([4 * [0.0]])
...     X = es.ask()
...     if es.countiter == 0:
...         assert X[0][0] == X[0][1]  # injected sol. is on the diagonal
...     es.tell(X, [cma.ff.sphere(x) for x in X])

@property
def result(self):

return a `CMAEvolutionStrategyResult` `namedtuple`.

See Also | `cma.evolution_strategy.CMAEvolutionStrategyResult`, or try `help(...result)` on the `result` property of a `CMAEvolutionStrategy` instance or on the `CMAEvolutionStrategyResult` instance itself. |

def repair_genotype(self, x, copy_if_changed=False):

make sure that solutions fit to the sample distribution.

This interface is versatile and likely to change.

In particular the frequency of `x - self.mean` being long in
Mahalanobis distance is limited, currently clipping at
`N**0.5 + 2 * N / (N + 2)` is implemented.
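For a sense of scale, the clipping limit from the formula above evaluates as follows in a few dimensions (plain Python, formula taken verbatim from the text):

```python
# Mahalanobis clipping limit N**0.5 + 2 * N / (N + 2) quoted above;
# it grows roughly like sqrt(N)
for N in (2, 10, 100):
    threshold = N**0.5 + 2 * N / (N + 2)
    print(N, round(threshold, 3))
```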

def manage_plateaus(self, sigma_fac=1.5, sample_fraction=0.5):

increase `sigma` by `sigma_fac` in case of a plateau.

A plateau is assumed to be present if the best sample and
`popsize * sample_fraction`-th best sample have the same
fitness.

Example:

>>> import cma
>>> def f(X):
...     return (len(X) - 1) * [1] + [2]
>>> es = cma.CMAEvolutionStrategy(4 * [0], 1)  #doctest: +ELLIPSIS
(4_w,...
>>> while not es.stop():
...     X = es.ask()
...     es.tell(X, f(X))
...     es.logger.add()
...     es.manage_plateaus()
>>> assert es.sigma > 1.5**5

@property
def condition_number(self):

condition number of the statistical-model sampler.

Details: neither encoding/decoding from `sigma_vec`-scaling nor the `gp`-transformation are taken into account for this computation.

def alleviate_conditioning_in_coordinates(self, condition=100000000.0):

pass scaling from `C` to `sigma_vec`.

As a result, `C` is a correlation matrix, i.e., all diagonal entries of `C` are `1`.

def alleviate_conditioning(self, condition=1000000000000.0):

pass conditioning of `C` to the linear transformation in `self.gp`.

Argument `condition` defines the limit condition number above which the action is taken.

Details: the action applies only if `self.gp.isidentity`. Then the covariance matrix `C` is set (back) to identity and a respective linear transformation is "added" to `self.gp`.

def _updateBDfromSM(self, sm_=None):

helper function for a smooth transition to sampling classes.

By now, all tests run through without this method in effect. Gradient injection and `noeffectaxis`, however, rely on the undocumented attributes `B` and `D` in the sampler.

def feed_for_resume(self, X, function_values):

Resume a run using the solution history.

CAVEAT: this hasn't been thoroughly tested or in intensive use.

Given all "previous" candidate solutions and their respective function values, the state of a `CMAEvolutionStrategy` object can be reconstructed from this history. This is the purpose of function `feed_for_resume`.

`X`
- (all) solution points in chronological order, phenotypic representation; the number of points must be a multiple of popsize

`function_values`
- respective objective function values

`feed_for_resume` can be called repeatedly with only parts of the history. Each part must have a length that is a multiple of the population size. `feed_for_resume` feeds the history in popsize-chunks into `tell`. The state of the random number generator might not be reconstructed, but this would only be relevant for the future.

import cma

# prepare
(x0, sigma0) = ...  # initial values from previous trial
X = ...  # list of generated solutions from a previous trial
f = ...  # respective list of f-values

# resume
es = cma.CMAEvolutionStrategy(x0, sigma0)
es.feed_for_resume(X, f)

# continue with func as objective function
while not es.stop():
    X = es.ask()
    es.tell(X, [func(x) for x in X])

Credits to Dirk Bueche and Fabrice Marchal for the feeding idea.

See Also | class `CMAEvolutionStrategy` for a simple dump/load to resume. |

def mahalanobis_norm(self, dx):

return Mahalanobis norm based on the current sample distribution.

The norm is based on Covariance matrix `C` times `sigma**2`,
and includes `sigma_vec`. The expected Mahalanobis distance to
the sample mean is about `sqrt(dimension)`.

Argument `dx` is a *genotype* difference.

>>> import cma, numpy
>>> es = cma.CMAEvolutionStrategy(numpy.ones(10), 1)  #doctest: +ELLIPSIS
(5_w,...
>>> xx = numpy.random.randn(2, 10)
>>> d = es.mahalanobis_norm(es.gp.geno(xx[0]-xx[1]))

`d` is the distance "in" the true sample distribution; sampled points have a typical distance of `sqrt(2*es.N)`, where `es.N` is the dimension, and an expected distance of close to `sqrt(N)` to the sample mean. In the example, `d` is the Euclidean distance, because C = I and sigma = 1.

@property
def isotropic_mean_shift(self):

normalized last mean shift, under random selection N(0,I) distributed.

Caveat: while it is finite and close to sqrt(n) under random
selection, the length of the normalized mean shift under
*systematic* selection (e.g. on a linear function) tends to
infinity for mueff -> infty. Hence it must be used with great
care for large mueff.

def disp(self, modulo=None):

print current state variables on a single line.

Prints only if `iteration_counter % modulo == 0`.

See Also | `disp_annotation` . |