Results 1–5 of 5
Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles
"... We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space ..."
Abstract

Cited by 5 (2 self)
We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space …
Objective Improvement in Information-Geometric Optimization
"... InformationGeometric Optimization (IGO) is a unified framework of stochastic algorithms for optimization problems. Given a family of probability distributions, IGO turns the original optimization problem into a new maximization problem on the parameter space of the probability distributions. IGO up ..."
Abstract

Cited by 4 (0 self)
Information-Geometric Optimization (IGO) is a unified framework of stochastic algorithms for optimization problems. Given a family of probability distributions, IGO turns the original optimization problem into a new maximization problem on the parameter space of the probability distributions. IGO updates the parameter of the probability distribution along the natural gradient, taken with respect to the Fisher metric on the parameter manifold, aiming at maximizing an adaptive transform of the objective function. IGO recovers several known algorithms as particular instances: for the family of Bernoulli distributions IGO recovers PBIL, and for the family of Gaussian distributions the pure rank-µ CMA-ES update is recovered …
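The natural-gradient update described in this abstract can be sketched concretely for the Bernoulli case, where it reduces to a PBIL-style rule. The following is a minimal illustration, not the paper's algorithm: the OneMax objective, the rank-based weights, and all parameter values are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                        # toy objective to maximize: OneMax (assumed)
    return x.sum()

d, lam, eta = 20, 50, 0.1        # dimension, sample size, step size (illustrative)
theta = np.full(d, 0.5)          # Bernoulli parameters theta_i = P(x_i = 1)

for _ in range(200):
    # sample lam candidates from the current Bernoulli distribution
    X = (rng.random((lam, d)) < theta).astype(float)
    fvals = np.array([f(x) for x in X])
    ranks = np.argsort(np.argsort(-fvals))        # 0 = best candidate
    # rank-based utility weights (an "adaptive transform" of f; assumed scheme)
    w = np.maximum(0.0, np.log(lam / 2 + 1) - np.log(ranks + 1))
    w /= w.sum()
    # for Bernoulli distributions the natural-gradient step reduces to
    # moving theta toward the weighted mean of the samples (PBIL-style)
    theta = (1 - eta) * theta + eta * (w @ X)
    theta = np.clip(theta, 1e-3, 1 - 1e-3)        # keep parameters interior

print(theta.round(2))
```

Because only the ranks of the sampled fitness values enter the weights, the update inherits the invariance to monotone transformations of f that the IGO framework emphasizes.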
ON PROVING LINEAR CONVERGENCE OF COMPARISON-BASED STEP-SIZE ADAPTIVE RANDOMIZED SEARCH ON SCALING-INVARIANT FUNCTIONS VIA STABILITY OF MARKOV CHAINS
"... Abstract. In the context of numerical optimization, this paper develops a methodology to analyze the linear convergence of comparisonbased stepsize adaptive randomized search (CBSARS), a class of probabilistic derivativefree optimization algorithms where the function is solely used through comp ..."
Abstract

Cited by 2 (1 self)
Abstract. In the context of numerical optimization, this paper develops a methodology to analyze the linear convergence of comparison-based step-size adaptive randomized search (CB-SARS), a class of probabilistic derivative-free optimization algorithms where the function is used solely through comparisons of candidate solutions. Various algorithms are included in the class of CB-SARS algorithms. On the one hand, it contains methods introduced as early as the 1960s: the step-size adaptive random search by Schumer and Steiglitz, the compound random search by Devroye, and simplified versions of Matyas' random optimization algorithm and of Kjellstrom and Taxen's Gaussian adaptation. On the other hand, it includes simplified versions of several recent algorithms: the covariance matrix adaptation evolution strategy (CMA-ES), the exponential natural evolution strategy (xNES), and the cross-entropy method. CB-SARS algorithms typically exhibit several invariances. First of all, invariance to composing the objective function with a strictly monotonic transformation, a direct consequence of the fact that the algorithms only use comparisons. Second, scale invariance, which reflects the fact that the algorithm has no intrinsic absolute notion of scale. The algorithms are investigated on scaling-invariant functions, defined as functions that preserve …
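A minimal member of the CB-SARS class is a (1+1) evolution strategy with a 1/5-success-rule step-size adaptation, in the spirit of the early schemes the abstract mentions. The sketch below is illustrative only; the test function, dimension, and adaptation constants are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                   # scaling-invariant test function (minimize; assumed)
    return float(x @ x)

x = rng.standard_normal(5)       # current solution
sigma = 1.0                      # step size, adapted from comparisons only
for _ in range(2000):
    y = x + sigma * rng.standard_normal(5)       # candidate solution
    if sphere(y) <= sphere(x):                   # f used only via comparison
        x = y
        sigma *= np.exp(0.25)                    # success: enlarge step
    else:
        sigma *= np.exp(-0.25 / 4)               # failure: shrink step
    # the asymmetric factors target a ~1/5 success rate (Rechenberg-style rule)

print(sphere(x), sigma)
```

Since acceptance depends only on the comparison `sphere(y) <= sphere(x)` and the step size rescales with the search, the iterate exhibits the two invariances the abstract highlights: to strictly monotone transformations of f and to rescalings of the search space.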
ABSTRACT, 2013
"... of stochastic algorithms for optimization problems. Given a family of probability distributions, IGO turns the original optimizationproblemintoanewmaximizationproblem on the parameter space of the probability distributions. IGO updates the parameter of the probability distribution along the natural ..."
Abstract
… of stochastic algorithms for optimization problems. Given a family of probability distributions, IGO turns the original optimization problem into a new maximization problem on the parameter space of the probability distributions. IGO updates the parameter of the probability distribution along the natural gradient, taken with respect to the Fisher metric on the parameter manifold, aiming at maximizing an adaptive transform of the objective function. IGO recovers several known algorithms as particular instances: for the family of Bernoulli distributions IGO recovers PBIL, for the family of Gaussian distributions the pure rank-µ CMA-ES update is recovered, and for exponential families in expectation parametrization the cross-entropy/ML method is recovered. This article provides a theoretical justification for the IGO framework, by proving that any step size not greater than 1 guarantees monotone improvement over the course of optimization, in terms of q-quantile values of the objective function f. The range of admissible step sizes is independent of f and its domain. We extend the result to cover the case of different step sizes for blocks of the parameters in the IGO algorithm. Moreover, we prove that expected fitness improves over time when fitness-proportional selection is applied, in which case the RPP algorithm is recovered.
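The fitness-proportional case at the end of this abstract (where the RPP algorithm is recovered) can be checked exactly on a toy problem: for a Bernoulli family, the exact IGO step with weights W(x) = f(x) is a closed-form reweighting of the search points, and the expected fitness is then non-decreasing for any step size δ ≤ 1. The sketch below enumerates all points of a small {0,1}^d cube; the OneMax objective and all parameter values are assumptions for illustration.

```python
import itertools
import numpy as np

d = 4
# enumerate the whole search space {0,1}^d so the update is exact, not sampled
X = np.array(list(itertools.product([0, 1], repeat=d)), dtype=float)
f = X.sum(axis=1)                     # toy objective to maximize: OneMax (assumed)

def probs(theta):
    # exact Bernoulli product probabilities of every point in the cube
    return np.prod(theta * X + (1 - theta) * (1 - X), axis=1)

theta = np.full(d, 0.3)
delta = 1.0                           # step size <= 1, as the theorem requires
expectations = []
for _ in range(20):
    p = probs(theta)
    expectations.append(p @ f)        # exact E_theta[f]
    # exact IGO step with fitness-proportional weights W(x) = f(x):
    # theta moves toward the f-weighted mean of the search points
    theta = (1 - delta) * theta + delta * (p * f) @ X / (p @ f)

print(expectations[0], expectations[-1])
```

Because the computation is exact rather than Monte Carlo, the sequence of expected fitness values is deterministic and can be observed to increase monotonically, in line with the stated result.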