Results

### Large deviation principle for invariant distributions of memory gradient diffusions


Abstract

In this paper, we consider a class of diffusions based on a memory gradient descent, i.e. whose drift term is built as the average of the gradient of a coercive function U along the past of the trajectory. Under some classical assumptions on U, this type of diffusion is ergodic and admits a unique invariant distribution. With a view to optimization applications, we want to understand the behaviour of the invariant distribution as the diffusion coefficient goes to 0. In the non-memory case, the invariant distribution is explicit, and the so-called Laplace method shows that a Large Deviation Principle (LDP) holds with an explicit rate function; in particular, such a result leads to a concentration of the invariant distribution around the global minima of U. Here, except in the linear case, we have no closed formula for the invariant distribution, but we prove that an LDP can still be obtained. Then, in the one-dimensional case and under some assumptions on the second derivative of U, we obtain bounds on the rate function that lead to concentration around the global minima.
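For orientation, in the non-memory (overdamped Langevin) case the abstract refers to, the invariant distribution and the resulting rate function can be written down explicitly. The following is a standard textbook sketch, not taken from the paper itself:

```latex
\[
dX_t = -\nabla U(X_t)\,dt + \sigma\,dB_t,
\qquad
\mu_\sigma(dx) = \frac{1}{Z_\sigma}\,e^{-2U(x)/\sigma^2}\,dx .
\]
% Laplace's method gives, as \sigma \to 0,
\[
\sigma^2 \log \mu_\sigma(A) \;\longrightarrow\;
-\,2\inf_{x \in A}\Bigl(U(x) - \min_{\mathbb{R}^d} U\Bigr),
\]
% so the rate function I(x) = 2\,(U(x) - \min U) vanishes
% exactly on the global minima of U, which is the concentration
% phenomenon the abstract describes.
```

The point of the paper is that the memory term destroys this closed formula for the invariant distribution, yet an LDP of the same flavour survives.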

### Piecewise deterministic simulated annealing

2014


Abstract

Given an energy potential on the Euclidean space, a piecewise deterministic Markov process is designed to sample the corresponding Gibbs measure. In dimension one, an Eyring–Kramers formula is obtained for the exit time of the domain of a local minimum at low temperature, and a necessary and sufficient condition is given on the cooling schedule in a simulated annealing algorithm to ensure the process converges to the set of global minima. This condition is similar to the classical one for diffusions and involves the critical depth of the potential.

**1 Introduction and main results**

**1.1 Simulated annealing**

The simulated annealing algorithm is a classical stochastic optimization algorithm, which can be seen as a descent algorithm perturbed by random, locally counter-productive moves to escape non-global minima. More precisely, consider U such that e^{-U} ∈ L^1(R^d) ...
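As a concrete illustration of the cooling-schedule condition mentioned above, here is a minimal simulated-annealing sketch using a plain Metropolis random walk (not the piecewise deterministic process of the paper) with the classical logarithmic schedule T_n = c / log(n + 2), where c should dominate the critical depth of the potential. The double-well potential U and all parameter values are illustrative choices, not taken from the paper:

```python
import math
import random

def U(x):
    # Tilted double-well: global minimum near x ≈ -1.02,
    # non-global local minimum near x ≈ +0.97.
    return (x * x - 1.0) ** 2 + 0.2 * x

def simulated_annealing(n_steps=200_000, c=1.0, step=0.5, seed=0):
    rng = random.Random(seed)
    x = 1.0          # start in the basin of the non-global minimum
    best = x
    for n in range(n_steps):
        # Logarithmic cooling; c must exceed the critical depth
        # (here the barrier seen from the local minimum is ≈ 0.8 < c).
        T = c / math.log(n + 2)
        y = x + rng.uniform(-step, step)
        dU = U(y) - U(x)
        # Metropolis acceptance: always accept descents, accept
        # ascents with probability exp(-dU / T).
        if dU <= 0 or rng.random() < math.exp(-dU / T):
            x = y
        if U(x) < U(best):
            best = x
    return best
```

With a sufficiently large c the chain escapes the right-hand basin and the best visited point lands near the global minimum; with c far below the critical depth it can remain trapped, which is exactly the dichotomy the condition in the abstract formalizes.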