Journal article, Electronic Journal of Probability, Year: 2013

Large Deviation Principle for invariant distributions of Memory Gradient Diffusions

Abstract

In this paper, we consider a class of diffusion processes based on a memory gradient descent, i.e. whose drift term is built as an average, over the whole past of the trajectory, of the gradient of a coercive function U. Under some classical assumptions on U, this type of diffusion is ergodic and admits a unique invariant distribution. With a view to optimization applications, we want to understand the behaviour of the invariant distribution as the diffusion coefficient goes to 0. In the non-memory case, the invariant distribution is explicit, and the so-called Laplace method shows that a Large Deviation Principle (LDP) holds with an explicit rate function, which leads to concentration of the invariant distribution around the global minima of U. Here, except in the linear case, we have no closed formula for the invariant distribution, but we show that an LDP can still be obtained. Then, in the one-dimensional case, we derive bounds on the rate function that imply concentration around the global minimum under some assumptions on the second derivative of U.
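As a rough illustration of the construction described in the abstract (not the paper's exact model, whose memory weighting is more general), the following minimal Python sketch simulates, via an Euler-Maruyama scheme, a one-dimensional diffusion whose drift is an exponentially weighted average of the gradient of U along the past trajectory. All names and parameter values here (lam, sigma, the double-well U) are illustrative assumptions, not taken from the paper.

    import numpy as np

    def memory_gradient_diffusion(grad_U, x0, sigma=0.05, lam=1.0,
                                  dt=1e-3, n_steps=200_000, seed=0):
        # Euler-Maruyama scheme for the coupled system
        #   dX_t = -Y_t dt + sigma dB_t,
        #   dY_t = lam * (grad_U(X_t) - Y_t) dt,
        # where Y_t is an exponential moving average of grad_U along the
        # past trajectory (one possible memory weighting; the paper treats
        # a more general class of averaged drifts).
        rng = np.random.default_rng(seed)
        x, y = float(x0), grad_U(x0)
        xs = np.empty(n_steps)
        for i in range(n_steps):
            x += -y * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            y += lam * (grad_U(x) - y) * dt
            xs[i] = x
        return xs

    # Coercive double-well potential U(x) = (x^2 - 1)^2 / 4, grad U(x) = x (x^2 - 1),
    # with global minima at +/- 1. For small sigma, the empirical law of X_t
    # concentrates near these minima, which is the phenomenon the LDP quantifies.
    path = memory_gradient_diffusion(lambda x: x * (x**2 - 1), x0=2.0)

Decreasing sigma in this sketch makes the histogram of path pile up near the global minima, mirroring the concentration of the invariant distribution that the rate-function bounds describe.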
Main file
GPP_EJP_revision_Long_26_11_fab.pdf (422.81 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00759188, version 1 (30-11-2012)

Identifiers

  • HAL Id: hal-00759188, version 1

Cite

Sébastien Gadat, Fabien Panloup, Clément Pellegrini. Large Deviation Principle for invariant distributions of Memory Gradient Diffusions. Electronic Journal of Probability, 2013, vol. 18, paper no. 81, 34 pp. ⟨hal-00759188⟩
