Optimistic Mirror Descent in Saddle-Point Problems: Going the Extra (Gradient) Mile - INRIA - Institut National de Recherche en Informatique et en Automatique
Conference Paper, Year: 2019

Optimistic Mirror Descent in Saddle-Point Problems: Going the Extra (Gradient) Mile

Abstract

Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond. By necessity, most theoretical guarantees revolve around convex-concave (or even linear) problems; however, making theoretical inroads towards efficient GAN training depends crucially on moving beyond this classic framework. To make piecemeal progress along these lines, we analyze the behavior of mirror descent (MD) in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality, a property which we call coherence. We first show that ordinary, "vanilla" MD converges under a strict version of this condition, but not otherwise; in particular, it may fail to converge even in bilinear models with a unique solution. We then show that this deficiency is mitigated by optimism: by taking an "extra-gradient" step, optimistic mirror descent (OMD) converges in all coherent problems. Our analysis generalizes and extends the results of Daskalakis et al. [2018] for optimistic gradient descent (OGD) in bilinear problems, and makes concrete headway for provable convergence beyond convex-concave games. We also provide stochastic analogues of these results, and we validate our analysis by numerical experiments in a wide array of GAN models (including Gaussian mixture models, and the CelebA and CIFAR-10 datasets).
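The contrast the abstract draws between vanilla descent-ascent and the "extra-gradient" step can be illustrated on a toy bilinear problem. The sketch below uses the Euclidean special case on f(x, y) = x·y, where plain gradient descent-ascent is known to spiral away from the solution while the extra-gradient update contracts toward it; the function names and step size are illustrative assumptions, not the authors' implementation.

```python
import math

# Toy bilinear saddle-point problem: min_x max_y f(x, y) = x * y.
# Its unique saddle point is (0, 0). This is an illustrative sketch
# of the extra-gradient idea, not the paper's OMD implementation.

def field(x, y):
    # Descent direction for x and ascent direction for y,
    # written as a single vector field V(x, y) = (df/dx, -df/dy).
    return y, -x

def gda(x, y, eta=0.1, steps=200):
    # Vanilla gradient descent-ascent: step along V at the current point.
    for _ in range(steps):
        gx, gy = field(x, y)
        x, y = x - eta * gx, y - eta * gy
    return x, y

def extragradient(x, y, eta=0.1, steps=200):
    # Extra-gradient: take a leading (look-ahead) step, then update the
    # base point using the field evaluated at the intermediate point.
    for _ in range(steps):
        gx, gy = field(x, y)
        x_half, y_half = x - eta * gx, y - eta * gy
        gx, gy = field(x_half, y_half)
        x, y = x - eta * gx, y - eta * gy
    return x, y

x0, y0 = 1.0, 1.0
print(math.hypot(*gda(x0, y0)))            # distance from (0, 0) grows
print(math.hypot(*extragradient(x0, y0)))  # distance shrinks toward 0
```

Per iterate, vanilla GDA multiplies the distance to the saddle point by sqrt(1 + eta^2) > 1, whereas the extra-gradient update multiplies it by sqrt(1 - eta^2 + eta^4) < 1, which is the bilinear failure/repair phenomenon the abstract describes.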
Main file: Main.pdf (3.47 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02111937 , version 1 (26-04-2019)
hal-02111937 , version 2 (11-06-2019)

Identifiers

  • HAL Id : hal-02111937 , version 2

Cite

Panayotis Mertikopoulos, Bruno Lecouat, Houssam Zenati, Chuan-Sheng Foo, Vijay Chandrasekhar, et al. Optimistic Mirror Descent in Saddle-Point Problems: Going the Extra (Gradient) Mile. ICLR 2019 - 7th International Conference on Learning Representations, May 2019, New Orleans, United States. pp. 1-23. ⟨hal-02111937v2⟩
405 Views
690 Downloads
