Regret minimization in stochastic non-convex learning via a proximal-gradient approach - INRIA (Institut National de Recherche en Informatique et en Automatique)
Conference paper, 2021

Regret minimization in stochastic non-convex learning via a proximal-gradient approach

Abstract

Motivated by applications in machine learning and operations research, we study regret minimization with stochastic first-order oracle feedback in online, constrained, possibly non-smooth, non-convex problems. In this setting, the minimization of external regret is beyond reach, so we focus on a local regret measure defined via a proximal-gradient mapping. To achieve no (local) regret in this setting, we develop a prox-grad method based on stochastic first-order feedback, as well as a simpler method for the case where a perfect first-order oracle is available. Both methods are min-max order-optimal, and we also establish a bound on the number of prox-grad queries these methods require. As an important application of our results, we also obtain a link between online and offline non-convex stochastic optimization, manifested as a new prox-grad scheme with complexity guarantees matching those obtained via variance-reduction techniques.
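The proximal-gradient mapping mentioned in the abstract can be illustrated with a minimal sketch: one stochastic prox-grad step updates the iterate as x⁺ = prox(x − η·ĝ), where ĝ is a noisy gradient and the prox operator enforces the constraint (here, Euclidean projection onto a box). The toy loss, step size, noise level, and all function names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prox_box(v, lo=-1.0, hi=1.0):
    """Prox of the indicator of a box constraint = Euclidean projection."""
    return np.clip(v, lo, hi)

def prox_grad_step(x, grad, eta=0.1, prox=prox_box):
    """One proximal-gradient step: x+ = prox(x - eta * grad)."""
    return prox(x - eta * grad)

def stochastic_oracle(x, rng, sigma=0.01):
    """Noisy first-order oracle for a toy non-convex loss f(x) = sum(x^4 - x^2)."""
    true_grad = 4 * x**3 - 2 * x
    return true_grad + sigma * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x = np.array([0.9, -0.8])
for _ in range(200):
    x = prox_grad_step(x, stochastic_oracle(x, rng))
```

In this setting the local-regret-style stationarity measure is the prox-gradient residual ‖x − prox(x − η·∇f(x))‖/η, which shrinks as the iterate approaches a constrained stationary point.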
Main file: OnlineProxGrad.pdf (502.71 KB)
Origin: files produced by the author(s)

Dates and versions

hal-03043872 , version 1 (07-12-2020)

Identifiers

  • HAL Id: hal-03043872, version 1

Cite

Nadav Hallak, Panayotis Mertikopoulos, Volkan Cevher. Regret minimization in stochastic non-convex learning via a proximal-gradient approach. ICML 2021 - 38th International Conference on Machine Learning, Jul 2021, Vienna, Austria. ⟨hal-03043872⟩