Conference paper, 2020

First-order Optimization for Superquantile-based Supervised Learning

Yassine Laguel
Jérôme Malick
Zaid Harchaoui

Abstract

Classical supervised learning via empirical risk (or negative log-likelihood) minimization hinges upon the assumption that the testing distribution coincides with the training distribution. This assumption can be challenged in modern applications of machine learning, where learning machines may operate at prediction time on testing data whose distribution departs from that of the training data. We revisit the superquantile regression method by proposing a first-order optimization algorithm to minimize a superquantile-based learning objective. The proposed algorithm is based on smoothing the superquantile function by infimal convolution. Promising numerical results illustrate the potential of the approach for safer supervised learning.
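To make the abstract's approach concrete, the NumPy sketch below illustrates one way such an objective can be set up and minimized; it is not the authors' implementation. It uses the Rockafellar-Uryasev formulation of the superquantile (CVaR at level p) of per-example squared losses, smooths the plus function by its Moreau envelope (a simple instance of infimal-convolution smoothing), and runs plain gradient descent jointly on the model weights and the threshold. All names and parameters (smoothed_cvar_objective, mu, p, step) are hypothetical.

import numpy as np

def smooth_plus(r, mu):
    # Moreau envelope of t -> max(t, 0): quadratic on [0, mu], linear beyond.
    return np.where(r <= 0, 0.0,
           np.where(r <= mu, r**2 / (2 * mu), r - mu / 2))

def smooth_plus_grad(r, mu):
    # Derivative of the smoothed plus function, always in [0, 1].
    return np.clip(r / mu, 0.0, 1.0)

def smoothed_cvar_objective(w, eta, X, y, p, mu):
    # Rockafellar-Uryasev form of the superquantile at level p of the
    # per-example squared losses, with the plus function smoothed.
    residuals = X @ w - y
    losses = residuals ** 2
    r = losses - eta
    obj = eta + smooth_plus(r, mu).mean() / (1.0 - p)
    # Gradients via the chain rule.
    s = smooth_plus_grad(r, mu)
    grad_w = (X.T @ (s * 2.0 * residuals)) / len(y) / (1.0 - p)
    grad_eta = 1.0 - s.mean() / (1.0 - p)
    return obj, grad_w, grad_eta

# Toy usage: gradient descent on (w, eta) for synthetic linear data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
w, eta, step = np.zeros(5), 0.0, 0.01
for _ in range(1000):
    obj, gw, geta = smoothed_cvar_objective(w, eta, X, y, p=0.9, mu=0.1)
    w -= step * gw
    eta -= step * geta

The smoothing parameter mu trades off fidelity to the nonsmooth superquantile against the conditioning of the gradient; smaller mu approximates the plus function more tightly but makes the gradient change more abruptly near zero.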
Main file
spqr-paper.pdf (309.25 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02953846, version 1 (30-09-2020)

Identifiers

Cite

Yassine Laguel, Jérôme Malick, Zaid Harchaoui. First-order Optimization for Superquantile-based Supervised Learning. IEEE International Workshop on Machine Learning for Signal Processing, Sep 2020, Espoo, Finland. ⟨10.1109/MLSP49062.2020.9231909⟩. ⟨hal-02953846⟩
