Learning Submodular Losses with the Lovász Hinge - INRIA - Institut National de Recherche en Informatique et en Automatique
Conference paper — Year: 2015

Learning Submodular Losses with the Lovász Hinge

Abstract

Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. However, gradient or cutting-plane computation for these functions is NP-hard for non-supermodular loss functions. We propose instead a novel convex surrogate loss function for submodular losses, the Lovász hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a gradient or cutting-plane. As a result, we have developed the first tractable convex surrogates in the literature for submodular losses. We demonstrate the utility of this novel convex surrogate through a real-world image labeling task.
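The O(p log p) complexity claim can be understood via the Lovász extension of a set function, on which the Lovász hinge is built: sort the p inputs (O(p log p)), then accumulate discrete gradients of the loss along the sorted order (O(p) oracle calls). A minimal sketch, with illustrative function names not taken from the paper's code:

```python
def lovasz_extension(loss, s):
    """Lovász extension of a set function `loss` (with loss(frozenset()) == 0),
    evaluated at a score vector s of length p.

    Complexity: one O(p log p) sort, then O(p) oracle accesses to `loss`.
    """
    p = len(s)
    # Sort indices by decreasing score.
    order = sorted(range(p), key=lambda i: -s[i])
    total = 0.0
    prev = 0.0           # loss of the empty set is assumed to be 0
    chosen = set()
    for i in order:
        chosen.add(i)
        cur = loss(frozenset(chosen))
        # Weight each score by the discrete gradient of the loss.
        total += s[i] * (cur - prev)
        prev = cur
    return total
```

For a modular loss such as the Hamming loss, loss(A) = |A|, every discrete gradient is 1 and the extension reduces to the sum of the scores, matching the intuition that the Lovász hinge recovers a standard additive hinge in the modular case.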
Main file: YU_ICML2015.pdf (1.13 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01151823 , version 1 (13-05-2015)

Identifiers

  • HAL Id : hal-01151823 , version 1

Cite

Jiaqian Yu, Matthew Blaschko. Learning Submodular Losses with the Lovász Hinge. International Conference on Machine Learning, Jul 2015, Lille, France. ⟨hal-01151823⟩
162 Views
762 Downloads
