Learning Wasserstein Embeddings - INRIA - Institut National de Recherche en Informatique et en Automatique
Conference paper. Year: 2018

Learning Wasserstein Embeddings

Abstract

The Wasserstein distance has recently received a lot of attention in the machine learning community, especially for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction, and generative models. However, its use is still limited by a heavy computational cost. Our goal is to alleviate this problem by providing an approximation mechanism that breaks its inherent complexity. It relies on finding an embedding in which the Euclidean distance mimics the Wasserstein distance. We show that such an embedding can be learned with a Siamese architecture coupled with a decoder network that maps from the embedding space back to the original input space. Once this embedding has been found, optimization problems in the Wasserstein space (e.g. barycenters, principal directions, or even archetypes) can be solved extremely fast. Numerical experiments supporting this idea are conducted on image datasets and show the wide potential benefits of our method.
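A minimal numpy sketch of the core idea, not the paper's actual architecture: the paper trains a deep Siamese encoder (plus a decoder) on images, whereas here a single shared linear map embeds 1-D histograms, for which the Wasserstein-1 distance has a closed form as the L1 distance between CDFs. Both inputs pass through the same weights, and the Euclidean distance between embeddings is regressed onto the true Wasserstein distance. All names and hyperparameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def wasserstein_1d(p, q):
    # W1 between histograms on a regular 1-D grid equals the L1
    # distance between their cumulative distribution functions.
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

# Toy data: pairs of random normalized histograms on a 20-bin grid.
n_bins, n_pairs = 20, 400
X = rng.random((n_pairs, n_bins)); X /= X.sum(1, keepdims=True)
Y = rng.random((n_pairs, n_bins)); Y /= Y.sum(1, keepdims=True)
target = np.array([wasserstein_1d(x, y) for x, y in zip(X, Y)])

# Siamese idea reduced to one shared linear embedding phi(x) = x @ W:
# minimize mean squared error between ||phi(x) - phi(y)|| and W1(x, y).
W = 0.01 * rng.standard_normal((n_bins, 10))
d0 = np.sqrt((((X - Y) @ W) ** 2).sum(1) + 1e-12)  # distances before training

lr = 0.05
for _ in range(2000):
    E = (X - Y) @ W                       # differences in embedding space
    d = np.sqrt((E ** 2).sum(1) + 1e-12)  # Euclidean distances
    err = d - target
    # Gradient of 0.5 * mean(err^2) with respect to the shared weights W.
    grad = (X - Y).T @ (E * (err / d / n_pairs)[:, None])
    W -= lr * grad

print("mean abs error before:", np.mean(np.abs(d0 - target)))
print("mean abs error after: ", np.mean(np.abs(d - target)))
```

The paper's deep encoder replaces the linear map, and the decoder term (absent here) lets barycenters or principal directions computed in the embedding be mapped back to input space.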
Main file: iclr2018_main.pdf (1.49 MB). Origin: files produced by the author(s).

Dates and versions

hal-01956306, version 1 (15-12-2018)

Identifiers

  • HAL Id: hal-01956306, version 1

Cite

Nicolas Courty, Rémi Flamary, Mélanie Ducoffe. Learning Wasserstein Embeddings. ICLR 2018 - 6th International Conference on Learning Representations, Apr 2018, Vancouver, Canada. pp.1-13. ⟨hal-01956306⟩
321 views
399 downloads
