Preprint / Working Paper. Year: 2019

Learning a Text-Video Embedding from Incomplete and Heterogeneous Data

Abstract

Joint understanding of video and language is an active research area with many applications. Prior work in this domain typically relies on learning text-video embeddings. One difficulty with this approach, however, is the lack of large-scale annotated video-caption datasets for training. To address this issue, we aim to learn text-video embeddings from heterogeneous data sources. To this end, we propose a Mixture-of-Embedding-Experts (MEE) model with the ability to handle missing input modalities during training. As a result, our framework can learn improved text-video embeddings simultaneously from image and video datasets. We also show that MEE generalizes to other input modalities such as face descriptors. We evaluate our method on the task of video retrieval and report results for the MPII Movie Description and MSR-VTT datasets. The proposed MEE model demonstrates significant improvements and outperforms previously reported methods on both text-to-video and video-to-text retrieval tasks.
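The abstract only sketches the idea, so the following PyTorch snippet is a rough, hypothetical illustration of how a mixture-of-embedding-experts model can handle missing modalities: each video modality and the caption are projected into a joint space by a dedicated "expert", per-expert similarities are computed, and text-predicted gating weights are renormalized over the modalities actually present for each video. All names, dimensions, and design details below (e.g. `MixtureOfEmbeddingExperts`, `modality_dims`, the linear experts) are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixtureOfEmbeddingExperts(nn.Module):
    """Hypothetical MEE-style model: one embedding 'expert' per video
    modality, text-conditioned mixture weights, and renormalization over
    the experts that are actually available for a given video."""

    def __init__(self, text_dim, modality_dims, embed_dim):
        super().__init__()
        self.modalities = list(modality_dims)
        # One projection ("expert") per video modality into the joint space.
        self.video_experts = nn.ModuleDict(
            {m: nn.Linear(d, embed_dim) for m, d in modality_dims.items()}
        )
        # A matching text projection per expert.
        self.text_experts = nn.ModuleDict(
            {m: nn.Linear(text_dim, embed_dim) for m in modality_dims}
        )
        # Gating: mixture weights over experts, predicted from the text.
        self.gate = nn.Linear(text_dim, len(self.modalities))

    def forward(self, text, videos, masks):
        """text: (B, text_dim) pooled caption features.
        videos: dict modality -> (B, dim) features (zeros where missing).
        masks: dict modality -> (B,) 1.0 if that modality is available.
        Returns a (B, B) text-to-video similarity matrix."""
        sims, avail = [], []
        for m in self.modalities:
            t = F.normalize(self.text_experts[m](text), dim=-1)          # (B, E)
            v = F.normalize(self.video_experts[m](videos[m]), dim=-1)    # (B, E)
            sims.append(t @ v.t())                                       # (B, B)
            avail.append(masks[m])
        sims = torch.stack(sims, dim=0)                  # (K, B, B)
        avail = torch.stack(avail, dim=0)                # (K, B) per-video availability
        w = torch.softmax(self.gate(text), dim=-1).t()   # (K, B) per-text weights
        # Weight for (text i, video j, expert k) is w[k, i] * avail[k, j],
        # renormalized over available experts so missing modalities are ignored.
        weights = w.unsqueeze(2) * avail.unsqueeze(1)    # (K, B, B)
        weights = weights / weights.sum(dim=0, keepdim=True).clamp(min=1e-6)
        return (weights * sims).sum(dim=0)               # (B, B)


# Hypothetical usage with two modalities; audio is missing for some videos.
model = MixtureOfEmbeddingExperts(
    text_dim=300, modality_dims={"appearance": 2048, "audio": 128}, embed_dim=256
)
B = 4
text = torch.randn(B, 300)
videos = {"appearance": torch.randn(B, 2048), "audio": torch.randn(B, 128)}
masks = {"appearance": torch.ones(B), "audio": torch.tensor([1.0, 0.0, 1.0, 0.0])}
sim = model(text, videos, masks)  # (4, 4) text-to-video similarity scores
```

Renormalizing the gating weights over available experts, rather than zero-filling missing features, is one plausible way to realize the "missing input modalities" behavior the abstract describes: a video without audio is scored only by its remaining modalities, with their weights rescaled to sum to one.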
Main file: bare_jrnl_compsoc.pdf (10.05 MB). Origin: files produced by the author(s).

Dates and versions

hal-01975102, version 1 (09-01-2019)

Identifiers

  • HAL Id: hal-01975102, version 1

Cite

Antoine Miech, Ivan Laptev, Josef Sivic. Learning a Text-Video Embedding from Incomplete and Heterogeneous Data. 2019. ⟨hal-01975102⟩
157 views
308 downloads
