Conference paper, 2020

xMUDA: Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation

Maximilian Jaritz, Tuan-Hung Vu, Raoul de Charette, Emilie Wirbel, Patrick Pérez

Abstract

Unsupervised Domain Adaptation (UDA) is crucial to tackle the lack of annotations in a new domain. There are many multi-modal datasets, but most UDA approaches are uni-modal. In this work, we explore how to learn from multi-modality and propose cross-modal UDA (xMUDA) where we assume the presence of 2D images and 3D point clouds for 3D semantic segmentation. This is challenging as the two input spaces are heterogeneous and can be impacted differently by domain shift. In xMUDA, modalities learn from each other through mutual mimicking, disentangled from the segmentation objective, to prevent the stronger modality from adopting false predictions from the weaker one. We evaluate on new UDA scenarios including day-to-night, country-to-country and dataset-to-dataset, leveraging recent autonomous driving datasets. xMUDA brings large improvements over uni-modal UDA on all tested scenarios, and is complementary to state-of-the-art UDA techniques.
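
As a rough illustration of the mutual mimicking objective described above, the sketch below shows a symmetric KL-divergence loss between a 2D and a 3D stream in PyTorch. It is a minimal sketch, not the authors' released code: all tensor and function names are hypothetical, and the two heads per modality (a main segmentation head plus an auxiliary mimicry head) follow the paper's strategy of keeping mimicking disentangled from the segmentation objective.

import torch
import torch.nn.functional as F

def mimic_loss(target_probs, mimic_log_probs):
    # KL(target || mimic): push one modality's mimicry head toward the
    # other modality's main prediction. The target is detached so each
    # main head acts as a fixed teacher for the other stream.
    return F.kl_div(mimic_log_probs, target_probs.detach(),
                    reduction="batchmean")

N, C = 4096, 10  # N 3D points projected into the image, C semantic classes

# Dummy logits standing in for real network outputs: one main head and
# one mimicry head per modality, each predicting per-point class scores.
logits_2d_main, logits_2d_mimic = torch.randn(N, C), torch.randn(N, C)
logits_3d_main, logits_3d_mimic = torch.randn(N, C), torch.randn(N, C)

# Symmetric cross-modal loss: 2D mimics 3D and 3D mimics 2D.
loss_xm = (
    mimic_loss(F.softmax(logits_2d_main, dim=1),
               F.log_softmax(logits_3d_mimic, dim=1))
    + mimic_loss(F.softmax(logits_3d_main, dim=1),
                 F.log_softmax(logits_2d_mimic, dim=1))
)

Because the mimicry heads, rather than the main segmentation heads, absorb the cross-modal constraint, a strong modality is not forced to adopt false predictions from a weaker one, which matches the disentanglement argument in the abstract.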

Dates and versions

hal-02388974, version 1 (02-12-2019)

Identifiers

Cite

Maximilian Jaritz, Tuan-Hung Vu, Raoul de Charette, Emilie Wirbel, Patrick Pérez. xMUDA: Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation. Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2020, Virtual, United States. ⟨hal-02388974⟩
