Conference paper, 2022

Leveraging Local Domains for Image-to-Image Translation

Abstract

Image-to-image (i2i) networks struggle to capture local changes because they do not affect the global scene structure. For example, when translating from highway scenes to off-road, i2i networks readily pick up global color features but ignore traits that are obvious to humans, such as the absence of lane markings. In this paper, we leverage human knowledge about spatial domain characteristics, which we refer to as 'local domains', and demonstrate its benefit for image-to-image translation. Relying on simple geometrical guidance, we train a patch-based GAN on a small amount of source data and hallucinate a new, unseen domain, which subsequently eases transfer learning to the target. We experiment on three tasks ranging from unstructured environments to adverse weather. Our comprehensive evaluation shows that we generate realistic translations with minimal priors, training only on a few images. Furthermore, we show that all tested proxy tasks are significantly improved when trained on our translated images, without ever seeing the target domain during training.
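The abstract only states that a "simple geometrical guidance" restricts training to local domains for a patch-based GAN; the paper's exact procedure is not given here. The sketch below is an illustrative assumption of one way such guidance could work: sampling training patches only from a geometrically defined region (e.g., the road surface below a fixed horizon line). All names (sample_local_patches, region_mask) are hypothetical, not the authors' API.

```python
# Hedged sketch: geometry-guided patch sampling for a patch-based GAN.
# This is NOT the paper's implementation; it only illustrates the idea of
# restricting patches to a 'local domain' defined by a simple geometric prior.
import numpy as np

def sample_local_patches(image, region_mask, patch_size=64, n_patches=16, rng=None):
    """Sample square patches whose centers lie inside the local-domain mask."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = region_mask.shape
    half = patch_size // 2
    # Candidate centers: pixels inside the region, far enough from the borders.
    ys, xs = np.nonzero(region_mask)
    valid = (ys >= half) & (ys < h - half) & (xs >= half) & (xs < w - half)
    ys, xs = ys[valid], xs[valid]
    idx = rng.choice(len(ys), size=min(n_patches, len(ys)), replace=False)
    return [image[y - half:y + half, x - half:x + half]
            for y, x in zip(ys[idx], xs[idx])]

# Example geometric guidance: treat the lower third of the frame as the road region.
image = np.zeros((256, 512, 3), dtype=np.uint8)
mask = np.zeros((256, 512), dtype=bool)
mask[170:, :] = True
patches = sample_local_patches(image, mask, patch_size=64, n_patches=8)
```

Under this assumption, the sampled patches would feed the patch-based GAN so that only the designated local domain (here, the road area) drives the translation, which matches the abstract's claim of training with minimal priors and few source images.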

Dates and versions

hal-03498133, version 1 (20-12-2021)

Identifiers

Cite

Anthony Dell'Eva, Fabio Pizzati, Massimo Bertozzi, Raoul de Charette. Leveraging Local Domains for Image-to-Image Translation. International Conference on Computer Vision Theory and Applications (VISAPP), Feb 2022, Lisbon, Portugal. ⟨hal-03498133⟩
