Domain adaptive image-to-image translation

Ying Cong Chen, Xiaogang Xu, Jiaya Jia

Research output: Contribution to journal › Conference article published in journal › peer-review

29 Citations (Scopus)

Abstract

Unpaired image-to-image translation (I2I) has achieved great success in various applications. However, its generalization capacity remains an open question. In this paper, we show that existing I2I models do not generalize well to samples outside the training domain. The cause is twofold. First, an I2I model may not work well when testing samples lie beyond its valid input domain. Second, results can be unreliable if the expected output is far from what the model was trained to produce. To address these issues, we propose the Domain Adaptive Image-To-Image translation (DAI2I) framework, which adapts an I2I model to out-of-domain samples. Our framework introduces two sub-modules: one maps testing samples into the valid input domain of the I2I model, and the other transforms the output of the I2I model into the expected results. Extensive experiments demonstrate that our framework improves the capacity of existing I2I models, allowing them to handle samples that are distinctively different from their primary targets.
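The two-sub-module design described in the abstract can be sketched as a wrapper around a frozen I2I model. This is a minimal illustrative sketch, not the authors' implementation: the class name `DAI2I`, the module names, and the toy lambda "models" (scalars standing in for images) are all assumptions made for clarity.

```python
# Hypothetical sketch of the DAI2I inference pipeline: two learned
# sub-modules wrap a pretrained, frozen I2I model.

class DAI2I:
    def __init__(self, map_in, i2i, map_out):
        self.map_in = map_in    # maps an out-of-domain sample into the I2I model's valid input domain
        self.i2i = i2i          # pretrained I2I translation model, kept frozen
        self.map_out = map_out  # transforms the I2I output toward the expected target domain

    def translate(self, x):
        x_adapted = self.map_in(x)      # step 1: adapt the input
        y_source = self.i2i(x_adapted)  # step 2: translate with the frozen I2I model
        return self.map_out(y_source)   # step 3: adapt the output

# Toy stand-ins: "images" are floats and "domains" are value ranges.
model = DAI2I(
    map_in=lambda x: x * 0.5,   # squash into the I2I model's input range
    i2i=lambda x: x + 1.0,      # the frozen translation itself
    map_out=lambda y: y * 2.0,  # lift back toward the out-of-domain range
)
print(model.translate(4.0))  # 4*0.5 = 2, 2+1 = 3, 3*2 = 6 → 6.0
```

Only `map_in` and `map_out` would be trained in such a scheme; the I2I model's weights stay fixed, which is what lets an existing model be reused on out-of-domain samples.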

Original language: English
Article number: 9156656
Pages (from-to): 5273-5282
Number of pages: 10
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
DOIs
Publication status: Published - 2020
Externally published: Yes
Event: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 - Virtual, Online, United States
Duration: 14 Jun 2020 - 19 Jun 2020

Bibliographical note

Publisher Copyright:
©2020 IEEE.

