Abstract
Semantic image manipulation (SIM) aims to generate realistic images from an input source image and a target text description, such that the generated images not only match the content of the description but also preserve the text-irrelevant features of the source image. This requires learning a good mapping between visual and linguistic features. Previous works on SIM can only generate images of limited resolution that typically lack fine, clear details. In this work, we aim to generate high-resolution, photo-realistic images for SIM. Specifically, we propose SIMGAN, a generative adversarial network (GAN)-based architecture capable of generating 256 × 256 images for SIM. We demonstrate the effectiveness of SIMGAN and its superiority over existing methods via qualitative and quantitative evaluation on the Caltech-200 and Oxford-102 datasets.
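The abstract describes conditioning image generation on both a source image and a text description. The paper's architecture is not detailed here, so the following is only a minimal NumPy sketch of the general fusion step a text-conditional generator performs: encode the source image to a feature vector, concatenate it with a text embedding, and decode back to image space. All names, sizes, and the single-linear-layer "generator" are hypothetical simplifications, not the SIMGAN model.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(imgs, w_enc):
    # Flatten each source image and project it to a feature vector.
    return np.tanh(imgs.reshape(imgs.shape[0], -1) @ w_enc)

def fuse_and_decode(img_feat, text_emb, w_dec, out_shape):
    # Concatenate image features with the text embedding, then decode
    # back to image space. This concatenation is the conditioning step
    # a text-conditional GAN generator performs (here collapsed to one
    # linear layer purely for illustration).
    z = np.concatenate([img_feat, text_emb], axis=1)
    out = np.tanh(z @ w_dec)  # tanh keeps outputs in [-1, 1], a common image range
    return out.reshape((img_feat.shape[0],) + out_shape)

# Hypothetical sizes: 4 source images of 8x8x3, 16-dim text embeddings.
batch, h, w, c, d_img, d_txt = 4, 8, 8, 3, 32, 16
imgs = rng.standard_normal((batch, h, w, c))
texts = rng.standard_normal((batch, d_txt))

w_enc = rng.standard_normal((h * w * c, d_img)) * 0.1
w_dec = rng.standard_normal((d_img + d_txt, h * w * c)) * 0.1

feats = encode_image(imgs, w_enc)
fake = fuse_and_decode(feats, texts, w_dec, (h, w, c))
print(fake.shape)  # (4, 8, 8, 3)
```

In a real GAN-based system the encoder and decoder would be deep convolutional networks trained adversarially against a discriminator that scores image-text pairs; the sketch only shows how the two input modalities are combined.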
| Original language | English |
|---|---|
| Title of host publication | 2019 IEEE International Conference on Image Processing, ICIP 2019 - Proceedings |
| Publisher | IEEE Computer Society |
| Pages | 734-738 |
| Number of pages | 5 |
| ISBN (Electronic) | 9781538662496 |
| DOIs | |
| Publication status | Published - Sept 2019 |
| Externally published | Yes |
| Event | 26th IEEE International Conference on Image Processing, ICIP 2019 - Taipei, Taiwan, Province of China. Duration: 22 Sept 2019 → 25 Sept 2019 |
Publication series
| Name | Proceedings - International Conference on Image Processing, ICIP |
|---|---|
| Volume | 2019-September |
| ISSN (Print) | 1522-4880 |
Conference
| Conference | 26th IEEE International Conference on Image Processing, ICIP 2019 |
|---|---|
| Country/Territory | Taiwan, Province of China |
| City | Taipei |
| Period | 22/09/19 → 25/09/19 |
Bibliographical note
Publisher Copyright: © 2019 IEEE.
Keywords
- adversarial learning
- generative model
- image generation
- semantic image manipulation