Single Cross-domain Semantic Guidance Network for Multimodal Unsupervised Image Translation

Publication Date

1-1-2023

Document Type

Conference Proceeding

Publication Title

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume

13833 LNCS

DOI

10.1007/978-3-031-27077-2_13

First Page

165

Last Page

177

Abstract

Multimodal image-to-image translation has received considerable attention for its flexibility and practicality. Existing methods lack a general and effective style representation and cannot capture different levels of stylistic semantic information from cross-domain images. Moreover, they ignore parallelism in cross-domain image generation: each generator is responsible for only a specific domain. To address these issues, we propose a novel Single Cross-domain Semantic Guidance Network (SCSG-Net) for coarse-to-fine, semantically controllable multimodal image translation. Images from different domains are mapped into a unified visual semantic latent space by a dual sparse feature pyramid encoder; the generative module then produces the output images by extracting a semantic style representation from the input images in a self-supervised manner, guided by adaptive discrimination. In particular, a single SCSG-Net accommodates users' needs across different styles and diverse scenarios. Extensive experiments on several benchmark datasets show that our method outperforms other state-of-the-art methods both quantitatively and qualitatively.
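
This record does not include the paper's code. The sketch below is a minimal, hypothetical PyTorch illustration of the pipeline the abstract describes: a single encoder shared across domains mapping images to a unified latent space, a style code extracted from a reference image, and one generator serving both translation directions. All module names, layer sizes, and the style-modulation scheme are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract.
# Architecture details here are assumed, not taken from the paper.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Maps images from any domain into a unified latent space."""
    def __init__(self, in_ch=3, latent_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, latent_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class StyleExtractor(nn.Module):
    """Pools a semantic style vector from a reference image's features."""
    def __init__(self, latent_ch=256, style_dim=64):
        super().__init__()
        self.fc = nn.Linear(latent_ch, style_dim)

    def forward(self, feat):
        # Global average pool over spatial dims, then project to a style code.
        return self.fc(feat.mean(dim=(2, 3)))

class Generator(nn.Module):
    """Single generator: decodes content features modulated by a style code."""
    def __init__(self, latent_ch=256, style_dim=64, out_ch=3):
        super().__init__()
        self.mod = nn.Linear(style_dim, latent_ch)  # style -> channel-wise scale
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, content, style):
        scale = self.mod(style).unsqueeze(-1).unsqueeze(-1)
        return self.net(content * scale)

# One encoder/generator pair handles both translation directions:
# swap which image supplies content and which supplies style.
enc, sty, gen = SharedEncoder(), StyleExtractor(), Generator()
x_a = torch.randn(1, 3, 64, 64)  # image from domain A (content source)
x_b = torch.randn(1, 3, 64, 64)  # image from domain B (style reference)
fake_b = gen(enc(x_a), sty(enc(x_b)))  # A's content rendered in B's style
print(fake_b.shape)  # torch.Size([1, 3, 64, 64])
```

Sampling different style references (or style codes) for the same content image would yield the multimodal, user-controllable outputs the abstract claims; the single shared encoder and generator are what distinguish this setup from per-domain generator designs.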

Funding Number

2020B1212060069

Funding Sponsor

Department of Natural Resources of Guangdong Province

Keywords

Multimodal image translation, Semantic guidance, Unsupervised learning

Department

Industrial and Systems Engineering
