Abstract

Real-world single-image Super-Resolution (SR) aims to enhance the resolution of real Low-Resolution (LR) images and reconstruct their High-Resolution (HR) details. This differs from the traditional SR setting, where the LR images are synthetically created, typically with bicubic downsampling. Because the degradation process for real-world LR images is highly complex, SR of such images is much more challenging. Recent promising approaches to the Real-World Super-Resolution (RWSR) problem include domain adaptation, which creates realistic training pairs, and self-learning-based methods, which learn an image-specific SR model at test time. However, as domain adaptation is an inherently challenging problem in itself, SR models based solely on this approach are limited by the domain gap. Self-learning-based methods, in contrast, remove the need for paired training data by exploiting internal information in the LR image, but at the cost of slow prediction times. This paper proposes a novel framework, Semantic Segmentation Guided Real-World Super-Resolution (SSG-RWSR), which uses an auxiliary semantic segmentation network to guide the SR learning. This results in noise-free reconstructions with accurate object boundaries and enables training on real LR images. The latter allows our SR network to adapt to image-specific degradations without Ground-Truth (GT) reference images. We support the guidance with domain adaptation to faithfully reconstruct realistic textures and ensure color consistency. We evaluate our proposed method on two publicly available datasets and present state-of-the-art results in terms of perceptual image quality on both real and synthesized LR images.
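To illustrate the general idea of segmentation-guided SR training described in the abstract, here is a minimal sketch of one plausible realization: a frozen auxiliary segmentation network scores the super-resolved image, and its cross-entropy against (pseudo-)segmentation labels serves as an extra loss term for the SR network. This is an assumption for illustration only, not the paper's actual implementation; the class name, `weight` parameter, and label source are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegGuidedSRLoss(nn.Module):
    """Illustrative auxiliary loss (hypothetical, not the paper's code):
    a frozen segmentation network evaluates the super-resolved image,
    and the cross-entropy against (pseudo-)labels pushes the SR network
    toward reconstructions with crisp, semantically consistent boundaries."""

    def __init__(self, seg_net: nn.Module, weight: float = 0.1):
        super().__init__()
        self.seg_net = seg_net.eval()  # guidance network stays frozen
        for p in self.seg_net.parameters():
            p.requires_grad_(False)
        self.weight = weight           # assumed balancing factor

    def forward(self, sr_image: torch.Tensor, seg_labels: torch.Tensor) -> torch.Tensor:
        # sr_image: (B, 3, H, W) output of the SR network
        # seg_labels: (B, H, W) integer class map, e.g. pseudo-labels
        logits = self.seg_net(sr_image)  # (B, num_classes, H, W)
        return self.weight * F.cross_entropy(logits, seg_labels)

if __name__ == "__main__":
    # Stand-in 1x1-conv "segmentation network" with 5 classes, for demonstration.
    seg = nn.Conv2d(3, 5, kernel_size=1)
    criterion = SegGuidedSRLoss(seg, weight=0.1)
    sr = torch.rand(2, 3, 8, 8, requires_grad=True)
    labels = torch.randint(0, 5, (2, 8, 8))
    loss = criterion(sr, labels)
    loss.backward()
    print(loss.item())  # scalar guidance loss; gradients flow only to sr
```

Because the segmentation network is frozen, the loss shapes only the SR network's output, which is one way such guidance can substitute for GT reference images when training on real LR data.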
Original language: English
Title of host publication: Winter Conference on Applications of Computer Vision
Publication status: Accepted/In press - 2021
