- Antônio Luís Sombra de Medeiros
Recent years have witnessed unprecedented success of deep convolutional neural networks (CNNs) and generative adversarial networks (GANs) applied to single image super-resolution (SISR). However, CNN-based SISR methods often assume that the low-resolution (LR) image is bicubically downsampled from its high-resolution (HR) counterpart, which results in poor performance on images whose degradations do not follow this assumption. We propose a framework to learn a residual image super-resolver that handles multiple degradations, improving performance on natural images. Our basic premise is that the residuals between an upsampled LR image and its HR counterpart carry information about the true degradation and downsampling processes, controlled for the individual characteristics of the image. We show that learning residuals in image space improves performance in many cases.
In this work, we apply different CNN- and GAN-based models to predict the residual image given the LR image. The residual to be learned is obtained by subtracting a bicubically upscaled version of the LR image from the true HR image. The LR images are generated by applying a random blur degradation to the HR image followed by bicubic downsampling. We also generate residuals from three different downsampling methods at the LR image's spatial dimensions to use as additional features. Finally, we show that our method is able to learn the spatially upsampled, higher-dimensional residuals, and that we can recover detailed HR images by adding the generated high-resolution residual to the bicubically upsampled LR image.
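The degradation pipeline and the residual target described above can be sketched as follows. This is an illustrative example, not the thesis's exact pipeline: the Gaussian blur kernel, the scale factor of 2, and SciPy's order-3 spline interpolation (used here as a stand-in for bicubic resampling) are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def make_lr_and_residual(hr, scale=2, sigma=1.0):
    """Degrade an HR image and compute the image-space residual target.

    Hypothetical sketch: random blur + bicubic-like downsampling produce
    the LR image; the residual is HR minus the upscaled LR image.
    """
    # Blur degradation followed by order-3 spline (bicubic-like) downsampling
    blurred = gaussian_filter(hr, sigma=sigma)
    lr = zoom(blurred, 1.0 / scale, order=3)
    # Upscale the LR image back to HR spatial dimensions
    upscaled = zoom(lr, scale, order=3)
    # Residual target the network learns to predict
    residual = hr - upscaled
    return lr, upscaled, residual

rng = np.random.default_rng(0)
hr = rng.random((64, 64))          # toy single-channel HR image
lr, up, res = make_lr_and_residual(hr)
# At inference, adding a (perfectly) predicted residual to the upscaled
# LR image recovers the HR image exactly, by construction.
recovered = up + res
```

In practice the network only approximates `res`, so `recovered` is an estimate of the HR image rather than an exact reconstruction.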
*Text submitted by the student.
Committee members:
- Eduardo Fonseca Mendes (advisor) - FGV/EMAp
- Eduardo Antônio Barros da Silva - UFRJ
- Raul Queiroz Feitosa - PUC