Super Resolution Fusion Scheme for General- and Face Dataset


  • Mun, Jun Won (Dept. of Electrical and Electronic Eng., Graduate School, Yonsei University)
  • Kim, Jae Seok (Dept. of Electrical and Electronic Eng., Graduate School, Yonsei University)
  • Received : 2019.08.13
  • Accepted : 2019.10.28
  • Published : 2019.11.30

Abstract

Super resolution aims to convert a low-resolution image with coarse details into a corresponding high-resolution image with refined details. Over the past decades, performance has improved greatly thanks to progress in deep learning models. However, a universal solution that handles a variety of objects remains a challenging issue. We observe that super resolution models trained on a general dataset perform poorly on faces. In this paper, we propose a super resolution fusion scheme that works well on both general and face datasets, as a step toward a more universal solution. In addition, an object-specific feature extractor is employed for better reconstruction performance. In our experiments, we compare our fused image against super-resolved images from one of the state-of-the-art deep learning models trained on the DIV2K and FFHQ datasets. Quantitative and qualitative evaluations show that our fusion scheme works well on both datasets. We expect our fusion scheme to be effective for other objects on which general models perform poorly, leading toward a universal solution.
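The abstract does not spell out the fusion mechanism, but the core idea it describes, combining the output of a general-purpose model with that of a face-specialized model, can be reduced to region-based blending. The sketch below is an illustrative assumption, not the authors' implementation: it presumes a per-pixel face mask (e.g., from a face/object detector) and alpha-blends the two super-resolved images, taking the face model's output inside face regions and the general model's output elsewhere.

```python
import numpy as np

def fuse_super_resolved(sr_general: np.ndarray,
                        sr_face: np.ndarray,
                        face_mask: np.ndarray) -> np.ndarray:
    """Blend two super-resolved images of shape (H, W, 3).

    face_mask is (H, W) with values in [0, 1]: 1.0 where a face
    was detected, 0.0 elsewhere (fractional values feather edges).
    This is a hypothetical sketch of mask-based fusion, not the
    paper's actual scheme.
    """
    alpha = face_mask.astype(np.float32)[..., None]        # (H, W, 1)
    fused = (alpha * sr_face.astype(np.float32)
             + (1.0 - alpha) * sr_general.astype(np.float32))
    return np.clip(fused, 0, 255).astype(np.uint8)
```

In practice the binary detector mask would be feathered (e.g., blurred) before blending so that the seam between the two model outputs is not visible.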


References

  1. S. Anwar, S. Khan, and N. Barnes, "A Deep Journey into Super-resolution: A Survey," arXiv Preprint arXiv:1904.07523, 2019.
  2. D.H. Lee, H.S. Lee, K.J. Lee, and H.J. Lee, "Fast Very Deep Convolutional Neural Network with Deconvolution for Super-resolution," Journal of Korea Multimedia Society, Vol. 20, No. 11, pp. 1750-1758, 2017. https://doi.org/10.9717/kmms.2017.20.11.1750
  3. C. Dong, C.C. Loy, K. He, and X. Tang, "Learning a Deep Convolutional Network for Image Super-resolution," Proceedings of the European Conference on Computer Vision, pp. 184-199, 2014.
  4. J. Kim, J. Lee, and K. Lee, "Accurate Image Super-resolution Using Very Deep Convolutional Networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1646-1654, 2016.
  5. B. Lim, S. Son, H. Kim, S. Nah, and K. Lee, "Enhanced Deep Residual Networks for Single Image Super-resolution," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 136-144, 2017.
  6. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, et al., "Generative Adversarial Nets," Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
  7. C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, et al., "Photo-realistic Single Image Super-resolution Using a Generative Adversarial Network," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681-4690, 2017.
  8. X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, et al., "ESRGAN: Enhanced Super-resolution Generative Adversarial Networks," Proceedings of the European Conference on Computer Vision Workshops, pp. 63-79, 2018.
  9. A. Jolicoeur-Martineau, "The Relativistic Discriminator: A Key Element Missing from Standard GAN," arXiv Preprint arXiv:1807.00734, 2018.
  10. T. Karras, S. Laine, and T. Aila, "A Style-Based Generator Architecture for Generative Adversarial Networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4401-4410, 2019.
  11. K. He, G. Gkioxari, P. Dollar, and R. Girshick, "Mask R-CNN," Proceedings of the IEEE International Conference on Computer Vision, pp. 2961-2969, 2017.
  12. J. Redmon and A. Farhadi, "YOLOv3: An Incremental Improvement," arXiv Preprint arXiv:1804.02767, 2018.
  13. Oxford University, Information Engineering, http://www.robots.ox.ac.uk/~albanie/pytorch-models.html (accessed July 22, 2019)
  14. E. Agustsson and R. Timofte, "NTIRE 2017 Challenge on Single Image Super-resolution: Dataset and Study," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 126-135, 2017.