Performance Improvement of SRGAN's Discriminator via Mutual Distillation

  • Yeojin Lee (Division of Electronics and Communications Engineering, Pukyong National University) ;
  • Hanhoon Park (Division of Electronics and Communications Engineering, Pukyong National University)
  • Received : 2022.09.06
  • Accepted : 2022.09.29
  • Published : 2022.09.30

Abstract

Mutual distillation is a knowledge distillation method that guides a cohort of neural networks to learn cooperatively by transferring knowledge between them, without the help of a teacher network. This paper aims to confirm whether mutual distillation is also applicable to super-resolution networks. To this end, we conduct experiments that apply mutual distillation to the discriminators of SRGANs and analyze its effect on SRGAN's performance. The experiments confirmed that SRGANs whose discriminators shared their knowledge through mutual distillation can produce super-resolution images with improved quantitative and qualitative quality.
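The core of mutual distillation (deep mutual learning) described above is that each network in the cohort adds a KL-divergence term pulling its class posterior toward its peer's. A minimal sketch, assuming two discriminators that each emit real/fake logits; the function names (`softmax`, `kl_divergence`, `mutual_loss`) are illustrative and not taken from the paper:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): divergence of this network's prediction q
    # from its peer's prediction p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_loss(own_logits, peer_logits, task_loss):
    # Each discriminator minimizes its usual adversarial task loss
    # plus a KL term toward its peer's posterior (no teacher network).
    p_peer = softmax(peer_logits)
    p_own = softmax(own_logits)
    return task_loss + kl_divergence(p_peer, p_own)
```

When the two discriminators agree, the KL term vanishes and each network trains on its task loss alone; disagreement adds a penalty that transfers knowledge between the peers.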

Acknowledgement

This research was supported by the "Regional Innovation Cluster Development Program (R&D, P0004797)" funded by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT).
