Synthetic Image Dataset Generation for Defense using Generative Adversarial Networks

(Korean title, translated: A Study on the Application of Adversarial-Training Neural Network Technology for Generating a Synthetic Image Dataset for Defense)

  • Yang, Hunmin (Institute of Defense Advanced Research, Agency for Defense Development)
  • Received : 2018.09.19
  • Accepted : 2019.01.09
  • Published : 2019.02.05

Abstract

Generative adversarial networks (GANs) have received great attention in the machine learning field for their capacity to model high-dimensional, complex data distributions implicitly and to generate new data samples from the learned distribution. This paper investigates the training methodology, architectures, and various applications of generative adversarial networks. An experimental evaluation is also conducted on generating synthetic image datasets for defense using two types of GANs: the deep convolutional generative adversarial network (DCGAN) for military image generation, and the cycle-consistent generative adversarial network (CycleGAN) for visible-to-infrared image translation. Each model yields a wide diversity of high-fidelity synthetic images relative to its training set. This result opens up the possibility of training neural networks on inexpensive synthetic images while avoiding the enormous expense of collecting large amounts of hand-annotated real data.
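As a concrete illustration of the adversarial training described above (a minimal sketch, not the paper's implementation), the following toy GAN fits one-dimensional data with an affine generator and a logistic discriminator, using hand-derived gradient updates; all hyperparameters and the data distribution are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def sample_real(n):
    # Toy "real" data distribution: N(4, 1)
    return rng.normal(4.0, 1.0, n)

# Generator g(z) = a*z + b,  Discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0          # generator parameters
w, c = 1.0, 0.0          # discriminator parameters
lr, steps, batch = 0.02, 3000, 64

for _ in range(steps):
    z = rng.normal(0.0, 1.0, batch)
    xr, xf = sample_real(batch), a * z + b

    # Discriminator: gradient ascent on log D(x_real) + log(1 - D(G(z)))
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    gw = np.mean((1 - dr) * xr) - np.mean(df * xf)
    gc = np.mean(1 - dr) - np.mean(df)
    w, c = w + lr * gw, c + lr * gc

    # Generator: gradient ascent on log D(G(z)) (non-saturating loss)
    df = sigmoid(w * (a * z + b) + c)
    ga = np.mean((1 - df) * w * z)
    gb = np.mean((1 - df) * w)
    a, b = a + lr * ga, b + lr * gb

fake = a * rng.normal(0.0, 1.0, 10000) + b
print(float(np.mean(fake)))  # the generator mean drifts toward the real mean of 4
```

The same alternating two-player update, with deep convolutional networks in place of the affine/logistic models, is the training scheme underlying DCGAN.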

Fig. 1. Generative adversarial network

Fig. 2. Architectures of GAN variants

Fig. 3. Generator model of DCGAN[10]

Fig. 4. Training images of DCGAN

Fig. 5. Generated army tank images

Fig. 6. Generated aircraft carrier images

Fig. 7. Training images of CycleGAN

Fig. 8. Training architecture of CycleGAN
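The training architecture of Fig. 8 pairs two generators with a cycle-consistency constraint: translating visible→infrared→visible should reproduce the input, and likewise in the reverse direction. A minimal sketch of that L1 cycle loss, with simple gain/offset placeholders standing in for the deep convolutional generators:

```python
import numpy as np

def cycle_consistency_loss(G, F, x_visible, y_infrared):
    """L_cyc = E||F(G(x)) - x||_1 + E||G(F(y)) - y||_1 (Zhu et al., CycleGAN)."""
    forward = np.mean(np.abs(F(G(x_visible)) - x_visible))    # x -> y -> x
    backward = np.mean(np.abs(G(F(y_infrared)) - y_infrared)) # y -> x -> y
    return forward + backward

# Placeholder "generators": in CycleGAN these are deep conv nets;
# here an invertible gain/offset pair keeps the example self-contained.
G = lambda x: 0.5 * x + 1.0    # visible -> infrared (illustrative)
F = lambda y: 2.0 * (y - 1.0)  # infrared -> visible (exact inverse of G)

x = np.random.default_rng(1).uniform(0.0, 1.0, (4, 8, 8))  # toy visible batch
y = G(x)                                                   # toy infrared batch
loss = cycle_consistency_loss(G, F, x, y)
print(loss)  # inverse generators give a cycle loss of ~0
```

In full CycleGAN training this term is added, with a weighting coefficient, to the two adversarial losses so that unpaired visible and infrared images suffice for training.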

Fig. 9. Image-to-image translation results

Fig. 10. Concept diagram for synthetic data generation and its applications

Table 1. Loss functions of GAN variants

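The table graphic itself did not survive extraction; for reference, the adversarial losses of the variants cited in this paper take their standard forms, as given in Refs. [9], [11], and [12]:

```latex
% Standard GAN (Goodfellow et al. [9]): minimax cross-entropy objective
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)]
             + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

% LSGAN (Mao et al. [12]): least-squares objectives for D and G
\min_D \; \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}[(D(x)-1)^2]
        + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}[D(G(z))^2]
\qquad
\min_G \; \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}[(D(G(z))-1)^2]

% WGAN (Arjovsky et al. [11]): Wasserstein critic over 1-Lipschitz D
\min_G \max_{D \in \mathrm{Lip}_1} \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}[D(x)] - \mathbb{E}_{z \sim p_z}[D(G(z))]
```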
References

  1. J. Tremblay et al., "Training Deep Networks with Synthetic Data: Bridging the Reality Gap by Domain Randomization," Computer Vision and Pattern Recognition(CVPR), 2018.
  2. Z. Zheng et al., "Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in Vitro," International Conference on Computer Vision(ICCV), 2017.
  3. M. V. Giuffrida et al., "ARIGAN: Synthetic Arabidopsis Plants using Generative Adversarial Network," International Conference on Computer Vision(ICCV) CVPPP Workshop, 2017.
  4. H. Yang et al., "Unsupervised Learning Based GANs Technology Review," KIMST Annual Conference Proceedings, 2017.
  5. H. Yang et al., "GANs Based Machine Learning Training Image Generation," KIMST Annual Conference Proceedings, 2017.
  6. H. Yang et al., "Visible-to-Infrared Image Translation using Adversarial Training," KIMST Annual Conference Proceedings, 2018.
  7. H. Yang et al., "Cross-domain Image Translation for Large Shape Transformation using Generative Adversarial Networks," Korea Computer Congress (KCC), 2018.
  8. H. Yang et al., "GANs Technology and Big Data Applications for Defense," Defense Science & Technology Plus, Vol. 239, 2018.
  9. I. Goodfellow et al., "Generative Adversarial Networks," arXiv:1406.2661.
  10. A. Radford et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks," arXiv:1511.06434.
  11. M. Arjovsky et al., "Wasserstein GAN," arXiv:1701.07875.
  12. X. Mao et al., "Least Squares Generative Adversarial Networks," arXiv:1611.04076.
  13. J. Donahue et al., "Adversarial Feature Learning," arXiv:1605.09782.
  14. X. Chen et al., "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets," arXiv:1606.03657.
  15. A. Spurr et al., "Guiding InfoGAN with Semi-Supervision," arXiv:1707.04487.
  16. A. Odena, "Semi-Supervised Learning with Generative Adversarial Networks," arXiv:1606.01583.
  17. M. Mirza et al., "Conditional Generative Adversarial Nets," arXiv:1411.1784.
  18. A. Odena et al., "Conditional Image Synthesis with Auxiliary Classifier GANs," arXiv:1610.09585.
  19. J. Zhao et al., "Energy-Based Generative Adversarial Networks," arXiv:1609.03126.
  20. D. Berthelot et al., "BEGAN: Boundary Equilibrium Generative Adversarial Networks," arXiv:1703.10717.
  21. O. Russakovsky et al., "ImageNet Large Scale Visual Recognition Challenge," International Journal of Computer Vision(IJCV), 2015.
  22. J. Zhu, T. Park, P. Isola, A. Efros, "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks," International Conference on Computer Vision(ICCV), 2017.
  23. J. Johnson, et al., "Perceptual Losses for Real-Time Style Transfer and Super-Resolution," European Conference on Computer Vision(ECCV), 2016.
  24. C. Ledig, et al., "Photo-realistic Single Image Super-Resolution using a Generative Adversarial Network," Computer Vision and Pattern Recognition(CVPR), 2017.
  25. D. Chung et al., "Getting the Most Out of Multi-GPU on Inference Stage using Hadoop-Spark Cluster," GPU Technology Conference(GTC), 2018.
  26. E. Agustsson et al., "NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study," CVPR Workshop, 2017.