Fig. 1. Generative adversarial network
Fig. 2. Architectures of GAN variants
Fig. 3. Generator model of DCGAN[10]
Fig. 4. Training images of DCGAN
Fig. 5. Generated army tank images
Fig. 6. Generated aircraft carrier images
Fig. 7. Training images of CycleGAN
Fig. 8. Training architecture of CycleGAN
Fig. 9. Image-to-image translation results
Fig. 10. Concept diagram for synthetic data generation and its applications
Table 1. Loss functions of GAN variants
References
- J. Tremblay et al., "Training Deep Networks with Synthetic Data: Bridging the Reality Gap by Domain Randomization," Computer Vision and Pattern Recognition (CVPR), 2018.
- Z. Zheng et al., "Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in Vitro," International Conference on Computer Vision (ICCV), 2017.
- M. V. Giuffrida et al., "ARIGAN: Synthetic Arabidopsis Plants using Generative Adversarial Network," International Conference on Computer Vision (ICCV) CVPPP Workshop, 2017.
- H. Yang et al., "Unsupervised Learning Based GANs Technology Review," KIMST Annual Conference Proceedings, 2017.
- H. Yang et al., "GANs Based Machine Learning Training Image Generation," KIMST Annual Conference Proceedings, 2017.
- H. Yang et al., "Visible-to-Infrared Image Translation using Adversarial Training," KIMST Annual Conference Proceedings, 2018.
- H. Yang et al., "Cross-domain Image Translation for Large Shape Transformation using Generative Adversarial Networks," Korea Computer Congress (KCC), 2018.
- H. Yang et al., "GANs Technology and Big Data Applications for Defense," Defense Science & Technology Plus, Vol. 239, 2018.
- I. Goodfellow et al., "Generative Adversarial Networks," arXiv:1406.2661.
- A. Radford et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks," arXiv:1511.06434.
- M. Arjovsky et al., "Wasserstein GAN," arXiv:1701.07875.
- X. Mao et al., "Least Squares Generative Adversarial Networks," arXiv:1611.04076.
- J. Donahue et al., "Adversarial Feature Learning," arXiv:1605.09782.
- X. Chen et al., "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets," arXiv:1606.03657.
- A. Spurr et al., "Guiding InfoGAN with Semi-Supervision," arXiv:1707.04487.
- A. Odena, "Semi-Supervised Learning with Generative Adversarial Networks," arXiv:1606.01583.
- M. Mirza et al., "Conditional Generative Adversarial Nets," arXiv:1411.1784.
- A. Odena et al., "Conditional Image Synthesis with Auxiliary Classifier GANs," arXiv:1610.09585.
- J. Zhao et al., "Energy-Based Generative Adversarial Networks," arXiv:1609.03126.
- D. Berthelot et al., "BEGAN: Boundary Equilibrium Generative Adversarial Networks," arXiv:1703.10717.
- O. Russakovsky et al., "ImageNet Large Scale Visual Recognition Challenge," International Journal of Computer Vision (IJCV), 2015.
- J. Zhu et al., "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks," International Conference on Computer Vision (ICCV), 2017.
- J. Johnson et al., "Perceptual Losses for Real-Time Style Transfer and Super-Resolution," European Conference on Computer Vision (ECCV), 2016.
- C. Ledig et al., "Photo-realistic Single Image Super-Resolution using a Generative Adversarial Network," Computer Vision and Pattern Recognition (CVPR), 2017.
- D. Chung et al., "Getting the Most Out of Multi-GPU on Inference Stage using Hadoop-Spark Cluster," GPU Technology Conference (GTC), 2018.
- E. Agustsson et al., "NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study," CVPR Workshop, 2017.