FAST-ADAM in Semi-Supervised Generative Adversarial Networks

  • Kun, Li (Department of Computer Engineering, Dongseo University) ;
  • Kang, Dae-Ki (Department of Computer Engineering, Dongseo University)
  • Received : 2019.08.24
  • Accepted : 2019.09.05
  • Published : 2019.11.30

Abstract

Unsupervised neural networks did not attract much attention until the Generative Adversarial Network (GAN) was proposed. By pitting a generator network against a discriminator network, a GAN can extract the main characteristics of the original dataset and produce new data with similar latent statistics. However, it is well understood that training a GAN is difficult because of its instability: the discriminator usually performs too well while helping the generator learn the statistics of the training dataset, so the generated data are not compelling. Much research has focused on improving the stability and classification accuracy of GANs, but few studies address how to improve training efficiency and save training time. In this paper, we propose a novel optimizer, named FAST-ADAM, which integrates Lookahead with the ADAM optimizer to train the generator of a semi-supervised generative adversarial network (SSGAN). We assess the feasibility and performance of our optimizer experimentally on the Canadian Institute For Advanced Research 10 (CIFAR-10) benchmark dataset. The experimental results show that FAST-ADAM helps the generator reach convergence faster than the original ADAM while maintaining comparable training accuracy.
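The abstract describes FAST-ADAM as Lookahead integrated with the ADAM optimizer. The paper's exact implementation is not given here, but the general Lookahead-over-ADAM scheme (k fast ADAM steps, then one interpolating slow step) can be sketched as follows; the function names, hyperparameters (k=5, alpha=0.5), and the toy quadratic objective are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update (Kingma & Ba, 2015) with bias correction."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

def lookahead_adam(grad_fn, theta0, steps=500, k=5, alpha=0.5):
    """Lookahead (Zhang et al., 2019) wrapped around ADAM:
    the fast weights take k ADAM steps, then the slow weights
    move a fraction alpha toward them and the fast weights reset."""
    slow = theta0.copy()
    fast = theta0.copy()
    m = np.zeros_like(theta0)
    v = np.zeros_like(theta0)
    t = 0
    for step in range(steps):
        t += 1
        fast, m, v = adam_step(fast, grad_fn(fast), m, v, t)
        if (step + 1) % k == 0:
            slow = slow + alpha * (fast - slow)  # slow-weight step
            fast = slow.copy()                   # restart fast weights
    return slow

# Toy check: minimize f(theta) = ||theta||^2, whose gradient is 2*theta.
theta = lookahead_adam(lambda x: 2 * x, np.array([1.0, -1.0]))
```

In the SSGAN setting of the paper, `grad_fn` would be the generator's gradient from the adversarial loss; the slow-weight interpolation is what Lookahead adds on top of plain ADAM.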

References

  1. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative Adversarial Nets." in Proc. Neural Information Processing Systems 2014, pp. 2672-2680, Dec. 8-13, 2014.
  2. D.P. Kingma, and J. Ba, "Adam: A method for stochastic optimization," in Proc. 3rd International Conference on Learning Representations, May 7-9, 2015.
  3. M.R. Zhang, J. Lucas, G. Hinton, and J. Ba, "Lookahead Optimizer: k steps forward, 1 step back," in Proc. Neural Information Processing Systems 2019, Poster, Dec. 8-14, 2019.
  4. A. Odena, "Semi-supervised learning with generative adversarial networks," in Proc. 33rd International Conference on Machine Learning, Workshop Paper, June 19-24, 2016.
  5. M. Mirza, and S. Osindero, "Conditional generative adversarial nets," arXiv preprint arXiv:1411.1784, 2014.
  6. A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2015.
  7. T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, "Improved techniques for training GANs," in Proc. Neural Information Processing Systems 2016, pp. 2234-2242, Dec. 5-10, 2016.
  8. A. Krizhevsky, V. Nair, and G. Hinton, "The CIFAR-10 dataset," online: http://www.cs.toronto.edu/kriz/cifar.html, 2014.
  9. G. Agrawal, and D.-K. Kang, "Wine Quality Classification with Multilayer Perceptron," International Journal of Internet, Broadcasting and Communication (IJIBC), 10(2):25-30, May 2018. DOI: http://dx.doi.org/10.7236/IJIBC.2016.8.4.19
  10. J. Ho, and D.-K. Kang, "Ensemble-By-Session Method on Keystroke Dynamics based User Authentication," International Journal of Internet, Broadcasting and Communication (IJIBC), 8(4), November 2016. DOI: https://doi.org/10.7236/IJIBC.2018.10.2.5