Mitigating Mode Collapse using Multiple GANs Training System

  • J. Y. Shim (Institute for Information and Communication Technology, Korea University) ;
  • J. S. B. Choe (Department of Electrical and Computer Engineering, Korea University) ;
  • J.-K. Kim (Department of Electrical and Computer Engineering, Korea University)
  • Received : 2024.08.30
  • Accepted : 2024.09.22
  • Published : 2024.10.31

Abstract

Generative Adversarial Networks (GANs) are typically described as a two-player game between a generator and a discriminator, where the generator aims to produce realistic data and the discriminator tries to distinguish real data from generated data. However, this setup often leads to mode collapse, where the generator produces only limited variations of the data and fails to capture the full range of the target data distribution. This paper proposes a new training system to mitigate the mode collapse problem. Specifically, it extends the traditional two-player game of GANs into a multi-player game and introduces a peer-evaluation method to train multiple GANs effectively. In the peer-evaluation process, the samples generated by each GAN are evaluated by the other players. This provides external feedback and serves as an additional standard that helps each GAN recognize mode failure. This cooperative yet competitive training method encourages the generators to explore and capture a broader range of the data distribution, mitigating the mode collapse problem. This paper presents the detailed algorithm for peer-evaluation-based multi-GAN training and validates its performance through experiments.
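
The sketch below illustrates the general idea described in the abstract: several small GANs are trained side by side, and each generator receives, in addition to its own discriminator's signal, a peer-evaluation term in which its samples are scored by the other GANs' discriminators. This is a minimal, hypothetical sketch in PyTorch, not the paper's algorithm: the toy 1-D two-mode data, the network sizes, the binary cross-entropy objective, and the feedback weight `peer_weight` are assumptions made only for illustration.

```python
# Minimal sketch of peer-evaluation-based multi-GAN training (illustrative only;
# the data, architectures, and peer_weight term are assumptions, not the paper's).
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

num_gans, z_dim, batch = 3, 8, 128
gens  = [mlp(z_dim, 1) for _ in range(num_gans)]                            # generators
discs = [nn.Sequential(mlp(1, 1), nn.Sigmoid()) for _ in range(num_gans)]   # discriminators
g_opts = [torch.optim.Adam(g.parameters(), lr=2e-4) for g in gens]
d_opts = [torch.optim.Adam(d.parameters(), lr=2e-4) for d in discs]
bce = nn.BCELoss()
peer_weight = 0.5  # assumed weight of the peer-evaluation feedback term

def sample_real(n):
    # Toy target distribution: a 1-D mixture of two Gaussian modes at -3 and +3.
    modes = torch.randint(0, 2, (n, 1)).float() * 6.0 - 3.0
    return modes + 0.3 * torch.randn(n, 1)

for step in range(2000):
    real = sample_real(batch)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Usual adversarial update: each discriminator vs. its own generator.
    for i in range(num_gans):
        fake = gens[i](torch.randn(batch, z_dim)).detach()
        d_loss = bce(discs[i](real), ones) + bce(discs[i](fake), zeros)
        d_opts[i].zero_grad(); d_loss.backward(); d_opts[i].step()

    # 2) Generator update: own adversarial loss plus external feedback from the
    #    other players, i.e. how the peer discriminators judge this GAN's samples.
    for i in range(num_gans):
        fake = gens[i](torch.randn(batch, z_dim))
        g_loss = bce(discs[i](fake), ones)
        peer_feedback = [bce(discs[j](fake), ones) for j in range(num_gans) if j != i]
        g_loss = g_loss + peer_weight * torch.stack(peer_feedback).mean()
        g_opts[i].zero_grad(); g_loss.backward(); g_opts[i].step()
```

In this toy setup, a generator that collapses onto a single mode is penalized not only by its own discriminator but also by its peers, which is the kind of external feedback the abstract refers to; the actual way the peer-evaluation scores are computed and combined is given by the algorithm in the paper itself.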
