Acknowledgement
This research was conducted with the support of the 2023 Jeju Industry-Academia Convergence Foundation Project Lab support program.