Acknowledgement
This work was supported by the 2021 Dongseo University "Dongseo Cluster Project" Research Fund (DSU-20210007).
References
- I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," Report, 2014.
- Y. Jeong and G. Choi, "Efficient iris recognition using deep-learning convolution neural network(CNN)," J. of the Korea Institute of Electronic Communication Sciences, vol. 15, no. 3, 2020, pp. 521-526. https://doi.org/10.13067/JKIECS.2020.15.3.521
- J. Yoo, "A Design of Small Scale Deep CNN Model for Facial Expression Recognition using the Low Resolution Image Datasets," J. of the Korea Institute of Electronic Communication Sciences, vol. 16, no. 1, 2021, pp. 75-80. https://doi.org/10.13067/JKIECS.2021.16.1.75
- C. Donahue, J. McAuley, and M. Puckette, "Adversarial audio synthesis," Report, 2018.
- S. Huang, Q. Li, C. Anil, X. Bao, S. Oore, and R. Grosse, "TimbreTron: A WaveNet(CycleGAN(CQT(Audio))) pipeline for musical timbre transfer," Report, 2018.
- R. Tolosana, R. Vera-Rodriguez, J. Fierrez, A. Morales, and J. Ortega-Garcia, "Deepfakes and beyond: A survey of face manipulation and fake detection," Information Fusion, vol. 64, 2020, pp. 131-148. https://doi.org/10.1016/j.inffus.2020.06.014
- A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," Report, 2015.
- P. Isola, J. Zhu, T. Zhou, and A. A. Efros, "Image-to-Image Translation with Conditional Adversarial Networks," In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, June 2017, pp. 5967-5976.
- T. Park, M. Liu, T. Wang, and J. Zhu, "Semantic image synthesis with spatially-adaptive normalization," In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, June 2019, pp. 2337-2346.
- W. Sun and T. Wu, "Image synthesis from reconfigurable layout and style," In Proc. of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, Oct. 2019.
- J. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," In Proc. of the IEEE International Conference on Computer Vision, Venice, Italy, Oct. 2017, pp. 2223-2232.
- Y. Choi, M. Choi, M. Kim, J. Ha, S. Kim, and J. Choo, "StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation," In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, June 2018, pp. 8789-8797.
- T. Karras, S. Laine, and T. Aila, "A style-based generator architecture for generative adversarial networks," In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, June 2019, pp. 4401-4410.
- L. A. Gatys, A. S. Ecker, and M. Bethge, "Image style transfer using convolutional neural networks," In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, June 2016, pp. 2414-2423.
- D. Bau, J. Zhu, H. Strobelt, B. Zhou, J. Tenenbaum, W. T. Freeman, and A. Torralba, "GAN dissection: Visualizing and understanding generative adversarial networks," Report, 2018.
- K. Nazeri, E. Ng, T. Joseph, F. Z. Qureshi, and M. Ebrahimi, "EdgeConnect: Generative image inpainting with adversarial edge learning," Report, 2019.
- A. Telea, "An image inpainting technique based on the fast marching method," Journal of Graphics Tools, vol. 9, no. 1, 2004, pp. 23-34. https://doi.org/10.1080/10867651.2004.10487596
- M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, "Image inpainting," In Proc. of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, USA, July 2000, pp. 417-424.