Acknowledgments
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (NRF-2019R1A4A1029800).
References
- He, Zhengyu. "Deep Learning in Image Classification: A Survey Report." 2020 2nd International Conference on Information Technology and Computer Application (ITCA). IEEE, pp. 174-177. 2020.
- Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet classification with deep convolutional neural networks." Advances in Neural Information Processing Systems 25. pp. 1097-1105. 2012.
- He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770-778. 2016.
- Gao, Duan, Xiao Li, Yue Dong, Pieter Peers, Kun Xu, and Xin Tong. "Deep inverse rendering for high-resolution SVBRDF estimation from an arbitrary number of images." ACM Transactions on Graphics 38. pp. 1-15. 2019. https://doi.org/10.1145/3306346.3323042
- Deschaintre, Valentin, et al. "Single-image SVBRDF capture with a rendering-aware deep network." ACM Transactions on Graphics (TOG) 37. pp. 1-15. 2018. https://doi.org/10.1145/3197517.3201378
- Kampouris, Christos, et al. "Fine-grained material classification using micro-geometry and reflectance." European Conference on Computer Vision. Springer, Cham, pp.778-792. 2016.
- Hyeongil Nam and Jong-Il Park. "Normal map generation based on Pix2Pix for rendering fabric image." Proceedings of the Korean Society of Broadcast Engineers Conference. The Korean Society of Broadcast and Media Engineers. pp. 166-169. 2020.
- Isola, Phillip, et al. "Image-to-Image Translation with Conditional Adversarial Networks." 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 5967-5976. 2017.
- Oktay, Ozan, et al. "Attention U-Net: Learning where to look for the pancreas." arXiv preprint arXiv:1804.03999. 2018.
- So-hyun Lim and Jun-chul Chun. "Image-to-Image Translation Based on U-Net with R2 and Attention." J. Internet Comput. Serv. 21.4. pp. 9-16. 2020. https://doi.org/10.7472/JKSII.2020.21.4.9
- Zuo, Qiang, Songyu Chen, and Zhifang Wang. "R2AU-Net: Attention Recurrent Residual Convolutional Neural Network for Multimodal Medical Image Segmentation." Security and Communication Networks 2021. 2021.
- Li, Yanchun, Nanfeng Xiao, and Wanli Ouyang. "Improved generative adversarial networks with reconstruction loss." Neurocomputing. 323. pp. 363-372. 2019. https://doi.org/10.1016/j.neucom.2018.10.014
- Shi, Haoyue, et al. "Loss Functions for Person Image Generation." BMVC. 2020.
- Ding, Keyan, et al. "Image quality assessment: Unifying structure and texture similarity." arXiv preprint arXiv:2004.07728. 2020.
- Huang, Yanping, et al. "Gpipe: Efficient training of giant neural networks using pipeline parallelism." Advances in neural information processing systems 32. pp.103-112. 2019.
- Pixar One Twenty Eight by Pixar Animation Studios, https://renderman.pixar.com/
- Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. "Image quality assessment: from error visibility to structural similarity." IEEE Transactions on Image Processing, 13(4), pp. 600-612. 2004. https://doi.org/10.1109/TIP.2003.819861
- Zhang, Richard, et al. "The unreasonable effectiveness of deep features as a perceptual metric." Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 586-595. 2018.
- Bell, Sean, et al. "Material recognition in the wild with the materials in context database." Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3479-3487. 2015.
- L. Sharan, R. Rosenholtz, and E. H. Adelson. "Accuracy and speed of material categorization in real-world images." Journal of Vision, vol. 14, no. 9, article 12. 2014.
- Wang, Fei, et al. "Residual attention network for image classification." Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3156-3164. 2017.
- Xue, Jia, Hang Zhang, and Kristin Dana. "Deep texture manifold for ground terrain recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 558-567. 2018.
- Dosovitskiy, Alexey, et al. "An image is worth 16x16 words: Transformers for image recognition at scale." arXiv preprint arXiv:2010.11929. 2020.
- Chen, Chun-Fu, Quanfu Fan, and Rameswar Panda. "Crossvit: Cross-attention multi-scale vision transformer for image classification." arXiv preprint arXiv:2103.14899. 2021.