Funding Information
This research was supported by the R&D program of the Ministry of Culture, Sports and Tourism and the Korea Creative Content Agency (Project title: Development of Intelligent Heritage Sharing Platform Technology to Lead Digital Standards for Cultural Heritage, Project No. RS-2023-00219579), and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) in 2024 (No. 2021-0-01341, Artificial Intelligence Graduate School Program (Chung-Ang University)).
References
- 박동진, 김진경, A study on the quality evaluation model of museum digital content, 기업경영리뷰, 11(4), pp. 233-246, KNU Corporate Management Institute, Kongju National University, (2020). https://doi.org/10.20434/KRICM.2020.11.11.4.233
- 조영훈, 송형록, 이승은, The significance of 3D digital documentation of the Sacred Bell of Great King Seongdeok and the establishment of basic monitoring data, 박물관 보존과학, 24, pp. 55-74, National Museum of Korea, (2020). https://doi.org/10.22790/CONSERVATION.2020.24.0055
- 이현민, 김미수, A case study on the development and operation of classes using digital content for online education: focusing on virtual museums, 교양교육연구, 14(4), pp. 81-96, The Korean Association of General Education, (2020). https://doi.org/10.34163/jkits.2019.14.1.009
- 이홍식, 유재형, 이권준, 양석진, A study on the effects of media façade projection on the Ten-story Stone Pagoda from Gyeongcheonsa Temple Site, 박물관 보존과학, 28, pp. 51-64, National Museum of Korea, (2022). https://doi.org/10.22790/CONSERVATION.2022.28.0051
- 안소린, 조영훈, Construction of virtual reality content for excavated sites based on 360-degree panoramic images, 문화재 과학기술, 15(1), pp. 93-99, Institute of Conservation Science for Cultural Heritage, Kongju National University, (2020). https://doi.org/10.37563/SECH.15.1.10
- L. A. Gatys, A. S. Ecker, M. Bethge, Image style transfer using convolutional neural networks, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414-2423, IEEE, (2016).
- X. Huang, S. Belongie, Arbitrary style transfer in real-time with adaptive instance normalization, Proceedings of the IEEE International Conference on Computer Vision, pp. 1501-1510, IEEE, (2017).
- Y. Deng, F. Tang, W. Dong, C. Ma, X. Pan, L. Wang, C. Xu, StyTR2: Image style transfer with transformers, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11326-11336, IEEE, (2022).
- P. Chandran, G. Zoss, P. Gotardo, M. Gross, D. Bradley, Adaptive convolutions for structure-aware style transfer, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7972-7981, IEEE, (2021).
- A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, Attention is all you need, Advances in Neural Information Processing Systems, 30, pp. 1-15, (2017).
- S. Liu, J. Ye, X. Wang, Any-to-any style transfer, arXiv preprint arXiv:2304.09728, (2023).
- A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, P. Dollár, R. Girshick, Segment anything, arXiv preprint arXiv:2304.02643, (2023).
- E. Hoffer, N. Ailon, Deep metric learning using triplet network, Similarity-Based Pattern Recognition: Third International Workshop (SIMBAD 2015), pp. 12-14, (2015).
- I. Goodfellow, Y. Bengio, A. Courville, Deep learning, pp. 583-615, MIT Press, Cambridge, (2016).
- F. Schroff, D. Kalenichenko, J. Philbin, FaceNet: A unified embedding for face recognition and clustering, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815-823, IEEE, (2015).
- G. Hinton, O. Vinyals, J. Dean, Distilling the knowledge in a neural network, arXiv preprint arXiv:1503.02531, (2015).
- 시종욱, GAN-based style transfer from Korean portraits to ID photos, Master's thesis, Department of Computer Engineering, Graduate School, Kumoh National Institute of Technology, pp. 21-22, (2022).
- AI at Meta, SA-1B Dataset, https://ai.meta.com/datasets/segment-anything/, (2023).
- K. He, X. Chen, S. Xie, Y. Li, P. Dollár, R. Girshick, Masked autoencoders are scalable vision learners, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000-16009, IEEE, (2022).
- A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, N. Houlsby, An image is worth 16x16 words: Transformers for image recognition at scale, arXiv preprint arXiv:2010.11929, (2020).
- T. Y. Lin, P. Goyal, R. Girshick, K. He, P. Dollár, Focal loss for dense object detection, Proceedings of the IEEE International Conference on Computer Vision, pp. 2980-2988, IEEE, (2017).
- N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, S. Zagoruyko, End-to-end object detection with transformers, European Conference on Computer Vision, pp. 213-229, Springer, Cham, (2020).
- R. C. Gonzalez, R. E. Woods, Digital image processing, pp. 133-152, Pearson, London, (2008).
- N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1, pp. 886-893, IEEE, (2005).
- A. R. Lahitani, A. E. Permanasari, N. A. Setiawan, Cosine similarity to determine similarity measure: Study case in online essay assessment, 2016 4th International Conference on Cyber and IT Service Management, pp. 1-6, IEEE, (2016).
- T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen, Improved techniques for training GANs, Advances in Neural Information Processing Systems, pp. 1-9, (2016).
- 윤동식, 최상욱, 노성혁, 곽노윤, A multi-style transfer method using convolutional neural networks and GrabCut, Proceedings of the KICS Conference, pp. 962-963, Korean Institute of Communications and Information Sciences, (2022).