Acknowledgement
This research was conducted as a result of the Artificial Intelligence Convergence Innovation Human Resources Development program (IITP-2023-RS-2023-00256629) and the University ICT Research Center program (IITP-2024-RS-2024-00437718), funded by the Ministry of Science and ICT and the Institute of Information & Communications Technology Planning & Evaluation (IITP).
References
- Vaswani, Ashish, et al. "Attention is all you need." Advances in Neural Information Processing Systems 30 (2017).
- Sherstinsky, Alex. "Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network." Physica D: Nonlinear Phenomena 404 (2020): 132306.
- Dosovitskiy, Alexey, et al. "An image is worth 16x16 words: Transformers for image recognition at scale." arXiv preprint arXiv:2010.11929 (2020).
- O'Shea, Keiron, and Ryan Nash. "An introduction to convolutional neural networks." arXiv preprint arXiv:1511.08458 (2015).
- Wang, Benyou, et al. "Encoding word order in complex embeddings." arXiv preprint arXiv:1912.12333 (2019).
- Shaw, Peter, Jakob Uszkoreit, and Ashish Vaswani. "Self-attention with relative position representations." arXiv preprint arXiv:1803.02155 (2018).
- Le, Ya, and Xuan Yang. "Tiny imagenet visual recognition challenge." CS 231N 7.7 (2015): 3.