Acknowledgement
This work was supported by the National Research Foundation of Korea (NRF) grants funded by the Korea government (Ministry of Science and ICT) in 2022 and 2024 (No. 2022R1A4A5033271, No. RS-2024-00348476).