Acknowledgement
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (2018R1C1B3008159).
References
- Krizhevsky, A., Sutskever, I. & Hinton, G. ImageNet classification with deep convolutional neural networks. Communications Of The ACM. 60, 84-90 (2017) https://doi.org/10.1145/3065386
- Xin, M. & Wang, Y. Research on image classification model based on deep convolution neural network. EURASIP Journal On Image And Video Processing. 2019, 1-11 (2019) https://doi.org/10.1186/s13640-018-0395-2
- Ali, M., Iqbal, T., Lee, K., Muqeet, A., Lee, S., Kim, L. & Bae, S. ERDNN: Error-Resilient Deep Neural Networks With a New Error Correction Layer and Piece-Wise Rectified Linear Unit. IEEE Access. 8 pp. 158702-158711 (2020) https://doi.org/10.1109/access.2020.3017211
- Ali, M. & Bae, S. Soft Error Adaptable Deep Neural Networks. Proceedings Of The Korean Society Of Broadcast Engineers Conference. pp. 241-243 (2020)
- Khan, L., Saad, W., Han, Z., Hossain, E. & Hong, C. Federated learning for internet of things: Recent advances, taxonomy, and open challenges. IEEE Communications Surveys & Tutorials. (2021)
- Jo, T. NTC (Neural Text Categorizer): neural network for text categorization. International Journal Of Information Studies. 2, 83-96 (2010)
- Kowsari, K., Jafari Meimandi, K., Heidarysafa, M., Mendu, S., Barnes, L. & Brown, D. Text classification algorithms: A survey. Information. 10, 150 (2019) https://doi.org/10.3390/info10040150
- Li, Q., Peng, H., Li, J., Xia, C., Yang, R., Sun, L., Yu, P. & He, L. A survey on text classification: From shallow to deep learning. ArXiv Preprint ArXiv:2008.00364. (2020)
- Ullah, I., Khan, S., Imran, M. & Lee, Y. RweetMiner: Automatic identification and categorization of help requests on twitter during disasters. Expert Systems With Applications. 176 pp. 114787 (2021) https://doi.org/10.1016/j.eswa.2021.114787
- Sinha, H., Awasthi, V. & Ajmera, P. Audio classification using braided convolutional neural networks. IET Signal Processing. 14, 448-454 (2020) https://doi.org/10.1049/iet-spr.2019.0381
- Park, J., Kumar, T. & Bae, S. Search of an Optimal Sound Augmentation Policy for Environmental Sound Classification with Deep Neural Networks. Proceedings Of The Korean Society Of Broadcast Engineers Conference. pp. 18-21 (2020)
- Kumar, T., Park, J. & Bae, S. Intra-Class Random Erasing (ICRE) augmentation for audio classification. Proceedings Of The Korean Society Of Broadcast Engineers Conference. pp. 244-247 (2020)
- Bank, D., Koenigstein, N. & Giryes, R. Autoencoders. ArXiv Preprint ArXiv:2003.05991. (2020)
- Zeng, K., Yu, J., Wang, R., Li, C. & Tao, D. Coupled deep autoencoder for single image super-resolution. IEEE Transactions On Cybernetics. 47, 27-37 (2015) https://doi.org/10.1109/TCYB.2015.2501373
- Zhai, X., Oliver, A., Kolesnikov, A. & Beyer, L. S4L: Self-supervised semi-supervised learning. Proceedings Of The IEEE/CVF International Conference On Computer Vision. pp. 1476-1485 (2019)
- Ebert, S. Semi-supervised learning for image classification. (2012)
- Luo, Y., Zhu, J., Li, M., Ren, Y. & Zhang, B. Smooth neighbors on teacher graphs for semi-supervised learning. Proceedings Of The IEEE Conference On Computer Vision And Pattern Recognition. pp. 8896-8905 (2018)
- Sohn, K., Berthelot, D., Li, C., Zhang, Z., Carlini, N., Cubuk, E., Kurakin, A., Zhang, H. & Raffel, C. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. ArXiv Preprint ArXiv:2001.07685. (2020)
- Shorten, C. & Khoshgoftaar, T. A survey on image data augmentation for deep learning. Journal Of Big Data. 6, 1-48 (2019) https://doi.org/10.1186/s40537-018-0162-3
- Wang, X., Wang, K. & Lian, S. A survey on face data augmentation for the training of deep neural networks. Neural Computing And Applications. pp. 1-29 (2020)
- Sinha, R., Lee, S., Rim, M. & Hwang, S. Data augmentation schemes for deep learning in an indoor positioning application. Electronics. 8, 554 (2019) https://doi.org/10.3390/electronics8050554
- Luo, C., Zhu, Y., Jin, L. & Wang, Y. Learn to augment: Joint data augmentation and network optimization for text recognition. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition. pp. 13746-13755 (2020)
- Gontijo-Lopes, R., Smullin, S., Cubuk, E. & Dyer, E. Tradeoffs in Data Augmentation: An Empirical Study. International Conference On Learning Representations. (2020)
- Jorge, J., Vieco, J., Paredes, R., Sanchez, J. & Benedi, J. Empirical Evaluation of Variational Autoencoders for Data Augmentation. VISIGRAPP (5: VISAPP). pp. 96-104 (2018)
- Elbattah, M., Loughnane, C., Guerin, J., Carette, R., Cilia, F. & Dequen, G. Variational Autoencoder for Image-Based Augmentation of Eye-Tracking Data. Journal Of Imaging. 7, 83 (2021) https://doi.org/10.3390/jimaging7050083
- Kimura, A., Ghahramani, Z., Takeuchi, K., Iwata, T. & Ueda, N. Few-shot learning of neural networks from scratch by pseudo example optimization. British Machine Vision Conference 2018, BMVC 2018. (2019)
- Berkhahn, F. & Geissler, D. Augmenting Variational Autoencoders with Sparse Labels: A Unified Framework for Unsupervised, Semi-(un)supervised, and Supervised Learning. ArXiv E-prints. (2019)
- Tachibana, R., Matsubara, T. & Uehara, K. Semi-supervised learning using adversarial networks. 2016 IEEE/ACIS 15th International Conference On Computer And Information Science (ICIS). pp. 1-6 (2016)
- Haiyan, W., Haomin, Y., Xueming, L. & Haijun, R. Semi-supervised autoencoder: A joint approach of representation and classification. 2015 International Conference On Computational Intelligence And Communication Networks (CICN). pp. 1424-1430 (2015)
- Arazo, E., Ortego, D., Albert, P., O'Connor, N. & McGuinness, K. Pseudo-labeling and confirmation bias in deep semi-supervised learning. 2020 International Joint Conference On Neural Networks (IJCNN). pp. 1-8 (2020)
- Wang, Y., Yao, Q., Kwok, J. & Ni, L. Generalizing from a few examples: A survey on few-shot learning. ACM Computing Surveys. 53, 1-34 (2021) https://doi.org/10.1145/3386252
- Hariharan, B. & Girshick, R. Low-shot visual recognition by shrinking and hallucinating features. Proceedings Of The IEEE International Conference On Computer Vision. pp. 3018-3027 (2017)
- Zeiler, M. Adadelta: an adaptive learning rate method. ArXiv Preprint ArXiv:1212.5701. (2012)
- Jiang, S., Zhu, Y., Liu, C., Song, X., Li, X. & Min, W. Dataset Bias in Few-shot Image Recognition. ArXiv Preprint ArXiv:2008.07960. (2020)
- Asadulaev, A. Interpretable Few-Shot Learning via Linear Distillation. ArXiv E-prints. (2019)
- Li, W., Wang, Z., Li, J., Polson, J., Speier, W. & Arnold, C. Semi-supervised learning based on generative adversarial network: a comparison between good GAN and bad GAN approach. ArXiv Preprint ArXiv:1905.06484. (2019)
- Kingma, D., Mohamed, S., Rezende, D. & Welling, M. Semi-supervised learning with deep generative models. Advances In Neural Information Processing Systems. pp. 3581-3589 (2014)
- Weston, J., Ratle, F., Mobahi, H. & Collobert, R. Deep learning via semi-supervised embedding. Neural Networks: Tricks Of The Trade. pp. 639-655 (2012)
- Li, Y., Pan, Q., Wang, S., Peng, H., Yang, T. & Cambria, E. Disentangled variational autoencoder for semi-supervised learning. Information Sciences. 482 pp. 73-85 (2019) https://doi.org/10.1016/j.ins.2018.12.057
- Lee, D. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. Workshop On Challenges In Representation Learning, ICML. 3 pp. 2 (2013)
- He, K., Zhang, X., Ren, S. & Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. Proceedings Of The IEEE International Conference On Computer Vision. pp. 1026-1034 (2015)
- Robbins, H. & Monro, S. A stochastic approximation method. The Annals Of Mathematical Statistics. pp. 400-407 (1951)
- Hussain, S. Resources for Urdu language processing. Proceedings Of The 6th Workshop On Asian Language Resources. (2008)
- Goodfellow, I., Shlens, J. & Szegedy, C. Explaining and harnessing adversarial examples. ArXiv Preprint ArXiv:1412.6572. (2014)
- Rauber, J., Brendel, W. & Bethge, M. Foolbox: A python toolbox to benchmark the robustness of machine learning models. ArXiv Preprint ArXiv:1707.04131. (2017)
- Plotz, T. & Fink, G. Markov models for offline handwriting recognition: a survey. International Journal On Document Analysis And Recognition (IJDAR). 12, 269 (2009) https://doi.org/10.1007/s10032-009-0098-4
- Lee, C. & Leedham, C. A new hybrid approach to handwritten address verification. International Journal Of Computer Vision. 57, 107-120 (2004) https://doi.org/10.1023/b:visi.0000013085.47268.e8
- Ul-Hasan, A., Ahmed, S., Rashid, F., Shafait, F. & Breuel, T. Offline printed Urdu Nastaleeq script recognition with bidirectional LSTM networks. 2013 12th International Conference On Document Analysis And Recognition. pp. 1061-1065 (2013)
- LeCun, Y. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/. (1998)
- Xiao, H., Rasul, K. & Vollgraf, R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. ArXiv Preprint ArXiv:1708.07747. (2017)
- Pu, Y., Gan, Z., Henao, R., Yuan, X., Li, C., Stevens, A. & Carin, L. Variational autoencoder for deep learning of images, labels and captions. Advances In Neural Information Processing Systems. pp. 2352-2360 (2016)