Acknowledgement
The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, for funding this research work through project number 20-UQU-IF-P3-001.
References
- B. Athiwaratkun, A. G. Wilson, and A. Anandkumar, "Probabilistic FastText for multi-sense word embeddings," arXiv preprint arXiv:1806.02901, 2018.
- A. B. Soliman, K. Eissa, and S. R. El-Beltagy, "AraVec: A set of Arabic word embedding models for use in Arabic NLP," Procedia Comput. Sci., vol. 117, pp. 256-265, 2017. https://doi.org/10.1016/j.procs.2017.10.117
- W. Antoun, F. Baly, and H. Hajj, "AraBERT: Transformer-based model for Arabic language understanding," arXiv preprint arXiv:2003.00104, 2020.
- W. Antoun, F. Baly, and H. Hajj, "AraGPT2: Pre-trained transformer for Arabic language generation," arXiv preprint arXiv:2012.15520, 2020.
- J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
- A. Vaswani et al., "Attention is all you need," in Advances in Neural Information Processing Systems (NIPS), 2017.
- J. Howard and S. Ruder, "Universal language model fine-tuning for text classification," arXiv preprint arXiv:1801.06146, 2018.
- M. Djandji, F. Baly, H. Hajj, et al., "Multi-task learning using AraBERT for offensive language detection," in Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, 2020, pp. 97-101.
- A. M. Abu Nada, E. Alajrami, A. A. Al-Saqqa, and S. S. Abu-Naser, "Arabic text summarization using AraBERT model using extractive text summarization approach," 2020.
- A. Al Sallab, M. Rashwan, H. Raafat, and A. Rafea, "Automatic Arabic diacritics restoration based on deep nets," in Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), 2014, pp. 65-72.
- A. Al-Sallab, R. Baly, H. Hajj, K. B. Shaban, W. El-Hajj, and G. Badaro, "AROMA: A recursive deep learning model for opinion mining in Arabic as a low resource language," ACM Trans. Asian Low-Resour. Lang. Inf. Process., vol. 16, no. 4, 2017.
- A. Magooda et al., "RDI-Team at SemEval-2016 Task 3: RDI unsupervised framework for text ranking," in Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016.
- N. A. P. Rostam and N. H. A. H. Malim, "Text categorisation in Quran and Hadith: Overcoming the interrelation challenges using machine learning and term weighting," J. King Saud Univ. - Comput. Inf. Sci., vol. 33, no. 6, pp. 658-667, 2019. https://doi.org/10.1016/j.jksuci.2019.03.007
- M. E. Peters et al., "Deep contextualized word representations," arXiv preprint arXiv:1802.05365, 2018.
- D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
- T. B. Brown et al., "Language models are few-shot learners," arXiv preprint arXiv:2005.14165, 2020.
- S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Comput., vol. 9, no. 8, pp. 1735-1780, 1997. https://doi.org/10.1162/neco.1997.9.8.1735
- J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Gated feedback recurrent neural networks," arXiv preprint arXiv:1502.02367, 2015.