Fig. 1. Pipeline of the dialog system
Fig. 2. Distribution of sentence length (i.e., the number of tokens per sentence)
Fig. 3. CNN structure for domain classification
Fig. 4. Random Forest
Fig. 5. Precision of the compared models
Fig. 6. Recall of the compared models
Fig. 7. F1 score of the compared models
Table 1. Data samples used for experiments
Table 2. Data statistics
Table 3. Weighted performance of the compared models
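
The "weighted" scores in Table 3 are presumably the support-weighted averages of the per-domain precision, recall, and F1, i.e., each domain's score weighted by its share of the test sentences (this is the standard weighted-averaging convention; the exact scheme used in the paper is an assumption here):

P_w = \sum_{c=1}^{C} \frac{n_c}{N}\, P_c, \qquad
R_w = \sum_{c=1}^{C} \frac{n_c}{N}\, R_c, \qquad
F1_w = \sum_{c=1}^{C} \frac{n_c}{N}\, \frac{2\, P_c R_c}{P_c + R_c},

where C is the number of domains, P_c and R_c are the precision and recall for domain c, n_c is the number of test sentences belonging to domain c, and N = \sum_{c} n_c. Under this convention, F1_w averages the per-domain F1 scores rather than being the harmonic mean of P_w and R_w.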