• Title/Summary/Keyword: Word Embedding

Search results: 234

Comparative analysis of model performance for predicting the customer of cafeteria using unstructured data

  • Seungsik Kim;Nami Gu;Jeongin Moon;Keunwook Kim;Yeongeun Hwang;Kyeongjun Lee
    • Communications for Statistical Applications and Methods / v.30 no.5 / pp.485-499 / 2023
  • This study aimed to predict the number of meals served in a group cafeteria using machine learning. Menu features were created through the Word2Vec methodology and clustering, and a stacking ensemble model was constructed using Random Forest, Gradient Boosting, and CatBoost as sub-models. CatBoost had the best performance among the sub-models, and the ensemble model showed an additional 8% improvement. The study also found that the date variable had the greatest influence on the number of diners, followed by menu characteristics and other variables. The implications include the potential of machine learning to improve predictive performance and reduce food waste, as well as the removal of subjective elements from menu classification. Limitations of the research include the small number of data cases and a model structure that weakens when new menus or foreign words are absent from the training data. Future studies should aim to address these limitations.
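
As a rough illustration of the pipeline described in this abstract, the sketch below builds menu feature vectors with Word2Vec and fits a stacking ensemble of the three named sub-models on toy data. The menu clustering step and the date variables from the paper are omitted, and all data values and hyperparameters here are illustrative assumptions rather than the authors' setup.

    # Hedged sketch: Word2Vec menu features -> stacking ensemble (toy data only).
    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
    from sklearn.linear_model import LinearRegression
    from catboost import CatBoostRegressor

    menus = [["kimchi", "stew", "rice", "egg"], ["bulgogi", "salad", "rice"],
             ["pork", "cutlet", "soup"], ["bibimbap", "egg", "soup"],
             ["curry", "rice", "salad"], ["cold", "noodles", "kimchi"]]
    y = np.array([420, 515, 480, 390, 505, 450])      # meals served (toy targets)

    w2v = Word2Vec(sentences=menus, vector_size=50, window=3, min_count=1, epochs=100)

    def menu_vector(tokens):
        # average the Word2Vec vectors of a menu's tokens into one feature vector
        return np.mean([w2v.wv[t] for t in tokens], axis=0)

    X = np.vstack([menu_vector(m) for m in menus])

    stack = StackingRegressor(
        estimators=[("rf", RandomForestRegressor(random_state=0)),
                    ("gb", GradientBoostingRegressor(random_state=0)),
                    ("cat", CatBoostRegressor(iterations=50, verbose=0))],
        final_estimator=LinearRegression(), cv=3)
    stack.fit(X, y)
    print(stack.predict(X[:2]))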

Automatic Word Spacing of the Korean Sentences by Using End-to-End Deep Neural Network (종단 간 심층 신경망을 이용한 한국어 문장 자동 띄어쓰기)

  • Lee, Hyun Young;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering / v.8 no.11 / pp.441-448 / 2019
  • Previous research on automatic word spacing of Korean sentences corrected spacing errors by using n-gram based statistical techniques or a morphological analyzer to insert blanks at word boundaries. In this paper, we propose end-to-end automatic word spacing using a deep neural network. The automatic word spacing problem can be defined as a tag classification problem at the syllable level rather than the word level. For the contextual representation between syllables, a Bi-LSTM encodes the dependency relationships between syllables into a fixed-length vector in a continuous vector space using forward and backward LSTM cells. To perform automatic word spacing of Korean sentences, the fixed-length contextual vector produced by the Bi-LSTM is classified into an auto-spacing tag (B or I), and a blank is inserted in front of each B tag. For the tag classification step, we compose three types of classification networks: a feedforward neural network, a neural network language model, and a linear-chain CRF. To compare our models, we measure the automatic word spacing performance of each of the three classification networks. The linear-chain CRF shows better performance than the other models. We used the KCC150 corpus as training and test data.
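
The decoding rule described here (a blank is inserted in front of every B tag) can be made concrete with the short sketch below; the hand-written tags stand in for the Bi-LSTM classifier's output, so this illustrates only the tagging scheme, not the authors' system.

    # Toy sketch of B/I syllable tags -> spaced sentence (the Bi-LSTM encoder is omitted).
    def apply_spacing(syllables, tags):
        out = []
        for syl, tag in zip(syllables, tags):
            if tag == "B" and out:        # start of a new word -> insert a blank before it
                out.append(" ")
            out.append(syl)
        return "".join(out)

    # "아버지가방에들어가신다" -> "아버지가 방에 들어가신다"
    syllables = list("아버지가방에들어가신다")
    tags = ["B", "I", "I", "I", "B", "I", "B", "I", "I", "I", "I"]
    print(apply_spacing(syllables, tags))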

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there is continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate figures. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, the data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products is summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as the optimized parameters for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to the KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of selected items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper order on the preprocessed dataset or by combining another algorithm, such as Jaccard similarity, with Word2Vec. The product group clustering method could also be replaced with another type of unsupervised machine learning algorithm. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed in this study.
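
The bottom-up estimation step can be sketched on toy data: product-name tokens are embedded with Word2Vec, products whose names are cosine-similar to a seed keyword are pulled into one group, and their sales are summed. The product list, seed word, and 0.6 threshold below are assumptions for illustration only; the paper uses Statistics Korea microdata, KSIC index words, a 300-dimensional embedding, and a tuned threshold.

    # Hedged sketch of the Word2Vec-based market size estimation (toy data).
    from gensim.models import Word2Vec

    products = {                      # tokenized product name -> annual sales (toy values)
        ("electric", "kettle"): 120, ("stainless", "kettle"): 80,
        ("drip", "coffee", "maker"): 200, ("espresso", "machine"): 150,
    }
    w2v = Word2Vec(sentences=[list(p) for p in products], vector_size=50,
                   window=15, min_count=1, epochs=200)

    def market_size(seed, threshold=0.6):
        total = 0
        for name, sales in products.items():
            sim = w2v.wv.n_similarity([seed], list(name))   # cosine similarity of mean vectors
            if sim >= threshold:
                total += sales
        return total

    print(market_size("kettle"))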

A Deep Learning Model for Disaster Alerts Classification

  • Park, Soonwook;Jun, Hyeyoon;Kim, Yoonsoo;Lee, Soowon
    • Journal of the Korea Society of Computer and Information / v.26 no.12 / pp.1-9 / 2021
  • Disaster alerts are text messages sent by the government to people in an area when a disaster occurs. As the number of disaster alerts has grown, more people are blocking them because many of the alerts they receive are unnecessary. To solve this problem, this study proposes a deep learning model that automatically classifies disaster alerts by disaster type so that recipients receive only the alerts they need. The proposed model embeds disaster alerts with KoBERT and classifies them by disaster type with an LSTM. Disaster alerts were classified using three part-of-speech combinations ([Noun], [Noun + Adjective + Verb], and [All parts]) and four classification models (the proposed model, keyword classification, Word2Vec + 1D-CNN, and KoBERT + FFNN); the proposed model achieved the highest accuracy, 0.988954.
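
A minimal PyTorch sketch of the "BERT encoder followed by an LSTM classifier" idea is given below. A generic multilingual BERT checkpoint is used as a stand-in for KoBERT, and the number of disaster types and all layer sizes are assumptions, so this is not the authors' implementation.

    # Hedged sketch: BERT token embeddings -> LSTM -> disaster-type logits.
    import torch
    import torch.nn as nn
    from transformers import AutoTokenizer, AutoModel

    class AlertClassifier(nn.Module):
        def __init__(self, encoder_name="bert-base-multilingual-cased", n_types=10):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(encoder_name)   # stand-in for KoBERT
            self.lstm = nn.LSTM(self.encoder.config.hidden_size, 128, batch_first=True)
            self.fc = nn.Linear(128, n_types)        # one logit per (assumed) disaster type

        def forward(self, input_ids, attention_mask):
            hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
            _, (h_n, _) = self.lstm(hidden)          # final LSTM hidden state
            return self.fc(h_n[-1])

    tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    batch = tok(["호우경보 발령, 하천변 접근을 자제하세요"], return_tensors="pt", padding=True)
    model = AlertClassifier()
    logits = model(batch["input_ids"], batch["attention_mask"])
    print(logits.shape)                              # (1, n_types)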

Application of a Topic Model on the Korea Expressway Corporation's VOC Data (한국도로공사 VOC 데이터를 이용한 토픽 모형 적용 방안)

  • Kim, Ji Won;Park, Sang Min;Park, Sungho;Jeong, Harim;Yun, Ilsoo
    • Journal of Information Technology Services / v.19 no.6 / pp.1-13 / 2020
  • Unstructured text data now accounts for roughly 80% of big data. In particular, large volumes of documents of various types are stored through social network services (SNS), blogs, and news, and the importance of unstructured data is being highlighted. As the potential uses of unstructured data grow, analysis techniques such as text mining have emerged. In this study, a topic modeling technique was therefore applied to the Korea Expressway Corporation's voice of customer (VOC) data, which contains customer opinions and complaints. Currently, VOC data is divided according to the business areas of the Korea Expressway Corporation, but the assigned categories are often inaccurate, and ambiguous cases are classified as "other". To use VOC data for efficient service improvement, a more systematic and efficient classification method is required. To this end, this study proposed two approaches: a method using only latent Dirichlet allocation (LDA), the most representative topic modeling technique, and a new method combining LDA with the word embedding technique Word2vec. The results confirmed that VOC categories are classified relatively well when the new method is used. These results suggest that implications for the Korea Expressway Corporation can be derived and used for service improvement.
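
For reference, a bare-bones LDA run with gensim on toy English tokens (standing in for preprocessed Korean VOC text) looks like the sketch below; only the plain-LDA baseline is shown, not the paper's combined LDA + Word2vec variant.

    # Minimal gensim LDA sketch on toy complaint-like documents.
    from gensim import corpora
    from gensim.models import LdaModel

    docs = [["toll", "fee", "refund", "error"],
            ["rest", "area", "toilet", "clean"],
            ["traffic", "accident", "lane", "closed"]]
    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]

    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=3, passes=20, random_state=0)
    for topic_id, words in lda.print_topics(num_words=4):
        print(topic_id, words)

    # topic distribution of a new complaint
    new_doc = dictionary.doc2bow(["toll", "refund"])
    print(lda.get_document_topics(new_doc))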

Question Similarity Measurement of Chinese Crop Diseases and Insect Pests Based on Mixed Information Extraction

  • Zhou, Han;Guo, Xuchao;Liu, Chengqi;Tang, Zhan;Lu, Shuhan;Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.3991-4010 / 2021
  • The Question Similarity Measurement of Chinese Crop Diseases and Insect Pests (QSM-CCD&IP) aims to judge the intent behind users' input questions. The measurement is the basis of agricultural knowledge question and answering (Q&A) systems, information retrieval, and other tasks. However, the corpora and measurement methods available in this field have some deficiencies, and error propagation may occur when general methods embed sentences while ignoring word-boundary features and local context information. These factors make the task challenging. To solve these problems and tackle the question similarity measurement task, this work establishes a corpus on Chinese crop diseases and insect pests (CCDIP) containing 13 categories. Taking the CCDIP as the research object, this study then proposes a Chinese agricultural text similarity matching model, the AgrCQS, based on mixed information extraction. Specifically, the hybrid embedding layer enriches character information and improves the model's ability to recognize word boundaries; multi-scale local information is extracted by a multi-core convolutional neural network based on multi-weight (MM-CNN); and a self-attention mechanism enhances the model's ability to fuse global information. The performance of the AgrCQS is verified on the CCDIP and on three benchmark datasets, namely AFQMC, LCQMC, and BQ, with accuracy rates of 93.92%, 74.42%, 86.35%, and 83.05%, respectively, which are higher than those of baseline systems, without using any external knowledge. Additionally, the proposed modules can be extracted separately and applied to other models, providing a reference for related research.
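
A heavily reduced PyTorch sketch of the ingredients named above (character embeddings, multi-kernel convolutions for local context, self-attention for global context, cosine similarity between sentence vectors) follows; all dimensions and the pooling choice are assumptions, and this is not the AgrCQS architecture itself.

    # Hedged sketch: multi-kernel CNN + self-attention sentence encoder for similarity.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SentenceEncoder(nn.Module):
        def __init__(self, vocab_size=5000, emb=64, n_filters=32, kernels=(2, 3, 4)):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb)
            self.convs = nn.ModuleList(
                [nn.Conv1d(emb, n_filters, k, padding=k // 2) for k in kernels])
            dim = n_filters * len(kernels)
            self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

        def forward(self, ids):                       # ids: (batch, seq_len)
            x = self.emb(ids).transpose(1, 2)         # (batch, emb, seq_len)
            feats = [F.relu(c(x)) for c in self.convs]
            L = min(f.size(2) for f in feats)         # align lengths across kernel sizes
            h = torch.cat([f[:, :, :L] for f in feats], dim=1).transpose(1, 2)
            h, _ = self.attn(h, h, h)                 # self-attention over positions
            return h.mean(dim=1)                      # pooled sentence vector

    enc = SentenceEncoder()
    q1 = torch.randint(0, 5000, (1, 20))              # two toy character-id sequences
    q2 = torch.randint(0, 5000, (1, 18))
    print(F.cosine_similarity(enc(q1), enc(q2)).item())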

Korean Entity Recognition System using Bi-directional LSTM-CNN-CRF (Bi-directional LSTM-CNN-CRF를 이용한 한국어 개체명 인식 시스템)

  • Lee, Dong-Yub;Lim, Heui-Seok
    • Annual Conference on Human and Language Technology / 2017.10a / pp.327-329 / 2017
  • A named entity recognition (NER) system identifies words or phrases in a document that denote entities such as persons (PS), locations (LC), or organizations (OG) and labels them with the corresponding entity type. To build such a system, we propose a feature set consisting of deep learning based word embeddings, morphological features of sentences, and features from a pre-built lexicon, and we propose learning these features with models such as a bi-directional LSTM, a CNN, and a CRF. The experiments used the 2016klpNER data provided by the 2017 Korean Language Information System Competition. Of the 4,258 sentences in total, 3,406 were used for training, 426 for validation, and 426 for testing. Evaluated at the entity chunk level with BIO tagging, the proposed model achieved a test accuracy of 98.9% and an F1-score of 89.4%.
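
A minimal Bi-LSTM sequence tagger for BIO-style tags, which the model above extends with CNN character features and a CRF output layer, can be sketched as follows; vocabulary size, tag-set size, and the greedy decoding are assumptions made to keep the example dependency-free.

    # Hedged sketch: Bi-LSTM emissions for BIO tagging (CNN and CRF layers omitted).
    import torch
    import torch.nn as nn

    class BiLSTMTagger(nn.Module):
        def __init__(self, vocab_size=2000, emb=64, hidden=128, n_tags=7):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb)
            self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(hidden * 2, n_tags)   # emission scores per tag

        def forward(self, ids):                        # ids: (batch, seq_len)
            h, _ = self.lstm(self.emb(ids))
            return self.out(h)                         # (batch, seq_len, n_tags)

    tagger = BiLSTMTagger()
    tokens = torch.randint(0, 2000, (1, 12))           # one toy 12-token sentence
    emissions = tagger(tokens)
    pred_tags = emissions.argmax(dim=-1)               # greedy decoding (a CRF would replace this)
    print(pred_tags)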

Korean Phoneme Sequence based Word Embedding (한국어 음소열 기반 워드 임베딩 기술)

  • Chung, Euisok;Jeon, Hwa Jeon;Lee, Sung Joo;Park, Jeon-Gue
    • Annual Conference on Human and Language Technology / 2017.10a / pp.225-227 / 2017
  • This paper addresses subword-based word embedding for Korean. To apply to Korean a new word embedding technique that can replace existing word embeddings suffering from the out-of-vocabulary problem, we validate phoneme-sequence-based subword features. Existing subword features use character n-grams; in Korean, however, the pronunciation of certain single syllables differs depending on the word, so phoneme-sequence n-grams have the advantage of preserving the discriminative power of such subword features. We reimplement the subword embedding technique and, in an English setting, show performance gains over an existing word embedding baseline. Experiments with Korean phoneme-sequence features also show that semantically more similar words are placed closer together in the vector space.
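
The phoneme (jamo) n-gram idea can be illustrated directly: decompose each Hangul syllable into its jamo via the Unicode formula and collect jamo n-grams as subword features, in the spirit of fastText's character n-grams. The sketch below covers only this feature-extraction step, not the embedding training, and is an assumption-level illustration rather than the authors' code.

    # Hedged sketch: Hangul syllables -> jamo sequence -> jamo n-gram subword features.
    CHO = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
    JUNG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
    JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

    def to_jamo(word):
        jamo = []
        for ch in word:
            code = ord(ch) - 0xAC00
            if 0 <= code < 11172:                      # Hangul syllable block
                cho, rest = divmod(code, 588)          # 588 = 21 * 28
                jung, jong = divmod(rest, 28)
                jamo += [CHO[cho], JUNG[jung]] + ([JONG[jong]] if JONG[jong] else [])
            else:
                jamo.append(ch)
        return jamo

    def jamo_ngrams(word, n=3):
        j = ["<"] + to_jamo(word) + [">"]              # boundary symbols as in fastText
        return ["".join(j[i:i + n]) for i in range(len(j) - n + 1)]

    print(jamo_ngrams("학교"))    # ['<ㅎㅏ', 'ㅎㅏㄱ', 'ㅏㄱㄱ', 'ㄱㄱㅛ', 'ㄱㅛ>']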

A typing error-robust Korean POS tagging using Hangul Jamo combination-based embedding (오타에 강건한 자모 조합 임베딩 기반 한국어 품사 태깅)

  • Seo, Dae-Ryong;Chung, Youjin;Kang, Inho
    • Annual Conference on Human and Language Technology / 2017.10a / pp.203-208 / 2017
  • This paper describes how to build a Korean part-of-speech (POS) tagging system that is robust to typing errors by using Hangul jamo combination-based embeddings. With the recent surge in deep learning research, studies increasingly use deep learning models that learn features on their own rather than conventional machine learning methods that require hand-crafted features. Among the various deep learning models, we use bidirectional LSTM-CRFs, which are well suited to sequence labeling. Syllable embeddings commonly used for Korean POS tagging have the limitation that tagging performance drops sharply even with minor typos. To address this, we propose a syllable embedding scheme that combines the embeddings of the constituent Hangul jamo. On a test set with artificially injected typos, the jamo combination embedding outperformed word2vec syllable embeddings by 0.9% on morpheme segmentation and 3.5% on POS tagging.
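
A small PyTorch sketch of the jamo combination idea is given below: each syllable's Unicode code point is split into initial, medial, and final jamo indices, and the three jamo vectors are concatenated into a syllable embedding, so a typo that changes one jamo still shares most of its representation with the intended syllable. The dimensions are assumptions, and the bidirectional LSTM-CRF tagger on top is not shown.

    # Hedged sketch: jamo combination-based syllable embedding.
    import torch
    import torch.nn as nn

    class JamoSyllableEmbedding(nn.Module):
        def __init__(self, dim=32):
            super().__init__()
            self.cho = nn.Embedding(19, dim)    # initial consonants
            self.jung = nn.Embedding(21, dim)   # vowels
            self.jong = nn.Embedding(28, dim)   # final consonants (index 0 = none)

        def forward(self, syllable_codes):      # Unicode code points of Hangul syllables
            idx = syllable_codes - 0xAC00
            cho = idx // 588                    # 588 = 21 * 28
            jung = (idx % 588) // 28
            jong = idx % 28
            return torch.cat([self.cho(cho), self.jung(jung), self.jong(jong)], dim=-1)

    emb = JamoSyllableEmbedding()
    codes = torch.tensor([[ord(c) for c in "자연어처리"]])   # (batch=1, 5 syllables)
    print(emb(codes).shape)                                  # torch.Size([1, 5, 96])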

Query Expansion of Video Retrieval System using Domain Ontology and Word Embedding (도메인 온톨로지와 워드임베딩을 활용한 영상검색 시스템의 질의어 확장)

  • Hahm, Gyeong-June;Kwak, Chang-Uk;Kim, Sun-Joong
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.06a / pp.245-247 / 2017
  • The number of individual users who create new content by reusing and combining existing content is growing. To make use of video content that contains what a user is looking for, a video retrieval system that supports this is needed. However, because existing video retrieval systems are based on keyword matching, they often fail to find the videos users want. To solve this problem, this study presents a query expansion method that expresses the user's search intent more precisely. The proposed method uses a domain ontology and word embedding results to add words that are semantically closely related to the query, and retrieval is then performed with the expanded query, allowing users to obtain more satisfactory search results. We evaluated the proposed method by demonstrating the query expansion process with the implemented system.
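
The expansion step can be illustrated with a toy ontology dictionary and a small Word2Vec model: terms that the ontology links to the query and the query's nearest embedding neighbors are added to the query set before retrieval. Everything below (ontology entries, corpus, topn) is an illustrative assumption, not the implemented system.

    # Hedged sketch: query expansion via a toy domain ontology + Word2Vec neighbors.
    from gensim.models import Word2Vec

    ontology = {"soccer": ["world cup", "goal", "striker"]}          # toy domain ontology
    corpus = [["soccer", "match", "goal", "stadium"],
              ["baseball", "pitcher", "stadium"],
              ["soccer", "striker", "goal"]]
    w2v = Word2Vec(sentences=corpus, vector_size=50, min_count=1, epochs=100)

    def expand_query(query, topn=3):
        terms = {query}
        terms.update(ontology.get(query, []))                        # ontology-based expansion
        terms.update(w for w, _ in w2v.wv.most_similar(query, topn=topn))  # embedding-based
        return terms

    print(expand_query("soccer"))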
