• Title/Summary/Keyword: LSTM language model

82 search results

Sentiment Analysis for Korean Product Review Using Stacked Bi-LSTM-CRF Model (Stacked Bi-LSTM-CRF 모델을 이용한 한국어 상품평 감성 분석)

  • Youn, Jun Young;Park, Jung Ju;Kim, Do Won;Min, Tae Hong;Lee, Jae Sung
    • Annual Conference on Human and Language Technology / 2018.10a / pp.633-635 / 2018
  • Sentiment analysis research that uses social commerce data to investigate consumer demand for and preferences about products has recently become very active. In this study, we use a Stacked Bi-LSTM-CRF model to perform word-level sentiment analysis of the compositionally complex sentiment expressions of Korean, and we propose a method that extracts detailed product topics (features, keywords of interest, etc.) so that sentiment can be analyzed per topic.

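The abstract names the architecture (stacked bidirectional LSTM layers with a CRF decoding layer over morpheme-level sentiment tags) but gives no code. The following is a minimal sketch of that general pattern, assuming PyTorch and the third-party pytorch-crf package; the vocabulary size, tag set, and layer sizes are placeholders, not the authors' settings.

```python
# Minimal sketch of a stacked Bi-LSTM-CRF sequence tagger (PyTorch + pytorch-crf).
# Hyperparameters and the tag set are illustrative placeholders, not the paper's.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf


class StackedBiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # num_layers=2 stacks two bidirectional LSTM layers.
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=layers,
                            bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, num_tags)   # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)    # transition scores + Viterbi decoding

    def forward(self, tokens, tags=None, mask=None):
        emissions = self.proj(self.lstm(self.embed(tokens))[0])
        if tags is not None:                           # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask)
        return self.crf.decode(emissions, mask=mask)   # inference: best tag sequence


model = StackedBiLSTMCRF(vocab_size=20000, num_tags=5)  # e.g. B/I sentiment span tags plus O
```

The CRF layer is what lets the tagger score whole tag sequences rather than tokens in isolation, which is why this family of models is common for span-level sentiment and NER tasks.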

A Study on the Performance Analysis of Entity Name Recognition Techniques Using Korean Patent Literature

  • Gim, Jangwon
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.139-151 / 2020
  • Entity name recognition is the part of information extraction that extracts entity names from documents and classifies the types of the extracted names. It is widely used in natural language processing applications such as information retrieval, machine translation, and question answering. Various deep learning models have been proposed to improve entity name recognition, but studies comparing and analyzing them on Korean data remain scarce. In this paper, we compare the performance of CRF, LSTM-CRF, BiLSTM-CRF, and BERT, which are actively used for entity name recognition, on Korean data. We also evaluate whether the embedding models widely used in recent natural language processing tasks affect recognition performance. Experiments on patent data and a Korean corpus confirm that a BiLSTM-CRF model with FastText embeddings achieves the highest performance.
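
The paper reports results but not code; a rough sketch of the embedding step it describes (FastText vectors feeding a BiLSTM-CRF tagger) is shown below, assuming the gensim FastText implementation. The toy corpus, dimensions, and vocabulary handling are illustrative only; the tagger itself would follow the BiLSTM-CRF pattern sketched for the first result above.

```python
# Sketch: building a FastText embedding matrix to initialize a BiLSTM-CRF NER tagger.
# The corpus here is a toy stand-in; in practice the tokenized patent corpus is used.
import numpy as np
from gensim.models import FastText

sentences = [["본", "발명", "은", "리튬", "이차", "전지", "에", "관한", "것", "이다"],
             ["상기", "전지", "는", "양극", "과", "음극", "을", "포함", "한다"]]
ft = FastText(sentences=sentences, vector_size=300, window=5, min_count=1, epochs=10)

vocab = {tok: i + 1 for i, tok in enumerate(ft.wv.index_to_key)}   # index 0 reserved for padding
emb_matrix = np.zeros((len(vocab) + 1, ft.vector_size), dtype=np.float32)
for tok, idx in vocab.items():
    emb_matrix[idx] = ft.wv[tok]   # subword n-grams also yield vectors for unseen tokens

# emb_matrix can now initialize the Embedding layer of the tagger.
```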

A Fuzzy-AHP-based Movie Recommendation System using the GRU Language Model (GRU 언어 모델을 이용한 Fuzzy-AHP 기반 영화 추천 시스템)

  • Oh, Jae-Taek;Lee, Sang-Yong
    • Journal of Digital Convergence / v.19 no.8 / pp.319-325 / 2021
  • With the advancement of wireless technology and the rapid growth of mobile communication infrastructure, systems built on AI-based platforms are drawing attention from users. In particular, systems that understand users' tastes and interests and recommend preferred items are applied to advanced e-commerce personalization and smart homes. However, such recommendation systems have difficulty reflecting the varied tastes and interests of users in real time. To address this problem, we propose a Fuzzy-AHP-based movie recommendation system that uses a Gated Recurrent Unit (GRU) language model. The system applies Fuzzy-AHP to reflect users' tastes and interests in real time, and applies GRU language models to analyze public interest and film content in order to recommend movies similar to the factors the user prefers. To validate the system, we measured the suitability of the learning model on the scraped data used in the learning module and compared learning performance against a Long Short-Term Memory (LSTM) language model in terms of training time per epoch. The results show that the average cross-validation index of the learning model is a suitable 94.8% and that its learning performance outperforms the LSTM language model.
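
The Fuzzy-AHP component is a separate, non-neural weighting step, but the GRU language-model side can be pictured roughly as a next-token model. A minimal Keras sketch under assumed vocabulary size, sequence length, and layer sizes (not the authors' configuration):

```python
# Minimal sketch of a GRU next-word language model in tf.keras.
# Vocabulary size, sequence length, and layer sizes are illustrative assumptions.
import tensorflow as tf

VOCAB, SEQ_LEN = 30000, 40
model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(VOCAB, 128),
    tf.keras.layers.GRU(256),                             # gated recurrent unit encoder
    tf.keras.layers.Dense(VOCAB, activation="softmax"),   # next-token distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

Swapping the GRU layer for an LSTM layer of the same size is the kind of like-for-like change behind the paper's per-epoch training-time comparison.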

Swearword Detection Method Considering Meaning of Words and Sentences (단어와 문장의 의미를 고려한 비속어 판별 방법)

  • Yi, Moung Ho;Lim, Myung Jin;Shin, Ju Hyun
    • Smart Media Journal / v.9 no.3 / pp.98-106 / 2020
  • As the number of Internet users grows, the use of swearwords is increasing indiscriminately. As a result, cyber violence among teenagers is becoming very serious, and cyber language violence is its most serious form. Research on swearword detection has been conducted to curb cyber language violence, but methods that judge swearwords from the meaning of words and the flow of context are still insufficient. Therefore, in this paper, we propose a swearword detection method that uses a FastText model and an LSTM model so that deliberately disguised swearwords can be accurately distinguished from standard language by considering the flow of context.
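
The abstract names FastText (whose subword n-grams give vectors even for deliberately altered spellings) plus an LSTM classifier, without implementation details. A rough sketch under assumed settings, with a toy corpus standing in for real comment data:

```python
# Sketch: FastText subword vectors feeding an LSTM sentence classifier.
# The toy corpus, dimensions, and sequence length are illustrative, not the paper's settings.
import numpy as np
import tensorflow as tf
from gensim.models import FastText

corpus = [["이", "게임", "진짜", "재밌다"], ["아", "진짜", "짜증나", "네"]]   # toy tokenized comments
ft = FastText(sentences=corpus, vector_size=64, window=3, min_count=1, epochs=20)

def vectorize(tokens, max_len=20):
    """Map tokens to FastText vectors; subword n-grams cover altered spellings."""
    vecs = np.zeros((max_len, ft.vector_size), dtype=np.float32)
    for i, tok in enumerate(tokens[:max_len]):
        vecs[i] = ft.wv[tok]   # returns a vector even for tokens unseen in training
    return vecs

clf = tf.keras.Sequential([
    tf.keras.Input(shape=(20, 64)),
    tf.keras.layers.LSTM(64),                          # reads the comment in context
    tf.keras.layers.Dense(1, activation="sigmoid"),    # probability the comment contains swearing
])
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```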

Emotion Analysis Using a Bidirectional LSTM for Word Sense Disambiguation (양방향 LSTM을 적용한 단어의미 중의성 해소 감정분석)

  • Ki, Ho-Yeon;Shin, Kyung-shik
    • The Journal of Bigdata / v.5 no.1 / pp.197-208 / 2020
  • Lexical ambiguity means that a word can be interpreted with two or more meanings, as with homonyms and polysemy, and word sense ambiguity is common in words that express emotion. Because such words project human psychology, they carry specific and rich contexts, which gives rise to lexical ambiguity. In this study, we propose an emotion classification model that disambiguates word senses using a bidirectional LSTM. It is based on the assumption that if the surrounding context is fully reflected, lexical ambiguity can be resolved and the emotion a sentence intends to express can be identified as a single one. The bidirectional LSTM is frequently used in natural language processing research that requires contextual information and is used here to learn context. GloVe embeddings serve as the embedding layer of the model, and its performance was verified against models using plain LSTM and RNN algorithms. Such a framework could contribute to various fields, including marketing, by connecting the emotions of SNS users to their desire for consumption.
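
A minimal Keras sketch of the architecture the abstract describes: a frozen GloVe embedding layer followed by a bidirectional LSTM and a softmax emotion classifier. The embedding matrix here is a random placeholder, and the vocabulary, sequence length, and class count are assumptions for illustration.

```python
# Sketch: bidirectional LSTM emotion classifier over pre-trained GloVe embeddings.
# `embedding_matrix` is a placeholder; in practice its rows come from a GloVe file.
import numpy as np
import tensorflow as tf

VOCAB, DIM, SEQ_LEN, NUM_EMOTIONS = 20000, 100, 50, 6
embedding_matrix = np.random.rand(VOCAB, DIM).astype("float32")   # stand-in for real GloVe rows

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(
        VOCAB, DIM,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False,                                  # keep the GloVe vectors fixed
    ),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),   # context from both directions
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

Reading the sentence in both directions is what lets the surrounding context, not just the preceding words, pin down the sense of an ambiguous emotion word.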

Design of LSTM-based Model for Extracting Relative Temporal Relations for Korean Texts (한국어 상대시간관계 추출을 위한 LSTM 기반 모델 설계)

  • Lim, Chae-Gyun;Jeong, Young-Seob;Lee, Young Jun;Oh, Kyo-Joong;Choi, Ho-Jin
    • Annual Conference on Human and Language Technology / 2017.10a / pp.301-304 / 2017
  • Temporal information extraction plays an important role in understanding the context and situation of a dialogue from natural language sentences and in providing services suited to the user's intent, but because of the linguistic characteristics peculiar to Korean, temporal relations between entities tend to be difficult to recognize accurately in Korean text. In particular, relative temporal relations between time expressions and events are an important concept for understanding temporal context systematically. In this paper, we propose an LSTM (long short-term memory) based model for extracting relative temporal relations between time expressions and events in Korean sentences. Temporal information extraction comprises three subtasks, TIMEX3, EVENT, and TLINK extraction, but this paper targets only TLINK extraction, assuming that the TIMEX3 and EVENT entities of a given sentence are already provided. We also compare the relative temporal relation extraction performance of the proposed LSTM-based models on a manually annotated Korean temporal information corpus.

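Since the abstract frames the task as classifying the TLINK relation between an already-identified TIMEX3 and EVENT pair, one plausible (but not the authors') formulation is a sentence classifier over text in which the two entities are wrapped in marker tokens. A minimal Keras sketch with an assumed relation label set:

```python
# Sketch: LSTM classifier for the relative temporal relation (TLINK) between a marked
# EVENT and TIMEX3 pair. The label set and marker scheme are illustrative assumptions.
import tensorflow as tf

RELATIONS = ["BEFORE", "AFTER", "OVERLAP", "VAGUE"]   # assumed TLINK label set
VOCAB, SEQ_LEN = 30000, 60                            # sentence with <e>...</e> / <t>...</t> markers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(VOCAB, 128, mask_zero=True),
    tf.keras.layers.LSTM(128),                                    # encodes the marked sentence
    tf.keras.layers.Dense(len(RELATIONS), activation="softmax"),  # relation between the pair
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```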

Development of Block-based Code Generation and Recommendation Model Using Natural Language Processing Model (자연어 처리 모델을 활용한 블록 코드 생성 및 추천 모델 개발)

  • Jeon, In-seong;Song, Ki-Sang
    • Journal of The Korean Association of Information Education / v.26 no.3 / pp.197-207 / 2022
  • In this paper, we develop a machine learning based block code generation and recommendation model intended to reduce learners' cognitive load during coding education. Using a natural language processing model with fine-tuning, the model learns the block code that a learner has built in a block programming environment and then generates and recommends the blocks selectable in the next step. To develop the model, a training dataset was produced by pre-processing 50 block codes published on 'Entry', a popular block programming web site. After dividing the pre-processed blocks into training, validation, and test sets, we built block code generation models based on LSTM, Seq2Seq, and GPT-2. In the performance evaluation, GPT-2 outperformed the LSTM and Seq2Seq models on the BLEU and ROUGE scores, which measure sentence similarity. For the sequences generated by the GPT-2 model, BLEU and ROUGE scores were relatively similar except when the number of blocks was 1 or 17.
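
The evaluation described here (comparing generated block sequences against references with BLEU and ROUGE) can be reproduced in outline with NLTK's sentence-level BLEU. The block-token sequences below are made-up placeholders, not data from the paper, and ROUGE would come from a separate package such as rouge-score.

```python
# Sketch: scoring a generated block sequence against a reference with sentence-level BLEU.
# The block-token sequences are invented placeholders for illustration only.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["when_run_button_click", "move_direction", "repeat_basic", "move_direction"]]
generated = ["when_run_button_click", "move_direction", "repeat_basic", "play_sound"]

smooth = SmoothingFunction().method1          # avoids zero scores on short sequences
score = sentence_bleu(reference, generated, weights=(0.5, 0.5), smoothing_function=smooth)
print(f"BLEU: {score:.3f}")
```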

Research on Methods to Increase Recognition Rate of Korean Sign Language using Deep Learning

  • So-Young Kwon;Yong-Hwan Lee
    • Journal of Platform Technology / v.12 no.1 / pp.3-11 / 2024
  • Deaf people who use sign language as their first language sometimes have difficulty communicating because they do not know spoken Korean. Deaf people are also members of society, so we must provide support to create a society where everyone can live together. In this paper, we present a method to increase the recognition rate of Korean sign language using a CNN model. When the original image was used as input to the CNN model, the accuracy was 0.96, and when the image restricted to the skin region in the YCbCr color space was used as input, the accuracy was 0.72, confirming that using the original image itself as input gives better results. In other studies, a combined Conv1d and LSTM model reached an accuracy of 0.92, as did an AlexNet model. The CNN model proposed in this paper reaches 0.96 and is shown to be helpful for recognizing Korean sign language.

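A minimal Keras sketch of a CNN image classifier of the kind evaluated above on original sign-language frames; the input resolution, filter counts, and number of sign classes are assumptions rather than the paper's configuration.

```python
# Sketch: small CNN classifier for sign-language images (original RGB frames as input).
# Image size, filter counts, and the number of sign classes are illustrative assumptions.
import tensorflow as tf

NUM_SIGNS = 30
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),   # one class per sign
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```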

Text Classification on Social Network Platforms Based on Deep Learning Models

  • YA, Chen;Tan, Juan;Hoekyung, Jung
    • Journal of information and communication convergence engineering / v.21 no.1 / pp.9-16 / 2023
  • Natural language on social network platforms has front-to-back structural dependencies, and converting Chinese text directly into a vector makes the dimensionality very high, which lowers the accuracy of existing text classification methods. To this end, this study builds a deep learning model that combines a big data ultra-deep convolutional neural network (UDCNN) and a long short-term memory network (LSTM). The deep structure of the UDCNN extracts features for text vector classification, the LSTM stores historical information to capture the context dependencies of long texts, and word embedding is introduced to convert the text into low-dimensional vectors. Experiments are conducted on the Sogou corpus from social network platforms and the University HowNet Chinese corpus. The results show that, compared with CNN + rand, LSTM, and other models, the hybrid deep learning model effectively improves text classification accuracy.
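
The combination the abstract describes (convolutional feature extraction over low-dimensional word embeddings plus an LSTM for long-range context) can be sketched roughly as follows in Keras. The stack here is far shallower than an "ultra-deep" CNN, and every size and the class count are assumptions.

```python
# Sketch: hybrid convolutional + LSTM text classifier over word embeddings.
# A shallow stand-in for the UDCNN part; all sizes and the class count are assumptions.
import tensorflow as tf

VOCAB, SEQ_LEN, NUM_CLASSES = 50000, 200, 10
model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(VOCAB, 128),                # low-dimensional word vectors
    tf.keras.layers.Conv1D(128, 3, activation="relu"),    # local n-gram features
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(128, 3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(128),                             # long-range context over conv features
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```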

Deep learning model that considers the long-term dependency of natural language (자연 언어의 장기 의존성을 고려한 심층 학습 모델)

  • Park, Chan-Yong;Choi, Ho-Jin
    • Annual Conference on Human and Language Technology / 2018.10a / pp.281-284 / 2018
  • In this paper, we propose a new network that addresses the problems of the conventional long short-term memory (LSTM) model in machine reading. The conventional LSTM model has two main limitations. The first is that there is no way to restore important contextual information once it has been discarded by the forget gate; since the meaning of the current word in natural language can depend heavily on past context, retaining the past context needed for correct sentence understanding is essential. The second is that, while natural language sentences are built from complex structural relations between words, conventional sequence models have no direct mechanism for inferring relations between words. We address these problems with a network that combines the attention mechanism widely used in recent deep learning research with a restore gate proposed in this paper. In our experiments, we confirm the superiority of the proposed model through comparison with other sequence models.

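The restore gate is this paper's own contribution and is not reproduced here, but the attention half of the idea (re-weighting earlier LSTM states so past context can influence the current reading step) can be sketched generically. This is standard dot-product self-attention over LSTM outputs, under assumed sizes, not the authors' network.

```python
# Sketch: dot-product self-attention over LSTM outputs, letting earlier context be
# re-weighted at read time. Generic illustration only; the paper's restore gate is
# its own mechanism and is not implemented here.
import tensorflow as tf

VOCAB, SEQ_LEN = 30000, 100
inputs = tf.keras.Input(shape=(SEQ_LEN,))
x = tf.keras.layers.Embedding(VOCAB, 128, mask_zero=True)(inputs)
states = tf.keras.layers.LSTM(128, return_sequences=True)(x)    # per-token hidden states
context = tf.keras.layers.Attention()([states, states])          # attend over all past/future states
pooled = tf.keras.layers.GlobalAveragePooling1D()(context)
output = tf.keras.layers.Dense(1, activation="sigmoid")(pooled)   # e.g. a relevance/answer score
model = tf.keras.Model(inputs, output)
model.compile(optimizer="adam", loss="binary_crossentropy")
```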