• Title/Summary/Keyword: perplexity


Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. The simulation study was conducted on Old Testament texts using the deep learning package Keras with the Theano backend. After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following (21st) character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm also took the longest to train for both the 3- and 4-LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity did not improve significantly and even worsened under some conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for processing Korean in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
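As a rough illustration of the setup described in this abstract, the sketch below builds a character-level LSTM language model in Keras with a 20-character context, a 74-symbol vocabulary, and stacked LSTM layers, and derives perplexity as the exponential of the average cross-entropy loss. The embedding and hidden sizes are assumptions, and a modern TensorFlow/Keras stack stands in for the paper's Theano backend.

```python
# Minimal character (phoneme)-level LSTM language model in the spirit of the
# abstract: a 20-character context predicts the 21st character.
# Embedding and hidden sizes are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 74   # unique characters reported in the abstract
SEQ_LEN = 20      # context length of each input vector

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, 64),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),   # three LSTM layers; add one more for the 4-layer variant
    layers.Dense(VOCAB_SIZE, activation="softmax"),
])
model.compile(optimizer="adam",   # the paper also compares SGD, Adagrad, RMSprop, ...
              loss="sparse_categorical_crossentropy")

def perplexity(model, x, y):
    """Perplexity is the exponential of the average cross-entropy (in nats)."""
    loss = model.evaluate(x, y, verbose=0)
    return float(np.exp(loss))
```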

Sentence-Chain Based Seq2seq Model for Corpus Expansion

  • Chung, Euisok;Park, Jeon Gue
    • ETRI Journal / v.39 no.4 / pp.455-466 / 2017
  • This study focuses on a method for sequential data augmentation to alleviate data sparseness problems. Specifically, we present corpus expansion techniques for enhancing the coverage of a language model. Recent recurrent neural network studies show that a seq2seq model can be applied to language generation issues; it has the ability to generate new sentences from given input sentences. We present a method of corpus expansion using a sentence-chain based seq2seq model. For training the seq2seq model, sentence chains are used as triples: the first two sentences in a triple feed the encoder of the seq2seq model, while the last sentence becomes the target sequence for the decoder. Using only internal resources, evaluation results show an improvement of approximately 7.6% in relative perplexity over a baseline language model of Korean text. Additionally, compared with a previous study, the sentence-chain approach reduces the size of the training data by 38.4% while generating 1.4 times the number of n-grams, with superior performance for English text.
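A minimal sketch of how sentence-chain triples of the kind described above might be assembled from consecutive sentences. The abstract does not give the paper's exact chain-construction rule, so the sliding window used here is an assumption.

```python
# Sketch of building sentence-chain triples from a document (assumption:
# consecutive sentences form a chain; the paper's construction may differ).
def sentence_chain_triples(sentences):
    """Yield (encoder_input, decoder_target) pairs: the first two sentences of
    each chain feed the encoder, the third is the decoder's target sequence."""
    for i in range(len(sentences) - 2):
        encoder_input = sentences[i] + " " + sentences[i + 1]
        decoder_target = sentences[i + 2]
        yield encoder_input, decoder_target

doc = ["문장 하나 .", "문장 둘 .", "문장 셋 .", "문장 넷 ."]
for src, tgt in sentence_chain_triples(doc):
    print(src, "=>", tgt)
```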

Automatic Generating Stopword Methods for Improving Topic Model (토픽모델의 성능 향상을 위한 불용어 자동 생성 기법)

  • Lee, Jung-Been;In, Hoh Peter
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.869-872 / 2017
  • Stopword removal, one of the preprocessing steps applied to the unstructured natural-language data collected for information retrieval and text analysis, is one of the easiest and most effective ways to improve model quality. In particular, for topic modeling, a technique for extracting latent topics from diverse text documents, removing stopwords that are outdated or unrelated to the domain or character of the collected documents lowers the coherence of the topic words learned by the model, making it very difficult for analysts to interpret the resulting topics correctly. To address this problem, this paper proposes a technique that automatically generates stopwords using pointwise mutual information (PMI) extracted from documents of the relevant domain, instead of the commonly used standard stopword lists. Measuring topic model quality with perplexity for both the generated and the standard stopwords, the 30 stopwords generated by the proposed technique yielded better model performance than the 421 standard stopwords.
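A hypothetical sketch of PMI-based stopword generation in the spirit of this abstract. The paper's exact PMI formulation is not given here, so the term-versus-domain scoring below, where terms with the lowest PMI against the target domain become stopword candidates, is an assumption.

```python
# Hypothetical PMI-based stopword candidates: PMI is computed between a term
# and the target domain, using a general background corpus as contrast.
import math
from collections import Counter

def pmi_stopwords(domain_tokens, background_tokens, n_stopwords=30):
    domain = Counter(domain_tokens)
    background = Counter(background_tokens)
    total_d, total_b = sum(domain.values()), sum(background.values())
    total = total_d + total_b
    p_domain = total_d / total
    scores = {}
    for term, count in domain.items():
        p_term = (count + background[term]) / total   # P(term) over both corpora
        p_joint = count / total                       # P(term, domain)
        scores[term] = math.log(p_joint / (p_term * p_domain))
    # Terms with the lowest PMI are weakly associated with the domain:
    # candidate stopwords.
    return sorted(scores, key=scores.get)[:n_stopwords]
```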

A Corpus Selection Based Approach to Language Modeling for Large Vocabulary Continuous Speech Recognition (대용량 연속 음성 인식 시스템에서의 코퍼스 선별 방법에 의한 언어모델 설계)

  • Oh, Yoo-Rhee;Yoon, Jae-Sam;Kim, Hong-Kook
    • Proceedings of the KSPS conference / 2005.11a / pp.103-106 / 2005
  • In this paper, we propose a language modeling approach to improve the performance of a large vocabulary continuous speech recognition system. The proposed approach is based on the active learning framework, which helps select a text corpus from the large amount of text data required for language modeling. Perplexity is used as the measure for corpus selection in the active learning. In recognition experiments on continuous Korean speech, the speech recognition system employing the language model built by the proposed approach reduced the word error rate by about 6.6%, with less computational complexity than a system using a language model constructed from randomly selected texts.
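A minimal sketch of perplexity-driven corpus selection. The abstract does not specify the selection direction, so treating the highest-perplexity candidates as the most informative is an assumption, and the scoring function below is a placeholder for a real language-model perplexity measure.

```python
# Hypothetical perplexity-driven corpus selection for language-model training.
# score_fn should return the current LM's perplexity on a candidate text; the
# direction (highest perplexity = most informative) is an assumption.
def select_corpus(candidates, score_fn, budget):
    ranked = sorted(candidates, key=score_fn, reverse=True)
    return ranked[:budget]

# Toy usage; len() merely stands in for a real perplexity function.
texts = ["문장 A", "조금 더 긴 문장 B", "아주 많이 긴 문장 C"]
selected = select_corpus(texts, score_fn=len, budget=2)
```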


Analyzing Customer Experience in Hotel Services Using Topic Modeling

  • Nguyen, Van-Ho;Ho, Thanh
    • Journal of Information Processing Systems / v.17 no.3 / pp.586-598 / 2021
  • Nowadays, users' reviews and feedback on e-commerce sites, stored as text, create a huge source of information for analyzing customers' experience with the goods and services provided by a business. In other words, collecting and analyzing this information is necessary to better understand customer needs. In this study, we first collected a corpus of 99,322 customer comments and opinions in English. From this corpus we chose the best number of topics (K), using perplexity and coherence score measurements as the input parameters for the model. Finally, we conducted an experiment using the latent Dirichlet allocation (LDA) topic model with the chosen K to explore the topics. The model found hidden topics and high-probability keyword sets that are interesting to users. The application of the empirical results from the model will support decision-making to help businesses improve products and services as well as business management and development in the field of hotel services.
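A short sketch of the K-selection loop described above, assuming gensim's LdaModel and CoherenceModel (the paper does not name its toolkit). The documents and the range of K are toy placeholders.

```python
# Choosing the number of topics K by comparing perplexity and coherence
# across candidate values (gensim is an assumed toolkit, not the paper's).
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

texts = [["room", "clean", "staff"],
         ["breakfast", "price", "location"],
         ["staff", "friendly", "breakfast"]]   # toy documents
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

for k in range(2, 6):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
    bound = lda.log_perplexity(corpus)   # per-word likelihood bound; perplexity = 2 ** (-bound)
    coherence = CoherenceModel(model=lda, texts=texts,
                               dictionary=dictionary, coherence="c_v").get_coherence()
    print(k, bound, coherence)
```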

HeavyRoBERTa: Pretrained Language Model for Heavy Industry (HeavyRoBERTa: 중공업 특화 사전 학습 언어 모델)

  • Lee, Jeong-Doo;Na, Seung-Hoon
    • Annual Conference on Human and Language Technology / 2021.10a / pp.602-604 / 2021
  • Recently, in natural language processing, pretrained language models have been applied to various downstream tasks and have improved their performance. However, language models pretrained on general corpora do not perform well on tasks in specialized fields such as heavy industry. To solve this problem, this paper proposes HeavyRoBERTa, a RoBERTa-based language model specialized for the heavy-industry domain and pretrained on a heavy-industry corpus, which improves performance on the heavy-industry corpus in terms of perplexity and on a zero-shot synonym extraction task.


Zero-Shot Fact Verification using Language Models Perplexities of Evidence and Claim (증거와 Claim의 LM Perplexity를 이용한 Zero-shot 사실 검증)

  • Park, Eunhwan;Na, Seung-Hoon;Shin, Dongwook;Jeon, Donghyeon;Kang, Inho
    • Annual Conference on Human and Language Technology / 2021.10a / pp.524-527 / 2021
  • Fact verification has recently been studied actively abroad, but for Korean the absence of datasets makes such research very difficult. To alleviate this, there have been attempts to build datasets with automatic generation models, but because of the nature of generative models, inaccurate data are produced, lowering the quality of fact-verification research. To address this problem, we propose a zero-shot fact-verification study for question answering that requires no training, extending recent few-shot fact verification with a manually constructed dataset of 100 examples.
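A hypothetical sketch of the perplexity criterion the title suggests: score a claim by its perplexity under a causal language model when conditioned on the evidence. The model name is a placeholder (a Korean LM would be used in practice), and the exact scoring rule is an assumption, since the abstract does not spell it out.

```python
# Hypothetical zero-shot verification signal: perplexity of the claim tokens
# given the evidence, under an off-the-shelf causal LM (placeholder model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")              # placeholder LM
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def claim_perplexity(evidence: str, claim: str) -> float:
    ev_len = tokenizer(evidence, return_tensors="pt").input_ids.shape[1]
    ids = tokenizer(evidence + " " + claim, return_tensors="pt").input_ids
    labels = ids.clone()
    labels[:, :ev_len] = -100        # ignore evidence tokens; score only the claim
    with torch.no_grad():
        loss = model(ids, labels=labels).loss
    return torch.exp(loss).item()

# A lower perplexity for the claim given the evidence (e.g., compared with the
# claim's unconditional perplexity) would be read as support under this
# assumed criterion.
```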


Wanda Pruning for Lightweighting Korean Language Model (Wanda Pruning에 기반한 한국어 언어 모델 경량화)

  • Jun-Ho Yoon;Daeryong Seo;Donghyeon Jeon;Inho Kang;Seung-Hoon Na
    • Annual Conference on Human and Language Technology / 2023.10a / pp.437-442 / 2023
  • Recently introduced large language models show remarkable performance on various language processing tasks. However, because of their size and complexity, the need for model compression has emerged. Pruning is one such compression strategy: it removes some of a model's weights or connections to reduce its size while optimizing performance. In this paper, we apply the Wanda [1] technique to the Korean language model Polyglot-Ko to perform pruning, and we analyze the pruned model's perplexity, zero-shot performance, and performance after fine-tuning. Experimental results show the highest performance for Wanda-50%, followed by the 4:8 sparsity pattern and the 2:4 sparsity pattern; notably, under some conditions the pruned models even outperformed the original dense model. These results reconfirm the effectiveness and importance of pruning in today's research centered on large language models.
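A minimal sketch of the Wanda importance score, the product of weight magnitude and input activation norm, applied to a single linear layer. The per-output-row comparison groups and the 2:4/4:8 structured patterns used in the paper are omitted for brevity, so this is a simplified illustration rather than the paper's procedure.

```python
# Simplified Wanda-style pruning of one linear layer: importance = |W| * ||X||_2
# per input feature; prune the globally lowest-scoring weights.
import torch

def wanda_prune(weight: torch.Tensor, inputs: torch.Tensor, sparsity: float = 0.5):
    """weight: (out_features, in_features); inputs: (n_samples, in_features)."""
    act_norm = inputs.norm(p=2, dim=0)               # per-input-feature activation norm
    score = weight.abs() * act_norm                  # Wanda importance score
    k = int(weight.numel() * sparsity)
    threshold = score.flatten().kthvalue(k).values   # k-th smallest score
    mask = score > threshold                         # keep only higher-scoring weights
    return weight * mask

W = torch.randn(8, 16)
X = torch.randn(100, 16)   # calibration activations feeding this layer
W_pruned = wanda_prune(W, X, sparsity=0.5)
```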


In-Context Retrieval-Augmented Korean Language Model (In-Context 검색 증강형 한국어 언어 모델)

  • Sung-Min Lee;Joung Lee;Daeryong Seo;Donghyeon Jeon;Inho Kang;Seung-Hoon Na
    • Annual Conference on Human and Language Technology / 2023.10a / pp.443-447 / 2023
  • A retrieval-augmented language model retrieves documents related to the input and integrates them into the text generation process, strengthening the model's generation ability. In this paper, we strengthen the generation ability of Korean language models with in-context retrieval augmentation, without any additional training of the pretrained large language models, and show that performance improves over the baseline language models. In particular, by reporting retrieval-augmented results for pretrained language models of various sizes, we confirm that perplexity improves substantially at every model scale. We also demonstrate the performance of the in-context retrieval-augmented language model on an open-domain question answering task, with improvements of 19 points in EM and 27.8 points in F1.
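A hypothetical sketch of in-context retrieval augmentation: retrieved passages are simply prepended to the input before generation, with no additional training. The toy lexical retriever and the placeholder GPT-2 model below are assumptions, not the paper's components.

```python
# In-context retrieval augmentation: prepend retrieved passages to the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")      # placeholder for a Korean LM
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Toy lexical-overlap retriever standing in for a real BM25/dense retriever.
    scores = [len(set(query.split()) & set(doc.split())) for doc in corpus]
    ranked = sorted(zip(scores, corpus), key=lambda p: p[0], reverse=True)
    return [doc for _, doc in ranked[:k]]

def rag_generate(query: str, corpus: list[str]) -> str:
    # No extra training: the retrieved context is consumed purely in-context.
    prompt = "\n".join(retrieve(query, corpus)) + "\n" + query
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=40, do_sample=False)
    return tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True)
```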


The Analysis of Changes in East Coast Tourism using Topic Modeling (토픽 모델링을 활용한 동해안 관광의 변화 분석)

  • Jeong, Eun-Hee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.6 / pp.489-495 / 2020
  • The amount of data is increasing through various IT devices in a hyper-connected society where the fourth industrial revolution is progressing, and new value can be created by analyzing that data. This paper collected a total of 1,526 articles published from 2017 to 2019 by national papers, economic papers, regional media outlets, and major broadcasting companies with the keyword "(East Coast Tourism or East Coast Travel) and Gangwon-do" through Bigkinds. Topic modeling using the LDA algorithm implemented in the R language was performed to analyze the 1,526 collected articles. Keywords were extracted for each year from 2017 to 2019, and high-frequency keywords were classified and compared by year. The optimal number of topics was set to 8 using log likelihood and perplexity, and the eight topics were then inferred using the Gibbs sampling method. The inferred topics were Gangneung and beaches, Goseong and Mt. Geumgang, KTX and the Donghae-Bukbu line, weekend sea tours, Sokcho and the Unification Observatory, Yangyang and surfing, experience tours, and transportation network infrastructure. Changes in articles on East Coast tourism were analyzed using the proportions of the eight inferred topics. As a result, compared to 2017, in 2018 the proportions of the Unification Observatory and Mt. Geumgang topics showed no significant change, the proportions of the KTX and experience tour topics increased, and the proportions of the other topics decreased. In 2019, the proportions of the KTX and experience tour topics decreased, while the proportions of the other topics showed no significant change.