• Title/Summary/Keyword: perplexity


Deletion-Based Sentence Compression Using Sentence Scoring Reflecting Linguistic Information (언어 정보가 반영된 문장 점수를 활용하는 삭제 기반 문장 압축)

  • Lee, Jun-Beom;Kim, So-Eon;Park, Seong-Bae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.3
    • /
    • pp.125-132
    • /
    • 2022
  • Sentence compression is a natural language processing task that generates a concise sentence preserving the important meaning of the original sentence. For grammatically appropriate sentence compression, early studies relied on human-defined linguistic rules. Later, because sequence-to-sequence models perform well on various natural language processing tasks such as machine translation, studies applied them to sentence compression as well. However, the rule-based studies require every rule to be defined by hand, and the sequence-to-sequence approaches require a large amount of parallel data for model training. To address these challenges, Deleter, a sentence compression model that leverages the pre-trained language model BERT, was proposed. Because Deleter compresses sentences using a perplexity-based score computed with BERT, it requires neither linguistic rules nor a parallel dataset. However, because Deleter considers only perplexity, it does not reflect the linguistic information of the words in the sentence. Furthermore, since the corpora used to pre-train BERT are far from compressed sentences, this can lead to incorrect compressions. To address these problems, this paper proposes a method that quantifies the importance of linguistic information and reflects it in perplexity-based sentence scoring. In addition, by fine-tuning BERT on a corpus of news articles, which frequently contain proper nouns and omit unnecessary modifiers, we allow BERT to measure a perplexity better suited to sentence compression. Evaluations on English and Korean datasets confirm that the proposed method improves the compression performance of sentence-scoring-based models.
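
As a rough sketch of the perplexity-driven deletion loop this abstract describes, the snippet below greedily removes whichever word leaves the most fluent (lowest-perplexity) remainder. It uses a causal LM (GPT-2 via Hugging Face transformers) instead of BERT's pseudo-perplexity purely for brevity, so it is an approximation of the idea, not the Deleter implementation.

```python
# Minimal sketch of deletion-based compression driven by language-model
# perplexity, in the spirit of Deleter; GPT-2 stands in for BERT scoring.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(words):
    ids = tokenizer(" ".join(words), return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean token negative log-likelihood
    return math.exp(loss.item())

def compress(sentence, target_len):
    words = sentence.split()
    while len(words) > target_len:
        # try deleting each remaining word; keep the deletion that leaves
        # the most fluent (lowest-perplexity) sentence
        candidates = [words[:i] + words[i + 1:] for i in range(len(words))]
        words = min(candidates, key=perplexity)
    return " ".join(words)

print(compress("the quick brown fox quickly jumps over the very lazy dog", 6))
```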

A Language Model based on VCCV for Sentence Speech Recognition (문장 음성 인식을 위한 VCCV기반의 언어 모델)

  • 박선희;홍광석
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.2419-2422
    • /
    • 2003
  • To improve the performance of sentence speech recognition systems, we need to consider the perplexity of the language model and the size of the dictionary as the vocabulary grows. In this paper, we propose a language model based on VCCV units for sentence speech recognition. We choose VCCV units as the processing unit of the language model and compare them with clauses and morphemes. Clause and morpheme units have a large vocabulary and high perplexity, whereas VCCV units have a small lexicon and a limited vocabulary, so their perplexity is low. We built a bigram language model over the given text and calculated the perplexity of each processing unit. The perplexity of VCCV units is lower than that of morpheme and clause units.

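A minimal, self-contained sketch of the kind of comparison reported above: bigram perplexity of the same text under two different processing units (whole words vs. short character chunks as a crude stand-in for VCCV segments). The toy corpus and the chunking rule are illustrative assumptions, not the paper's data or segmentation.

```python
# Toy bigram perplexity with add-one smoothing, used to compare processing units.
import math
from collections import Counter

def bigram_perplexity(train_tokens, test_tokens):
    unigram = Counter(train_tokens)
    bigram = Counter(zip(train_tokens, train_tokens[1:]))
    vocab_size = len(unigram)
    pairs = list(zip(test_tokens, test_tokens[1:]))
    log_prob = 0.0
    for w1, w2 in pairs:
        # add-one smoothed P(w2 | w1)
        log_prob += math.log((bigram[(w1, w2)] + 1) / (unigram[w1] + vocab_size))
    return math.exp(-log_prob / len(pairs))

def chunk(text, size=4):
    # crude fixed-size character chunks, standing in for VCCV-style units
    s = text.replace(" ", "")
    return [s[i:i + size] for i in range(0, len(s), size)]

train = "the cat sat on the mat the cat ate"
test = "the cat sat on the mat"
print("word units :", bigram_perplexity(train.split(), test.split()))
print("chunk units:", bigram_perplexity(chunk(train), chunk(test)))
```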

Efficient Language Model based on VCCV unit for Sentence Speech Recognition (문장음성인식을 위한 VCCV 기반의 효율적인 언어모델)

  • Park, Seon-Hui;No, Yong-Wan;Hong, Gwang-Seok
    • Proceedings of the KIEE Conference
    • /
    • 2003.11c
    • /
    • pp.836-839
    • /
    • 2003
  • In this paper, we implement a bigram language model and evaluate appropriate smoothing techniques for a processing unit with low perplexity. Word, morpheme, and clause units are widely used as processing units for language models. We propose VCCV units, which have a smaller vocabulary than morpheme and clause units, and compare them with clause and morpheme units using perplexity. The most common metrics for evaluating a language model are the probability the model assigns to test data and derived measures such as perplexity. Smoothing is used to estimate probabilities when there is insufficient data to estimate them accurately. We constructed n-grams over the low-perplexity VCCV units and tested the language model with Katz, Witten-Bell, absolute, and modified Kneser-Ney smoothing, among others. The experimental results show that modified Kneser-Ney smoothing is the appropriate smoothing technique for VCCV units.

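A toy sketch of this kind of smoothing comparison using NLTK's language-model package; the made-up syllable-like units and tiny corpus are stand-ins, not the paper's VCCV segmentation or data.

```python
# Compare bigram perplexity under different smoothing methods (NLTK).
from nltk.lm import Laplace, WittenBellInterpolated, KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline, pad_both_ends
from nltk.util import bigrams

train_sents = [["na", "neun", "hak", "gyo", "e"],
               ["hak", "gyo", "e", "gan", "da"]]
test_sent = ["na", "neun", "hak", "gyo", "e", "gan", "da"]

for name, cls in [("Laplace", Laplace),
                  ("Witten-Bell", WittenBellInterpolated),
                  ("Kneser-Ney", KneserNeyInterpolated)]:
    # the pipeline generators are single-use, so rebuild them for each model
    train_ngrams, vocab = padded_everygram_pipeline(2, train_sents)
    lm = cls(2)
    lm.fit(train_ngrams, vocab)
    test_bigrams = list(bigrams(pad_both_ends(test_sent, n=2)))
    print(f"{name:12s} perplexity: {lm.perplexity(test_bigrams):.2f}")
```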

Language Model based on VCCV and Test of Smoothing Techniques for Sentence Speech Recognition (문장음성인식을 위한 VCCV 기반의 언어모델과 Smoothing 기법 평가)

  • Park, Seon-Hee;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.11B no.2
    • /
    • pp.241-246
    • /
    • 2004
  • In this paper, we propose VCCV units as the processing unit of a language model and compare them with the existing clause and morpheme units. Clause and morpheme units have a large vocabulary and high perplexity, whereas VCCV units have low perplexity because of their small lexicon and limited vocabulary. Constructing a language model also raises the issue of smoothing: smoothing is used to better estimate probabilities when there is insufficient data to estimate them accurately. We built language models over morpheme, clause, and VCCV units and calculated their perplexity. The perplexity of VCCV units is lower than that of morpheme and clause units. We then constructed n-grams over the low-perplexity VCCV units and tested the language model with Katz, absolute, and modified Kneser-Ney smoothing, among others. The experimental results show that modified Kneser-Ney smoothing is the appropriate smoothing technique for VCCV units.
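
For reference, the perplexity compared across these unit choices is the standard quantity below, written here for a bigram model; the notation is generic, not specific to this paper.

```latex
% Perplexity of a test sequence w_1 ... w_N under a bigram model
\mathrm{PPL}(w_1,\dots,w_N)
  = \exp\!\Bigl(-\frac{1}{N}\sum_{i=1}^{N} \log P(w_i \mid w_{i-1})\Bigr)
```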

Zero-shot Lexical Semantics based on Perplexity of Pretrained Language Models (사전학습 언어모델의 Perplexity에 기반한 Zero-shot 어휘 의미 모델)

  • Choi, Heyong-Jun;Na, Seung-Hoon
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.473-475
    • /
    • 2021
  • To implement synonym recommendation, it is essential to compute the similarity between words. However, existing methods for computing word similarity cannot compute it for words that do not appear in the dataset. To address this, this paper computes word similarity using the perplexity (PPL) of a language model, and synonym recommendation based on this similarity achieves an MRR of 41.31%.

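One plausible way to turn language-model perplexity into a word-similarity signal, loosely in the spirit of this abstract (an assumption-laden sketch, not necessarily the authors' formulation): substitute each candidate into a context sentence containing the target word and rank candidates by how well they fit, i.e. by the perplexity of the resulting sentence.

```python
# Rank candidate substitutes by the perplexity of the sentence with the target
# word replaced; lower perplexity = better contextual fit. GPT-2 is used only
# as a convenient causal LM; the paper's exact model and scoring may differ.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def ppl(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return math.exp(lm(ids, labels=ids).loss.item())

def rank_candidates(sentence, target, candidates):
    scored = [(ppl(sentence.replace(target, c)), c) for c in candidates]
    return sorted(scored)   # lowest perplexity first

print(rank_candidates("the movie was very good", "good",
                      ["excellent", "terrible", "purple"]))
```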

BERT-based Two-Stage Classification Models for Alzheimer's Disease and Schizophrenia Diagnosis (BERT 기반 2단계 분류 모델을 이용한 알츠하이머병 치매와 조현병 진단)

  • Jung, Min-Kyo;Na, Seung-Hoon;Kim, Ko Woon;Shin, Byong-Soo;Chung, Young-Chul
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.558-563
    • /
    • 2021
  • We propose a two-stage classification model for diagnosing Alzheimer's disease dementia and schizophrenia. We combine classification based on the perplexity difference between a pair of language models fitted to the utterances of the control and patient groups with classification based on fine-tuning a single BERT model. The perplexity-based classification performs well for both Alzheimer's disease and schizophrenia, and the performance of the schizophrenia classification model increases slightly. Further performance gains can be expected from applying explainable AI techniques in future work.

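A highly simplified sketch of the perplexity-difference idea: fit one language model per group and classify a new transcript by which model finds it less surprising. Toy bigram models with add-one smoothing stand in for the paired fine-tuned BERT models, and the data below is invented.

```python
# Classify a transcript by the sign of the perplexity difference between a
# "control" language model and a "patient" language model.
import math
from collections import Counter

class BigramLM:
    def __init__(self, sentences):
        tokens = [w for s in sentences for w in s]
        self.uni = Counter(tokens)
        self.bi = Counter(pair for s in sentences for pair in zip(s, s[1:]))
        self.v = len(self.uni)

    def perplexity(self, sent):
        pairs = list(zip(sent, sent[1:]))
        # add-one smoothed bigram probabilities
        logp = sum(math.log((self.bi[p] + 1) / (self.uni[p[0]] + self.v))
                   for p in pairs)
        return math.exp(-logp / len(pairs))

control_lm = BigramLM([["i", "went", "to", "the", "market", "today"]])
patient_lm = BigramLM([["i", "i", "went", "went", "the", "the", "market"]])

def classify(transcript):
    diff = control_lm.perplexity(transcript) - patient_lm.perplexity(transcript)
    return "patient-like" if diff > 0 else "control-like"

print(classify(["i", "went", "to", "the", "market"]))
```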

Falling Accidents Analysis in Construction Sites by Using Topic Modeling (토픽 모델링을 이용한 건설현장 추락재해 분석)

  • Ryu, Hanguk
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.7
    • /
    • pp.175-182
    • /
    • 2019
  • We classify topics in fall accidents occurring at construction sites using topic modeling, a machine learning technique, and analyze the causes of the accidents for each topic. To apply topic modeling based on latent Dirichlet allocation, the text data was preprocessed and the model was evaluated with a perplexity score to improve its reliability. The most common falling accidents happened to daily workers at small construction sites. Most of the causes involved safety equipment that was not used properly: a lack of safety equipment, inadequate arrangement and wearing, and low-performance equipment. To prevent and reduce falling accidents, it is important to educate daily workers at small construction sites, arrange the workplace properly, and check that personal safety equipment and devices are worn.
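
A minimal sketch of evaluating an LDA model with a perplexity-style score, using gensim rather than whatever toolkit the study used; the toy incident descriptions are invented placeholders.

```python
# Fit LDA on toy "incident report" texts and report a perplexity score
# (gensim's log_perplexity returns a per-word bound; perplexity = 2**(-bound)).
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [["ladder", "fall", "no", "harness"],
        ["scaffold", "fall", "missing", "guardrail"],
        ["worker", "slip", "wet", "floor"],
        ["harness", "not", "worn", "roof", "fall"]]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)
bound = lda.log_perplexity(corpus)       # per-word likelihood bound
print("perplexity:", 2 ** (-bound))
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```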

Sentence Compression based on Sentence Scoring Reflecting Linguistic Information (언어 정보를 반영한 문장 점수 측정 기반의 문장 압축)

  • Lee, Jun-Beom;Kim, So-Eon;Park, Seong-Bae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.05a
    • /
    • pp.389-392
    • /
    • 2021
  • Sentence compression is a natural language processing task that generates a short compressed sentence preserving the important meaning of the original sentence. It is actively studied because it can help users quickly obtain the information they need from text, but existing studies either require compression rules defined directly by humans or require large datasets for model training. There is also work that compresses sentences through perplexity-based sentence scoring with a pre-trained language model, so that neither compression rules nor a training dataset is needed; however, because the sentence scoring does not reflect the semantic importance of the words in the sentence, important words can be deleted. This paper proposes a method that quantifies the importance of linguistic information, namely part-of-speech, dependency, and named-entity information, and reflects it in perplexity-based sentence scoring. We also confirm that the proposed scoring method improves the compression performance of sentence-scoring-based compression models, showing that reflecting the linguistic information of the words in a sentence in the sentence score can help generate semantically appropriate compressed sentences.
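
A loose sketch of the scoring idea described above: combine a perplexity-based sentence score with per-word deletion penalties derived from linguistic annotations (part of speech, dependency role, named entities). All weights, names, and the trade-off constant below are invented for illustration; the paper's actual weighting scheme is not reproduced here.

```python
# Combine a perplexity score for the remaining sentence with penalties for
# the words that were deleted. Weight values are illustrative placeholders.
POS_WEIGHT = {"PROPN": 2.0, "NOUN": 1.5, "VERB": 1.2, "ADJ": 0.6, "DET": 0.1}
NER_BONUS = 1.5        # extra penalty for deleting a named-entity token
TRADE_OFF = 1.0        # balances fluency vs. importance (assumed value)

def deletion_penalty(pos_tag, is_entity):
    return POS_WEIGHT.get(pos_tag, 0.5) + (NER_BONUS if is_entity else 0.0)

def candidate_score(remainder_ppl, deleted_tokens):
    """Lower is better: fluent remainder and unimportant deletions.

    deleted_tokens: list of (pos_tag, is_entity) for each deleted word.
    """
    importance = sum(deletion_penalty(p, e) for p, e in deleted_tokens)
    return remainder_ppl + TRADE_OFF * importance

# deleting a determiner and an adjective is cheap; deleting a proper-noun entity is not
print(candidate_score(35.0, [("DET", False), ("ADJ", False)]))
print(candidate_score(33.0, [("PROPN", True)]))
```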

Topic Modeling on Research Trends of Industry 4.0 Using Text Mining (텍스트 마이닝을 이용한 4차 산업 연구 동향 토픽 모델링)

  • Cho, Kyoung Won;Woo, Young Woon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.7
    • /
    • pp.764-770
    • /
    • 2019
  • In this research, text mining techniques were used to analyze papers related to the "4th industry". A total of 685 papers were collected by searching for the keyword "4th industry" in the Korea Journal Index (KCI) from 2016 to 2019. We used a Python-based web scraping program to collect the papers and applied topic modeling based on the LDA algorithm, implemented in the R language, for the analysis. Based on a perplexity analysis of the collected papers, nine topics were determined to be optimal, and nine representative topics were extracted using Gibbs sampling. The results confirm that artificial intelligence, big data, the Internet of Things (IoT), digital, and network technologies have emerged as the major technologies, and that research has examined the changes these technologies bring to various fields related to the 4th industry, such as industry, government, education, and jobs.
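
The topic-count selection step described above (the study used an R implementation of LDA with Gibbs sampling) can be sketched in Python with gensim as below: sweep candidate topic counts and keep the one with the lowest perplexity. The corpus and candidate range are placeholders.

```python
# Choose the number of LDA topics by minimizing perplexity.
# Ideally the score is computed on held-out documents; the training corpus
# is reused here only to keep the sketch short.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [["ai", "deep", "learning"], ["big", "data", "analytics"],
        ["iot", "sensor", "network"], ["ai", "big", "data"],
        ["education", "policy", "job"], ["iot", "network", "security"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

best = None
for k in range(2, 6):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                   passes=20, random_state=0)
    ppl = 2 ** (-lda.log_perplexity(corpus))
    print(f"num_topics={k}  perplexity={ppl:.1f}")
    if best is None or ppl < best[1]:
        best = (k, ppl)
print("selected number of topics:", best[0])
```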

Two-Path Language Modeling Considering Word Order Structure of Korean (한국어의 어순 구조를 고려한 Two-Path 언어모델링)

  • Shin, Joong-Hwi;Park, Jae-Hyun;Lee, Jung-Tae;Rim, Hae-Chang
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.8
    • /
    • pp.435-442
    • /
    • 2008
  • The n-gram model is appropriate for languages such as English, in which word order is grammatically rigid, but it is not suitable for Korean, in which word order is relatively free. Previous work proposed a two-ply HMM that reflected the characteristics of Korean but failed to capture word-order structure among words. In this paper, we define a new segment unit that combines two words in order to reflect the word-order characteristics of adjacent words around verbal morphemes. We then propose a two-path language model that estimates probabilities depending on the context, based on the proposed segment unit. Experimental results show that the proposed two-path language model yields a 25.68% perplexity improvement over previous Korean language models and reduces perplexity by 94.03% for the prediction of verbal morphemes where words are combined.
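
A rough illustration of the segment unit described above: an adjacent word pair is merged into a single token when the second word is a verbal form, and other words stay as single-word units. The membership test below is a made-up placeholder for the paper's actual criterion, and the sketch shows only the unit construction, not the two-path probability estimation.

```python
# Merge an adjacent word pair into one segment token when the second word
# looks like a verbal form; otherwise keep the word as its own unit.
def to_segments(words, verbal):
    segments, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and words[i + 1] in verbal:
            segments.append(words[i] + "+" + words[i + 1])
            i += 2
        else:
            segments.append(words[i])
            i += 1
    return segments

print(to_segments(["나는", "학교에", "간다"], verbal={"간다"}))
# ['나는', '학교에+간다']
```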