• Title/Summary/Keyword: OOV 단어 (OOV words)

Search results: 13

Sentiment Analysis using Robust Parallel Tri-LSTM Sentence Embedding in Out-of-Vocabulary Word (Out-of-Vocabulary 단어에 강건한 병렬 Tri-LSTM 문장 임베딩을 이용한 감정분석)

  • Lee, Hyun Young; Kang, Seung Shik
    • Smart Media Journal / v.10 no.1 / pp.16-24 / 2021
  • Existing word embedding methods such as word2vec represent only the words that occur in the raw training corpus as fixed-length vectors in a continuous vector space, so in a morphologically rich language the out-of-vocabulary (OOV) problem frequently arises for words that were not in the training corpus. Even for sentence embedding, where the meaning of a sentence is represented as a fixed-length vector by composing the word vectors that constitute it, OOV words make it difficult to represent the sentence meaningfully. In particular, since Korean is an agglutinative language whose words combine lexical and grammatical morphemes, handling OOV words is an important factor in improving performance. In this paper, we propose a parallel Tri-LSTM sentence embedding that is robust to the OOV problem by extending the use of word-level morphological information to the sentence level. In a sentiment analysis task on a Korean corpus, we empirically found that the character is a better embedding unit than the morpheme for Korean sentence embedding. We achieved 86.17% accuracy on the sentiment analysis task with the parallel bidirectional Tri-LSTM sentence encoder.
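
The abstract does not spell out the network, but the general shape of a "parallel" encoder over several sub-word views can be sketched. The following is a minimal PyTorch sketch, not the authors' configuration: the three views (assumed here to be jamo, syllable, and morpheme sequences), the layer sizes, and the concatenation-based fusion are all illustrative assumptions.

    import torch
    import torch.nn as nn

    class ParallelTriLSTMEncoder(nn.Module):
        """Encode one sentence with three bidirectional LSTMs running in
        parallel over three views of it (e.g. jamo, syllable, morpheme
        sequences -- an assumption); the final hidden states are
        concatenated into a single fixed-length sentence vector."""
        def __init__(self, vocab_sizes, emb_dim=64, hidden=128):
            super().__init__()
            self.embs = nn.ModuleList([nn.Embedding(v, emb_dim) for v in vocab_sizes])
            self.lstms = nn.ModuleList([
                nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
                for _ in vocab_sizes])

        def forward(self, views):  # views: one (batch, seq_len) LongTensor per view
            states = []
            for x, emb, lstm in zip(views, self.embs, self.lstms):
                _, (h, _) = lstm(emb(x))                     # h: (2, batch, hidden)
                states.append(torch.cat([h[0], h[1]], dim=-1))
            return torch.cat(states, dim=-1)                 # (batch, 3 * 2 * hidden)

    # toy usage: three integer-encoded views of the same sentence
    enc = ParallelTriLSTMEncoder(vocab_sizes=[60, 2000, 5000])
    views = [torch.randint(0, v, (1, 12)) for v in (60, 2000, 5000)]
    print(enc(views).shape)  # torch.Size([1, 768])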

Performance Comparison of Out-Of-Vocabulary Word Rejection Algorithms in Variable Vocabulary Word Recognition (가변어휘 단어 인식에서의 미등록어 거절 알고리즘 성능 비교)

  • 김기태; 문광식; 김회린; 이영직; 정재호
    • The Journal of the Acoustical Society of Korea / v.20 no.2 / pp.27-34 / 2001
  • Utterance verification is used in variable vocabulary word recognition to reject words that do not belong to the vocabulary or that are not correctly recognized, and it is an important technology for designing user-friendly speech recognition systems. We propose a new utterance verification algorithm for a training-free utterance verification system based on the minimum verification error. First, using a PBW (Phonetically Balanced Words) database of 445 words, we create training-free anti-phoneme models that include many PLUs (Phoneme-Like Units), so that the anti-phoneme models attain the minimum verification error. Then, for OOV (Out-Of-Vocabulary) rejection, a phoneme-based confidence measure that uses the likelihood ratio between the phoneme model (null hypothesis) and the anti-phoneme model (alternative hypothesis) is normalized by the null hypothesis, which makes it more robust for OOV rejection. A word-based confidence measure built on the phoneme-based measure provides improved detection of near-misses in speech recognition as well as better discrimination between in-vocabulary and OOV words. Using the proposed anti-models and confidence measures, we achieve significant performance improvements: CA (Correct Acceptance of in-vocabulary words) is about 89% and CR (Correct Rejection of OOV words) is about 90%, an improvement of about 15-21% in ERR (Error Reduction Rate).
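
The decision rule described here, a normalized likelihood-ratio test between the phoneme model and an anti-phoneme model, aggregated to word level, can be sketched in pure Python. The anti-model construction from the PBW database is not reproduced; the averaging combination and the threshold below are assumptions for illustration.

    def phoneme_confidence(ll_null, ll_anti):
        """Log-likelihood ratio between the phoneme model (null hypothesis)
        and the anti-phoneme model (alternative hypothesis), normalized by
        the null-hypothesis score as the abstract describes."""
        return (ll_null - ll_anti) / abs(ll_null)

    def word_confidence(phoneme_scores):
        """Word-level confidence as the mean of phoneme-level scores
        (the paper's exact combination may differ)."""
        return sum(phoneme_scores) / len(phoneme_scores)

    def verify(phoneme_scores, threshold=0.0):
        """Accept the hypothesis if the word confidence exceeds the
        threshold, otherwise reject it as OOV or a misrecognition."""
        return word_confidence(phoneme_scores) > threshold

    # toy example: per-phoneme (null, anti) log-likelihoods of one hypothesis
    pairs = [(-40.0, -55.0), (-38.0, -39.0), (-50.0, -70.0)]
    scores = [phoneme_confidence(n, a) for n, a in pairs]
    print(verify(scores))  # True -> accept as in-vocabulary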

A Study on Utterance Verification Using Accumulation of Negative Log-likelihood Ratio (음의 유사도 비율 누적 방법을 이용한 발화검증 연구)

  • 한명희; 이호준; 김순협
    • The Journal of the Acoustical Society of Korea / v.22 no.3 / pp.194-201 / 2003
  • In speech recognition, confidence measurement decides whether a recognition result should be accepted or rejected. Confidence is measured by integrating frame-level scores into phone- and word-level scores. For word recognition, confidence measurement verifies both recognition errors and Out-Of-Vocabulary (OOV) inputs, so this post-processing can improve recognizer performance by rejecting such outputs rather than accepting them as results. In this paper, we measure confidence with a modified version of the conventional log-likelihood ratio (LLR) confidence measure: when integrating frame-level scores up to the phone level, only the frames whose log-likelihood ratio is negative are accumulated. Compared with the previous method on the outputs of a word recognizer, the FAR (False Acceptance Ratio) decreases by about 3.49% for OOV inputs and 15.25% for recognition errors at a CAR (Correct Acceptance Ratio) of about 90%.
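
The accumulation rule is simple enough to state directly in code: only negative frame-level LLRs contribute to the phone score. A small sketch follows; the phone-to-word pooling and the threshold value are illustrative assumptions, not the paper's settings.

    def phone_confidence_neg_llr(frame_llrs):
        """Accumulate only the negative frame-level log-likelihood ratios
        into the phone-level score; frames with positive LLR contribute
        nothing, as the abstract describes."""
        return sum(llr for llr in frame_llrs if llr < 0.0)

    def accept_word(phone_scores, threshold=-5.0):
        """Assumed decision rule: accept the word when its averaged
        phone-level score stays above a rejection threshold."""
        return sum(phone_scores) / len(phone_scores) > threshold

    frames = [0.8, -0.3, 1.2, -2.1, 0.1]
    print(phone_confidence_neg_llr(frames))  # ~ -2.4 (only negative frames counted)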

A Out-of-vocabulary Processing Technology for the Spoken Language Understanding Module of a Dialogue Based Private Secretary Software (대화형 개인 비서 시스템의 언어 인식 모듈(SLU)을 위한 미등록어(OOV) 처리 기술)

  • Lee, ChangSu; Ko, YoungJoong
    • Annual Conference on Human and Language Technology / 2014.10a / pp.3-8 / 2014
  • A conversational personal assistant system analyzes the speech recognition result of a user's utterance, determines what information should be provided, and then launches the app that contains that information to give the user what they asked for. One of the most important modules of such a system is the Spoken Language Understanding (SLU) module, which performs the semantic analysis of an utterance. This paper proposes an out-of-vocabulary (OOV) processing module that corrects misrecognized nouns and named-entity words in the speech recognition result, so that semantic analysis does not fail because of recognition errors. The proposed module consists of an OOV detection module, which identifies OOV words in the user's utterance, and an OOV conversion module, which replaces each OOV word with a similar word that exists in the dictionary. In our experiments, up to 52.5% of all OOV words were corrected properly; while the raw speech recognition results matched the original sentences at a sentence-level rate of 67.6%, applying the OOV processing module raised the sentence-level match rate by 17.4 percentage points, to a maximum of 85%.
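
The two-stage design (detect OOV words, then convert them to the closest in-dictionary words) can be sketched in a few lines. The paper's actual similarity measure is not given here; this sketch assumes simple string similarity via difflib, and the app-domain lexicon is a hypothetical stand-in.

    import difflib

    DICTIONARY = {"날씨", "알람", "일정", "음악", "뉴스"}  # hypothetical app-domain lexicon

    def detect_oov(words, dictionary=DICTIONARY):
        """Stage 1: flag recognized words that are not in the dictionary."""
        return [w for w in words if w not in dictionary]

    def convert_oov(word, dictionary=DICTIONARY, cutoff=0.5):
        """Stage 2: replace an OOV word with the most similar in-dictionary
        word, or keep it unchanged when nothing is close enough."""
        match = difflib.get_close_matches(word, dictionary, n=1, cutoff=cutoff)
        return match[0] if match else word

    recognized = ["내일", "날씨", "알림"]   # "알림" assumed misrecognized for "알람"
    for w in detect_oov(recognized):
        print(w, "->", convert_oov(w))      # "내일" has no close match and is kept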

Research on Subword Tokenization of Korean Neural Machine Translation and Proposal for Tokenization Method to Separate Jongsung from Syllables (한국어 인공신경망 기계번역의 서브 워드 분절 연구 및 음절 기반 종성 분리 토큰화 제안)

  • Eo, Sugyeong; Park, Chanjun; Moon, Hyeonseok; Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.3 / pp.1-7 / 2021
  • Since Neural Machine Translation (NMT) uses only a limited number of words, words that are not registered in the vocabulary may appear in the input. The standard way to alleviate this Out-of-Vocabulary (OOV) problem is subword tokenization, a methodology that builds words by dividing sentences into subword units smaller than words. In this paper, we examine general subword tokenization algorithms. Furthermore, in order to create a vocabulary that can handle the virtually unlimited conjugation of Korean adjectives and verbs, we propose a new subword tokenization training methodology that separates the Jongsung (coda) from each Korean syllable (which consists of Chosung-onset, Jungsung-nucleus, and Jongsung-coda). Experimental results show that the proposed methodology outperforms the existing subword tokenization methodologies.
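
Because precomposed Hangul syllables are laid out arithmetically in Unicode, the coda can be split off with integer arithmetic before subword training. A minimal sketch follows; the exact output token format (e.g., whether a boundary marker is attached to the coda) is an assumption, not the paper's specification.

    # Compatibility jamo for the 27 non-empty jongsung (coda) classes.
    JONGSUNG = [None, "ㄱ", "ㄲ", "ㄳ", "ㄴ", "ㄵ", "ㄶ", "ㄷ", "ㄹ", "ㄺ",
                "ㄻ", "ㄼ", "ㄽ", "ㄾ", "ㅀ", "ㅀ", "ㅁ", "ㅂ", "ㅄ", "ㅅ",
                "ㅆ", "ㅇ", "ㅈ", "ㅊ", "ㅋ", "ㅌ", "ㅍ", "ㅎ"]
    JONGSUNG[13:16] = ["ㄾ", "ㄿ", "ㅀ"]  # keep the canonical coda order explicit

    def separate_jongsung(text):
        """Rewrite each Hangul syllable as (onset+nucleus syllable, coda jamo).
        Precomposed syllables occupy U+AC00..U+D7A3 in blocks of 28 codas."""
        out = []
        for ch in text:
            code = ord(ch) - 0xAC00
            if 0 <= code <= 11171 and code % 28:            # syllable with a coda
                out.append(chr(0xAC00 + code - code % 28))  # same onset+nucleus, no coda
                out.append(JONGSUNG[code % 28])
            else:
                out.append(ch)
        return "".join(out)

    print(separate_jongsung("갔다"))  # '가ㅆ다' -- '갔' becomes '가' + 'ㅆ'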

Conditional Random Fields based Named Entity Recognition Using Korean Lexical Semantic Network (한국어 어휘의미망을 활용한 Conditional Random Fields 기반 한국어 개체명 인식)

  • Park, Seo-Yeon; Ock, Cheol-Young; Shin, Joon-Choul
    • Annual Conference on Human and Language Technology / 2020.10a / pp.343-346 / 2020
  • Named entity recognition (NER) is the task of classifying words with unique meanings, which frequently appear as OOV (Out-of-Vocabulary) words in a given sentence, into predefined entity categories. Recently, many studies have used external resources to deal with named entities that appear as OOV words. Building on work that improved semantic role labeling and dependency parsing by adding features derived from the Korean lexical semantic network, this paper applies such features to Korean NER and evaluates them. Experimental results show that the model trained with the additional lexical-semantic-network features improves on the baseline model by 1.83 percentage points on average, and reaches a high score of 87.25% even though only a single CRF model is used.
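
In a CRF tagger, external lexical-semantic information typically enters as extra per-token features. The sketch below shows one such feature function; the semantic-network lookup is a hypothetical dictionary standing in for the actual Korean lexical semantic network, and the feature set is illustrative, not the paper's.

    # Hypothetical stand-in for the Korean lexical semantic network:
    # maps a word to a coarse semantic class (e.g. a hypernym category).
    SEMANTIC_NET = {"서울": "지명", "삼성": "기관명", "이순신": "인명"}

    def token_features(sent, i, semnet=SEMANTIC_NET):
        """Per-token features for a CRF tagger: standard lexical features
        plus a semantic-class feature taken from the lexical network."""
        word = sent[i]
        return {
            "word": word,
            "is_first": i == 0,
            "prev_word": sent[i - 1] if i > 0 else "<BOS>",
            "next_word": sent[i + 1] if i < len(sent) - 1 else "<EOS>",
            "sem_class": semnet.get(word, "NONE"),  # the added external feature
        }

    sent = ["이순신", "은", "서울", "에", "있다"]
    print([token_features(sent, i)["sem_class"] for i in range(len(sent))])
    # ['인명', 'NONE', '지명', 'NONE', 'NONE']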

Automatic Text Summarization based on Selective Copy mechanism against for Addressing OOV (미등록 어휘에 대한 선택적 복사를 적용한 문서 자동요약)

  • Lee, Tae-Seok; Seon, Choong-Nyoung; Jung, Youngim; Kang, Seung-Shik
    • Smart Media Journal / v.8 no.2 / pp.58-65 / 2019
  • Automatic text summarization is the process of shortening a text document by either extraction or abstraction. Recent work applies abstraction approaches inspired by deep learning methods that scale to large document collections. Abstractive text summarization relies on pre-generated word embedding information, but low-frequency yet salient words such as terminology are seldom included in the dictionary; this is the so-called out-of-vocabulary (OOV) problem. OOV words degrade the performance of the neural encoder-decoder model. To address OOV words in abstractive text summarization, we propose a copy mechanism that copies new words from the source document while generating summary sentences. Unlike previous studies, the proposed approach combines accurate pointing information with a selective copy mechanism based on a bidirectional RNN and bidirectional LSTM. In addition, a neural gate model estimates the generation probability, and a loss function optimizing the entire abstraction model is applied. The dataset was constructed from a collection of journal article abstracts and titles. Experimental results demonstrate that both ROUGE-1 (based on word recall) and ROUGE-L (based on the longest common subsequence) of the proposed encoder-decoder model improve, to 47.01 and 29.55 respectively.
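
The generation-probability gate that blends vocabulary generation with copying is the heart of any copy mechanism of this kind, and it can be illustrated numerically. The numpy sketch below is a generic pointer-generator-style gate; the dimensions, gate inputs, and weights are assumptions, not this paper's configuration.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def mixed_distribution(vocab_dist, copy_dist, context, state, w, b=0.0):
        """Blend the decoder's vocabulary distribution with a copy
        (attention) distribution over source words via a scalar gate p_gen."""
        p_gen = sigmoid(np.dot(w, np.concatenate([context, state])) + b)
        return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist, p_gen

    vocab_dist = np.array([0.7, 0.2, 0.1, 0.0])  # last slot: an OOV placeholder
    copy_dist  = np.array([0.0, 0.1, 0.0, 0.9])  # attention points at the OOV word
    context, state = np.ones(3), np.ones(2)
    final, p_gen = mixed_distribution(vocab_dist, copy_dist, context, state,
                                      w=-0.4 * np.ones(5))
    print(round(float(p_gen), 3), final)  # small p_gen -> the OOV word can be copied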

Bayesian Fusion of Confidence Measures for Confidence Scoring (베이시안 신뢰도 융합을 이용한 신뢰도 측정)

  • 김태윤; 고한석
    • The Journal of the Acoustical Society of Korea / v.23 no.5 / pp.410-419 / 2004
  • In this paper, we propose a method for fusing confidence measures in a Bayesian framework for speech recognition. Centralized and distributed schemes are considered. Centralized fusion is feature-level fusion, which combines the values of the individual confidence scores and makes a single final decision. In contrast, distributed fusion is decision-level fusion, which combines the individual accept/reject decisions made by each confidence measuring method. Optimal Bayesian fusion rules are presented for both the centralized and the distributed case. In isolated-word Out-of-Vocabulary (OOV) rejection experiments, centralized Bayesian fusion achieves more than a 13% relative equal error rate (EER) reduction compared with the individual confidence measures, whereas distributed Bayesian fusion shows no significant performance increase.
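
The contrast between the two schemes can be made concrete: under a conditional-independence assumption, feature-level Bayesian fusion reduces to summing per-measure log-likelihood ratios before thresholding, while decision-level fusion combines only the individual accept/reject votes. The sketch below is a simplified illustration, not the paper's exact rules.

    def centralized_fusion(llrs, threshold=0.0):
        """Feature-level fusion: sum the per-measure log-likelihood ratios
        (optimal under conditional independence) and threshold the sum."""
        return sum(llrs) > threshold

    def distributed_fusion(decisions):
        """Decision-level fusion: each measure votes accept/reject and the
        majority wins -- one simple instance of a distributed rule."""
        return sum(decisions) > len(decisions) / 2

    llrs = [1.2, -0.4, 0.9]                           # three measures, one utterance
    print(centralized_fusion(llrs))                   # True: fused evidence accepts
    print(distributed_fusion([l > 0 for l in llrs]))  # True: 2 of 3 votes accept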

An Implementation of Rejection Capabilities in the Isolated Word Recognition System (고립단어 인식 시스템에서의 거절기능 구현)

  • Kim, Dong-Hwa; Kim, Hyung-Soon; Kim, Young-Ho
    • The Journal of the Acoustical Society of Korea / v.16 no.6 / pp.106-109 / 1997
  • A practical isolated word recognition system requires the ability to reject out-of-vocabulary (OOV) input. In this paper, we present a rejection method that uses clustered phoneme modeling combined with post-processing by likelihood ratio scoring. Our baseline speech recognition system is based on whole-word continuous HMMs, and six clustered phoneme models were generated by a statistical method from 45 context-independent phoneme models trained on a phonetically balanced speech database. A test of rejection performance on a speaker-independent isolated word recognition task over 22 section names shows that our method is superior to the conventional post-processing method, which rejects according to the likelihood difference between the first and second candidates. Furthermore, the clustered phoneme models do not require retraining for other isolated word recognition systems with different vocabulary sets.
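
The two post-processing rules being compared, the conventional first-versus-second-candidate margin and likelihood-ratio scoring against a clustered-phoneme (filler) model, can be sketched with toy log-likelihoods; the thresholds below are illustrative assumptions.

    def reject_by_margin(best_ll, second_ll, margin=2.0):
        """Conventional rule: reject when the top two word candidates score
        too similarly (a small likelihood difference suggests OOV)."""
        return (best_ll - second_ll) < margin

    def reject_by_filler(best_ll, filler_ll, threshold=0.0):
        """Clustered-phoneme rule: reject when the best word model does not
        beat the filler (clustered phoneme) model by enough."""
        return (best_ll - filler_ll) < threshold

    # toy case: an OOV utterance where two word models score nearly alike
    best, second, filler = -100.0, -101.0, -99.0
    print(reject_by_margin(best, second))   # True: candidates too close
    print(reject_by_filler(best, filler))   # True: filler model fits better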

Efficient Subword Segmentation for Korean Language Classification (한국어 분류를 위한 효율적인 서브 워드 분절)

  • Hyunjin Seo; Jeongjae Nam; Minseok Kim
    • Annual Conference on Human and Language Technology / 2022.10a / pp.535-540 / 2022
  • The Out-of-Vocabulary (OOV) problem has frequently been raised in Neural Machine Translation (NMT). Byte Pair Encoding (BPE) [1], which compresses words efficiently, has been the standard way to address it. However, because BPE tokenizes deterministically based on frequency, it is hard for it to acquire a segmentation ability that generalizes across diverse sentences. To overcome this, subword regularization methods were recently proposed: they consider the many possible segmentations of the same word and have shown strong performance in numerous experiments. However, for classification tasks, and in particular for Korean classification, no application of subword regularization has been reported so far. In this paper, we therefore demonstrate the effectiveness of subword regularization on Korean classification problems using its two representative methods, unigram-based subword regularization [2] and BPE-Dropout [3]. Not only in NMT but also in classification, capturing the compositionality and meaning of words contributes meaningfully to deciding the class of a sentence, and subword regularization builds a broad awareness of the components of Korean sentences. In our Korean classification experiments, the method achieved up to 4.7% higher performance than standard BPE.
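
BPE-Dropout's core change to standard BPE is that, at encoding time, each applicable merge is skipped with some probability p, so the same word can segment differently across training epochs. A minimal sketch follows; the merge table and p are illustrative, and dropped merges may be retried on later rounds in this simplified version.

    import random

    def bpe_dropout_encode(word, merges, p=0.1, rng=random):
        """Greedy BPE encoding where every applicable merge is dropped with
        probability p (p=0 reproduces deterministic BPE)."""
        symbols = list(word)
        while True:
            # collect adjacent pairs that are mergeable and survive dropout
            pairs = {(symbols[i], symbols[i + 1]): i
                     for i in range(len(symbols) - 1)
                     if (symbols[i], symbols[i + 1]) in merges and rng.random() >= p}
            if not pairs:
                return symbols
            pair = min(pairs, key=merges.get)  # apply the highest-priority merge
            i = pairs[pair]
            symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]

    merges = {("l", "o"): 0, ("lo", "w"): 1, ("w", "e"): 2}  # rank = priority
    random.seed(0)
    for _ in range(3):
        print(bpe_dropout_encode("lower", merges, p=0.3))  # segmentations may vary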
