• Title/Summary/Keyword: Word language model

Input Dimension Reduction based on Continuous Word Vector for Deep Neural Network Language Model (Deep Neural Network 언어모델을 위한 Continuous Word Vector 기반의 입력 차원 감소)

  • Kim, Kwang-Ho; Lee, Donghyun; Lim, Minkyu; Kim, Ji-Hwan
    • Phonetics and Speech Sciences / v.7 no.4 / pp.3-8 / 2015
  • In this paper, we investigate an input dimension reduction method using continuous word vectors in a deep neural network language model. In the proposed method, continuous word vectors were generated from a large training corpus using Google's Word2Vec, so as to satisfy the distributional hypothesis. The 1-of-|V| coded discrete word vectors were then replaced with their corresponding continuous word vectors. In our implementation, the input dimension was successfully reduced from 20,000 to 600 when a tri-gram language model was used with a vocabulary of 20,000 words. The total training time was reduced from 30 days to 14 days on the Wall Street Journal training corpus (corpus length: 37M words).
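
To make the replacement concrete, here is a minimal sketch of the idea using gensim's Word2Vec. The toy corpus, the 200-dimensional vectors, and the assembly of three word vectors into one 600-dimensional tri-gram input are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch only: replace 1-of-|V| one-hot inputs with continuous Word2Vec
# vectors to shrink the NNLM input layer. Corpus and sizes are assumptions.
import numpy as np
from gensim.models import Word2Vec

# Toy stand-in for the Wall Street Journal training corpus (assumption).
corpus = [["the", "stock", "market", "rose"],
          ["the", "market", "fell", "sharply"]]

# Train continuous word vectors (the paper uses Google's Word2Vec).
w2v = Word2Vec(sentences=corpus, vector_size=200, window=5,
               min_count=1, sg=1, epochs=50)

def trigram_input(w1, w2, w3):
    """Concatenate three 200-dim vectors into one 600-dim input, instead of
    feeding 1-of-|V| one-hot vectors over a 20,000-word vocabulary."""
    return np.concatenate([w2v.wv[w] for w in (w1, w2, w3)])

print(trigram_input("the", "stock", "market").shape)  # (600,)
```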

A Study on Word Sense Disambiguation Using Bidirectional Recurrent Neural Network for Korean Language

  • Min, Jihong; Jeon, Joon-Woo; Song, Kwang-Ho; Kim, Yoo-Sung
    • Journal of the Korea Society of Computer and Information / v.22 no.4 / pp.41-49 / 2017
  • Word sense disambiguation (WSD), which determines the exact meaning of a homonym that can carry different meanings in a single form, is very important for understanding the semantic meaning of a text document. Many recent studies on WSD have widely used the NNLM (Neural Network Language Model), in which a neural network is used to represent a document as vectors and to analyze its semantics. Among previous WSD studies using NNLMs, the RNN (Recurrent Neural Network) model performs better than other models because it can reflect the order of word occurrences in addition to word appearance information in a document. However, since the RNN model uses only the forward order of word occurrences, it cannot reflect the characteristic of natural language that later words can affect the meanings of preceding words. In this paper, we propose a WSD scheme using a bidirectional RNN that reflects both the forward and the backward order of word occurrences in a document. In the experiments, the accuracy of the proposed model is higher than that of the previous RNN-based method. Hence, it is confirmed that bidirectional word-occurrence information is useful for WSD in Korean.
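
A minimal sketch of such a bidirectional RNN sense classifier in PyTorch; the vocabulary size, dimensions, and four-sense inventory are illustrative assumptions, not the paper's configuration.

```python
# Sketch: a bidirectional RNN reads the context in both directions before
# classifying the sense of a target homonym. All sizes are assumptions.
import torch
import torch.nn as nn

class BiRNNWSD(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128, n_senses=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # bidirectional=True lets later words influence earlier positions,
        # the property the paper argues a forward-only RNN lacks.
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_senses)

    def forward(self, tokens, target_pos):
        h, _ = self.rnn(self.emb(tokens))                     # (B, T, 2*hidden)
        target = h[torch.arange(tokens.size(0)), target_pos]  # states at homonym
        return self.out(target)                               # sense logits

model = BiRNNWSD()
logits = model(torch.randint(0, 10000, (1, 12)), torch.tensor([5]))
print(logits.shape)  # (1, 4): one score per candidate sense
```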

A Methodology for Urdu Word Segmentation using Ligature and Word Probabilities

  • Khan, Yunus; Nagar, Chetan; Kaushal, Devendra S.
    • International Journal of Ocean System Engineering / v.2 no.1 / pp.24-31 / 2012
  • This paper introduces a technique for word segmentation for handwritten recognition of Urdu script. Word segmentation, or word tokenization, is a primary step in understanding sentences written in the Urdu language. Several techniques are available for word segmentation in other languages, but little work has been done on word segmentation for Urdu Optical Character Recognition (OCR) systems. The proposed method finds the boundaries of words in a sequence of ligatures using probabilistic formulas, by exploiting knowledge of the collocation of ligatures and words in the corpus. The word identification rate of this technique is 97.10%, with a 66.63% identification rate for unknown words.
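
A minimal sketch of probabilistic boundary finding over a ligature sequence, using a Viterbi-style dynamic program; the tiny probability table stands in for the corpus statistics the paper derives from ligature and word collocations.

```python
# Sketch: find word boundaries in a ligature sequence by maximizing the
# product of word probabilities with dynamic programming.
import math

# Hypothetical word probabilities estimated from a corpus (assumption).
word_prob = {("l1",): 0.02, ("l2", "l3"): 0.05,
             ("l1", "l2"): 0.01, ("l3",): 0.015}

def segment(ligatures, max_word_len=4):
    n = len(ligatures)
    best = [(-math.inf, 0)] * (n + 1)  # (best log-prob, backpointer)
    best[0] = (0.0, 0)
    for j in range(1, n + 1):
        for i in range(max(0, j - max_word_len), j):
            w = tuple(ligatures[i:j])
            if w in word_prob:
                score = best[i][0] + math.log(word_prob[w])
                if score > best[j][0]:
                    best[j] = (score, i)
    words, j = [], n  # backtrack the boundaries
    while j > 0:
        i = best[j][1]
        words.append(ligatures[i:j])
        j = i
    return list(reversed(words))

print(segment(["l1", "l2", "l3"]))  # [['l1'], ['l2', 'l3']]
```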

Retrieval Model Based on Word Translation Probabilities and the Degree of Association of Query Concept (어휘 번역확률과 질의개념연관도를 반영한 검색 모델)

  • Kim, Jun-Gil; Lee, Kyung-Soon
    • The KIPS Transactions: Part B / v.19B no.3 / pp.183-188 / 2012
  • One of the major challenges for retrieval performance is the word mismatch between users' queries and documents in information retrieval. To solve the word mismatch problem, we propose a retrieval model based on the degree of association of the query concept and on word translation probabilities in a translation-based model. The word translation probabilities are calculated from pairs consisting of a sentence and its succeeding sentence. To validate the proposed method, we experimented on the TREC AP test collection. The experimental results show that the proposed model achieved significant improvement over the language model and outperformed the translation-based language model.
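
A minimal sketch of the two ingredients, assuming toy data: translation probabilities estimated from (sentence, succeeding sentence) pairs, and a translation-based score that mixes direct matches with translated probability mass. The mixing weight and smoothing floor are illustrative.

```python
# Sketch: "translation" probabilities from sentence/next-sentence pairs,
# then a translation-based language-model score for a document.
from collections import Counter, defaultdict

sentences = [["stock", "market", "rose"], ["investors", "cheered", "gains"],
             ["market", "fell"], ["investors", "worried"]]

# Co-occurrence counts over (sentence, succeeding sentence) pairs.
pair_counts = defaultdict(Counter)
for s, nxt in zip(sentences, sentences[1:]):
    for w in s:
        for q in nxt:
            pair_counts[w][q] += 1

def p_trans(q, w):
    total = sum(pair_counts[w].values())
    return pair_counts[w][q] / total if total else 0.0

def score(query, doc, alpha=0.8):
    """Mix direct term matching with translated mass (alpha is illustrative)."""
    doc_counts, dlen = Counter(doc), len(doc)
    s = 1.0
    for q in query:
        direct = doc_counts[q] / dlen
        translated = sum(p_trans(q, w) * c / dlen for w, c in doc_counts.items())
        s *= alpha * direct + (1 - alpha) * translated + 1e-9  # tiny floor
    return s

print(score(["investors"], ["stock", "market", "rose"]))
```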

N-gram Adaptation Using Information Retrieval and Dynamic Interpolation Coefficient (정보검색 기법과 동적 보간 계수를 이용한 N-gram 언어모델의 적응)

  • Choi Joon Ki; Oh Yung-Hwan
    • MALSORI / no.56 / pp.207-223 / 2005
  • The goal of language model adaptation is to improve the background language model with a relatively small adaptation corpus. This study presents a language model adaptation technique for the case where no additional text data for adaptation exist. We propose an information retrieval (IR) technique with N-gram language modeling to collect the adaptation corpus from the baseline text data. We also propose a dynamic language model interpolation coefficient to combine the background language model and the adapted language model. The interpolation coefficient is estimated from word hypotheses obtained by segmenting the input speech data reserved as held-out validation data. This allows the final adapted model to consistently improve on the performance of the background model. The proposed approach reduces the word error rate by 13.6% relative to the baseline 4-gram model for two-hour broadcast news speech recognition.
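
A minimal sketch of choosing the interpolation coefficient on held-out word hypotheses; the unigram tables stand in for full N-gram background and adapted models, and grid search is one simple way to estimate the coefficient.

```python
# Sketch: pick the interpolation coefficient that minimizes held-out
# perplexity of the combined model. Tables and data are toy assumptions.
import math

background = {"news": 0.010, "market": 0.008, "goal": 0.002}
adapted    = {"news": 0.030, "market": 0.001, "goal": 0.001}
held_out   = ["news", "news", "market"]  # hypotheses from reserved speech

def perplexity(lam, text):
    logp = sum(math.log(lam * adapted.get(w, 1e-6)
                        + (1 - lam) * background.get(w, 1e-6)) for w in text)
    return math.exp(-logp / len(text))

# Grid search for the dynamic interpolation coefficient.
lam = min((l / 100 for l in range(101)), key=lambda l: perplexity(l, held_out))
print(f"lambda={lam:.2f}, held-out perplexity={perplexity(lam, held_out):.1f}")
```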

Donguibogam-Based Pattern Diagnosis Using Natural Language Processing and Machine Learning (자연어 처리 및 기계학습을 통한 동의보감 기반 한의변증진단 기술 개발)

  • Lee, Seung Hyeon; Jang, Dong Pyo; Sung, Kang Kyung
    • The Journal of Korean Medicine / v.41 no.3 / pp.1-8 / 2020
  • Objectives: This paper aims to investigate Donguibogam-based pattern diagnosis by applying natural language processing and machine learning. Methods: A database was constructed by gathering symptoms and pattern diagnoses from Donguibogam. The symptom sentences were tokenized into nouns, verbs, and adjectives with a natural language processing tool. To feed the symptom sentences into machine learning, a Word2Vec model was built to convert words into numeric vectors. Using pairs of symptom vectors and pattern diagnoses, a pattern prediction model was trained with logistic regression. Results: The Word2Vec model's best performance was obtained by optimizing its primary parameters: the number of iterations, the vector dimension, and the window size. The resulting pattern diagnosis regression model showed 75% accuracy (chance level: 16.7%) for predicting the Six-Qi pattern diagnosis. Conclusions: In this study, we developed a pattern diagnosis prediction model based on the symptoms and pattern diagnoses from Donguibogam. The prediction accuracy could be increased by collecting more data through future expansion to other oriental medicine classics.
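
A minimal sketch of the pipeline with gensim and scikit-learn; the toy symptom tokens and the two stand-in labels are illustrative, and averaging word vectors is one simple way to turn a symptom sentence into a feature vector.

```python
# Sketch: tokenized symptom sentences -> Word2Vec vectors -> averaged
# sentence vector -> logistic regression over pattern labels.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

symptom_sents = [["fever", "thirst", "sweating"],
                 ["chills", "pain", "stiffness"],
                 ["fever", "sweating", "headache"],
                 ["chills", "stiffness", "pallor"]]
labels = ["heat", "cold", "heat", "cold"]  # stand-ins for Six-Qi patterns

w2v = Word2Vec(symptom_sents, vector_size=50, window=3, min_count=1, epochs=100)

def sent_vec(tokens):
    """Average the word vectors of one symptom sentence."""
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

X = np.stack([sent_vec(s) for s in symptom_sents])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict([sent_vec(["fever", "headache", "thirst"])]))
```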

Characteristics analysis of Word Superiority Effect in Korean using Interactive Activation Model (Interactive Activation Model(IAM)을 이용한 한글에서의 Word Superiority Effect(WSE)특성 분석)

  • Park, Chang-Su; Bang, Sung-Yang
    • Annual Conference on Human and Language Technology / 1999.10e / pp.343-350 / 1999
  • This paper proposes a recognition model for Hangul letters that explains the characteristics of the Word Superiority Effect observed in Korean. The proposed model is based on the Interactive Activation Model, which was originally proposed to explain the Word Superiority Effect in English. We first examine how to modify the Interactive Activation Model, designed for English, so that it can be applied to Hangul. We then examine the characteristics of the Word Superiority Effect observed in Korean and how to reflect those characteristics in the original Interactive Activation Model. The validity of the proposed model is verified by implementing the modified Interactive Activation Model on a computer and analyzing the simulation results.
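
A minimal sketch of interactive-activation dynamics in Python; the three-word lexicon, the parameter values, and the update rule are illustrative simplifications of the general model, not the paper's Hangul-specific modification.

```python
# Sketch of Interactive Activation dynamics: letter units excite consistent
# word units, words inhibit each other, and words feed activation back to
# their letters (the source of the word superiority effect).
import numpy as np

lexicon = ["cat", "car", "can"]
positions, alphabet = 3, sorted(set("".join(lexicon)))

def letter_index(pos, ch):
    return pos * len(alphabet) + alphabet.index(ch)

word_act = np.zeros(len(lexicon))
letter_act = np.zeros(positions * len(alphabet))

# Bottom-up input: degraded stimulus "ca?" activates letters at positions 0-1.
for pos, ch in [(0, "c"), (1, "a")]:
    letter_act[letter_index(pos, ch)] = 1.0

for step in range(10):
    # Letter -> word excitation for position-consistent letters.
    bottom_up = np.array([sum(letter_act[letter_index(p, c)]
                              for p, c in enumerate(w)) for w in lexicon])
    # Bottom-up excitation plus word-to-word inhibition.
    word_act = np.clip(word_act + 0.1 * bottom_up
                       - 0.05 * (word_act.sum() - word_act), 0.0, 1.0)
    # Word -> letter top-down feedback.
    for i, w in enumerate(lexicon):
        for p, c in enumerate(w):
            letter_act[letter_index(p, c)] += 0.05 * word_act[i]
    letter_act = np.clip(letter_act, 0.0, 1.0)

print(dict(zip(lexicon, word_act.round(2))))  # word-level support for "ca?"
```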

A Concept Language Model combining Word Sense Information and BERT (의미 정보와 BERT를 결합한 개념 언어 모델)

  • Lee, Ju-Sang; Ock, Cheol-Young
    • Annual Conference on Human and Language Technology / 2019.10a / pp.3-7 / 2019
  • Natural language representation is a way of expressing the information in natural language so that it can be conveyed to a computer. Current natural language representations are not fixed vectors learned once; rather, the vectors change with contextual information. Among such approaches, BERT represents natural language using the encoder of the Transformer model. However, BERT requires long training times and large amounts of data. In this paper, we propose a concept language model that combines sense information with BERT for faster natural language representation learning. As sense information, we abstractly represent each word's part-of-speech information and, for nouns, their semantic-hierarchy information. For the experiments, we compare against the Korean BERT model released by ETRI, training and evaluating both models on named entity recognition. The named entity recognition results of the two models were similar, confirming that sense information can be an important source of information for natural language representation.
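
A minimal sketch of the combination in PyTorch: part-of-speech and semantic-class embeddings are added to token embeddings before a small Transformer encoder. All sizes and tag inventories are illustrative assumptions, not the proposed model or ETRI's BERT.

```python
# Sketch: inject abstract sense features (POS tag, noun semantic class)
# into the input embeddings of a BERT-style encoder.
import torch
import torch.nn as nn

class ConceptLMEmbedding(nn.Module):
    def __init__(self, vocab=8000, n_pos=20, n_sem=100, dim=128):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(n_pos, dim)   # part-of-speech information
        self.sem = nn.Embedding(n_sem, dim)   # noun semantic-hierarchy class
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tokens, pos_tags, sem_classes):
        x = self.tok(tokens) + self.pos(pos_tags) + self.sem(sem_classes)
        return self.encoder(x)  # contextual representations with sense info

m = ConceptLMEmbedding()
out = m(torch.randint(0, 8000, (1, 6)),
        torch.randint(0, 20, (1, 6)),
        torch.randint(0, 100, (1, 6)))
print(out.shape)  # (1, 6, 128)
```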

A study on the didactical application of ChatGPT for mathematical word problem solving (수학 문장제 해결과 관련한 ChatGPT의 교수학적 활용 방안 모색)

  • Kang, Yunji
    • Communications of Mathematical Education / v.38 no.1 / pp.49-67 / 2024
  • Recent interest in the diverse applications of artificial intelligence (AI) language models has highlighted the need to explore their didactical uses in mathematics education. AI language models, capable of natural language processing, show promise in solving mathematical word problems. This study tested the capability of ChatGPT, an AI language model, to solve word problems from elementary school textbooks, and analyzed both the solutions and the errors made. The results showed that the AI language model achieved an accuracy rate of 81.08%, with errors in problem comprehension, equation formulation, and calculation. Based on this analysis of solution processes and error types, the study suggests implications for the didactical application of AI language models in education.
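
A minimal sketch of such a test setup using the OpenAI Python client; the model name and prompt wording are assumptions (the study used ChatGPT interactively), and an API key must be configured in the environment.

```python
# Sketch: send one elementary-school word problem to the model and inspect
# the returned solution for comprehension, equation, and calculation errors.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

problem = ("A class has 24 students. If they form teams of 4, "
           "how many teams are there?")

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # stand-in model name (assumption)
    messages=[{"role": "user",
               "content": f"Solve this word problem step by step: {problem}"}],
)
print(resp.choices[0].message.content)
```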

Two-Path Language Modeling Considering Word Order Structure of Korean (한국어의 어순 구조를 고려한 Two-Path 언어모델링)

  • Shin, Joong-Hwi; Park, Jae-Hyun; Lee, Jung-Tae; Rim, Hae-Chang
    • The Journal of the Acoustical Society of Korea / v.27 no.8 / pp.435-442 / 2008
  • The n-gram model is appropriate for languages such as English, in which word order is grammatically rigid, but it is not suitable for Korean, in which word order is relatively free. Previous work proposed a two-ply HMM that reflected the characteristics of Korean but failed to reflect word-order structures among words. In this paper, we define a new segment unit that combines two words, in order to reflect the word-order characteristics of adjacent words appearing in verbal morphemes. Moreover, we propose a two-path language model that estimates probabilities depending on the context, based on the proposed segment unit. Experimental results show that the proposed two-path language model yields a 25.68% perplexity improvement over previous Korean language models, and reduces perplexity by 94.03% for the prediction of verbal morphemes where words are combined.
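
A minimal sketch of the two-path idea: a word pair ending in a verbal morpheme is scored as one merged segment when a segment-level probability exists, and by the word-level bigram otherwise. All probability tables and the morpheme inventory are illustrative, not the paper's estimates.

```python
# Sketch: choose between the segment path and the word path per transition.
import math

bigram  = {("school", "go"): 0.02, ("go", "PAST"): 0.10}
segment = {("go", "PAST"): 0.25}   # merged two-word segment probabilities
verbal_morphemes = {"PAST"}

def log_prob(words):
    lp = 0.0
    for i in range(1, len(words)):
        pair = (words[i - 1], words[i])
        if words[i] in verbal_morphemes and pair in segment:
            lp += math.log(segment[pair])           # segment path
        else:
            lp += math.log(bigram.get(pair, 1e-6))  # word path
    return lp

words = ["school", "go", "PAST"]
print(math.exp(-log_prob(words) / (len(words) - 1)))  # toy perplexity
```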