• Title/Summary/Keyword: Lexical model

Orthographic and phonological links in Korean lexical processing (한국어 어휘 처리 과정에서 글자 정보와 발음 정보의 연결성)

  • Kim, Jee-Sun; Taft, Marcus
    • Annual Conference on Human and Language Technology / 1995.10a / pp.211-214 / 1995
  • At what level of orthographic representation is phonology linked in the lexicon? Is it at the whole-word level, the syllable level, the letter level, etc.? This question can be addressed by comparing the two scripts used in Korean, logographic Hanmoon and alphabetic/syllabic Hangul, on a task where judgements must be made about the phonology of a visually presented word. Four experiments are reported that use a "homophone decision task" and manipulate the sub-lexical relationship between orthography and phonology in Hanmoon and Hangul, as well as the lexical status of the stimuli. Hangul words showed a much higher error rate in judging whether another word was pronounced identically than both Hangul nonwords and Hanmoon words. It is concluded that the relationship between orthography and phonology in the lexicon differs according to the type of script, owing to the availability of sub-lexical information: the process of making a homophone decision is based on a spread of activation exclusively among lexical entries, from orthography to phonology and back again (called the "Orthography-Phonology-Orthography Rebound" or "OPO Rebound"). The results are explained within a multilevel interactive activation model in which orthographic units are linked to phonological units at each level.
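
A caricature of the proposed OPO Rebound in code: activation spreads from an orthographic lexical entry to its phonological form and rebounds to every orthographic entry sharing that pronunciation, and a homophone is reported if any second entry is activated. This is a minimal sketch of the idea only; the toy lexicon (같이/가치, both pronounced [가치]) is invented and no claim is made about the authors' actual model.

```python
# Toy "OPO Rebound": orthography -> phonology -> orthography spreading,
# restricted to whole lexical entries. The lexicon is invented.
LEXICON = {          # orthographic entry -> phonological form
    "같이": "가치",  # 'together', pronounced [가치]
    "가치": "가치",  # 'value', a homophone with different spelling
    "사람": "사람",  # 'person', no homophone
}

def homophone_decision(word):
    phonology = LEXICON[word]                    # orthography -> phonology
    rebound = [w for w, p in LEXICON.items()     # phonology -> orthography
               if p == phonology and w != word]
    return bool(rebound)                         # did a second entry light up?

print(homophone_decision("같이"))  # True: "가치" rebounds
print(homophone_decision("사람"))  # False: nothing else shares [사람]
```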

Emotion Analysis Using a Bidirectional LSTM for Word Sense Disambiguation (양방향 LSTM을 적용한 단어의미 중의성 해소 감정분석)

  • Ki, Ho-Yeon; Shin, Kyung-shik
    • The Journal of Bigdata / v.5 no.1 / pp.197-208 / 2020
  • Lexical ambiguity means that a word can be interpreted as having two or more meanings, as with homonyms and polysemy, and word sense ambiguity arises in many words that express emotion. Because they project human psychology, these words convey specific and rich contexts, which results in lexical ambiguity. In this study, we propose an emotion classification model that disambiguates word senses using a bidirectional LSTM. It is based on the assumption that if the information of the surrounding context is fully reflected, the problem of lexical ambiguity can be solved and the emotion that a sentence expresses can be identified as a single one. A bidirectional LSTM is frequently used in natural language processing research that requires contextual information, and it is used in this study to learn context. GloVe embeddings form the embedding layer of the model, and its performance was verified against models using plain LSTM and RNN algorithms. Such a framework could contribute to various fields, including marketing, where the emotions of SNS users could be connected to their consumption intentions.
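
The model described above can be approximated in a few lines of Keras. This is a minimal sketch rather than the authors' implementation: the vocabulary size, embedding dimension, class count, and the random `glove_matrix` are placeholder assumptions.

```python
# Minimal sketch of a GloVe + bidirectional LSTM emotion classifier.
# Vocabulary size, dimensions, and class count are illustrative
# assumptions, and glove_matrix stands in for real GloVe vectors.
import numpy as np
from tensorflow.keras import layers, models, initializers

VOCAB_SIZE, EMB_DIM, NUM_CLASSES = 20000, 100, 7
glove_matrix = np.random.rand(VOCAB_SIZE, EMB_DIM)  # stand-in for GloVe

model = models.Sequential([
    # Frozen GloVe embeddings supply the lexical representation.
    layers.Embedding(VOCAB_SIZE, EMB_DIM,
                     embeddings_initializer=initializers.Constant(glove_matrix),
                     trainable=False),
    # Reading both directions gives the context needed to pick the
    # intended sense of an ambiguous emotion word.
    layers.Bidirectional(layers.LSTM(128)),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```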

Entity Matching Method Using Semantic Similarity and Graph Convolutional Network Techniques (의미적 유사성과 그래프 컨볼루션 네트워크 기법을 활용한 엔티티 매칭 방법)

  • Duan, Hongzhou; Lee, Yongju
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.5 / pp.801-808 / 2022
  • Research on how to embed knowledge in large-scale Linked Data and apply neural network models to entity matching is relatively scarce. The most fundamental problem here is that differing labels lead to lexical heterogeneity. In this paper, we propose an extended GCN (Graph Convolutional Network) model that incorporates a re-alignment structure to solve this lexical heterogeneity problem. The proposed model improved performance by 53% and 40%, respectively, over the existing embedding-based MTransE and BootEA models, and by 5.1% over the GCN-based RDGCN model.
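
As a rough illustration of the general technique (a generic GCN entity-matching sketch, not the paper's extended re-alignment model), entity features are propagated over the graph and entities from two knowledge graphs are matched by embedding similarity; every shape and tensor below is an assumed stand-in.

```python
# Toy GCN propagation plus embedding-similarity matching between two
# knowledge graphs. Generic sketch; all shapes/tensors are assumptions.
import torch
import torch.nn.functional as F

def gcn_layer(adj_norm, x, weight):
    # One propagation step: aggregate neighbor features, then transform.
    return F.relu(adj_norm @ x @ weight)

n, d = 100, 64                       # assumed entity count and embedding size
adj_norm = torch.eye(n)              # stand-in for a normalized adjacency matrix
x = torch.randn(n, d)                # initial (e.g., label-based) entity features
w1, w2 = torch.randn(d, d), torch.randn(d, d)

h = gcn_layer(adj_norm, gcn_layer(adj_norm, x, w1), w2)

# Match entities across the two graphs by cosine similarity of embeddings.
h1, h2 = h[:50], h[50:]              # pretend halves come from two KGs
sim = F.normalize(h1, dim=1) @ F.normalize(h2, dim=1).T
matches = sim.argmax(dim=1)          # best KG2 candidate for each KG1 entity
```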

Exploring the feasibility of fine-tuning large-scale speech recognition models for domain-specific applications: A case study on Whisper model and KsponSpeech dataset

  • Jungwon Chang; Hosung Nam
    • Phonetics and Speech Sciences / v.15 no.3 / pp.83-88 / 2023
  • This study investigates the fine-tuning of large-scale Automatic Speech Recognition (ASR) models, specifically OpenAI's Whisper model, for domain-specific applications using the KsponSpeech dataset. The primary research questions address the effectiveness of targeted lexical item emphasis during fine-tuning, its impact on domain-specific performance, and whether the fine-tuned model can maintain generalization capabilities across different languages and environments. Experiments were conducted using two fine-tuning datasets: Set A, a small subset emphasizing specific lexical items, and Set B, consisting of the entire KsponSpeech dataset. Results showed that fine-tuning with targeted lexical items increased recognition accuracy and improved domain-specific performance, with generalization capabilities maintained when fine-tuned with a smaller dataset. For noisier environments, a trade-off between specificity and generalization capabilities was observed. This study highlights the potential of fine-tuning using minimal domain-specific data to achieve satisfactory results, emphasizing the importance of balancing specialization and generalization for ASR models. Future research could explore different fine-tuning strategies and novel technologies such as prompting to further enhance large-scale ASR models' domain-specific performance.
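
A typical fine-tuning setup along these lines uses the Hugging Face transformers library. The sketch below is a minimal skeleton under assumed settings: the model size, hyperparameters, output path, and the preprocessed `train_set` are all placeholders, not the authors' actual configuration.

```python
# Minimal Whisper fine-tuning skeleton with Hugging Face transformers.
# Model size, hyperparameters, and the dataset are assumptions.
from transformers import (WhisperForConditionalGeneration, WhisperProcessor,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

processor = WhisperProcessor.from_pretrained("openai/whisper-small",
                                             language="korean", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

args = Seq2SeqTrainingArguments(
    output_dir="whisper-ksponspeech",   # hypothetical output path
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    max_steps=4000,
    fp16=True,
)

# Replace with KsponSpeech utterances already mapped to input features and
# token labels (e.g., via processor.feature_extractor and processor.tokenizer).
train_set = None
trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=train_set)
if train_set is not None:
    trainer.train()
```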

A Hidden Markov Model Imbedding Multiword Units for Part-of-Speech Tagging

  • Kim, Jae-Hoon; Jungyun Seo
    • Journal of Electrical Engineering and Information Science / v.2 no.6 / pp.7-13 / 1997
  • Morphological analysis of Korean is known to be a very complicated problem; in particular, the degree of part-of-speech (POS) ambiguity is much higher than in English. Many researchers have tried to solve the POS tagging problem with a hidden Markov model (HMM) and have shown around 95% accuracy. However, the lack of lexical information makes it difficult for an HMM-based POS tagger to improve its performance further. To alleviate this burden, this paper proposes a method for incorporating multiword units, one type of lexical information, into a hidden Markov model for POS tagging, along with a method for extracting multiword units from a POS-tagged corpus. In this paper, a multiword unit is defined as a unit consisting of more than one word. We found that these multiword units are the major source of POS tagging errors. Our experiment shows that the error reduction rate of the proposed method is about 13%.
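
The idea of folding multiword units into an HMM tagger can be sketched as a preprocessing step: frequent word sequences are merged into single lexicon entries before ordinary Viterbi decoding. The unit table, corpus, and probabilities below are invented for illustration and are not the paper's extraction method.

```python
# Sketch: treat multiword units as single tokens, then tag with a
# standard HMM/Viterbi decoder. Units and probabilities are invented.
MULTIWORD_UNITS = {("as", "well", "as"): "as_well_as"}  # hypothetical unit

def merge_units(tokens):
    """Greedily replace known multiword units with a single token."""
    out, i = [], 0
    while i < len(tokens):
        for unit, merged in MULTIWORD_UNITS.items():
            if tuple(tokens[i:i + len(unit)]) == unit:
                out.append(merged)
                i += len(unit)
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

def viterbi(tokens, states, start_p, trans_p, emit_p):
    """Standard Viterbi decoding over the (merged) token sequence."""
    V = [{s: (start_p[s] * emit_p[s].get(tokens[0], 1e-9), None) for s in states}]
    for t in range(1, len(tokens)):
        V.append({})
        for s in states:
            best = max(states, key=lambda p: V[t-1][p][0] * trans_p[p][s])
            V[t][s] = (V[t-1][best][0] * trans_p[best][s]
                       * emit_p[s].get(tokens[t], 1e-9), best)
    tags = [max(V[-1], key=lambda s: V[-1][s][0])]   # best final state
    for t in range(len(tokens) - 1, 0, -1):          # follow backpointers
        tags.append(V[t][tags[-1]][1])
    return list(reversed(tags))

states = ["N", "V"]
start = {"N": 0.6, "V": 0.4}
trans = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}}
emit = {"N": {"time": 0.5, "flies": 0.1}, "V": {"time": 0.1, "flies": 0.5}}
print(viterbi(merge_units(["time", "flies"]), states, start, trans, emit))
```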

Morpheme-based Korean broadcast news transcription (형태소 기반의 한국어 방송뉴스 인식)

  • Park Young-Hee; Ahn Dong-Hoon; Chung Minhwa
    • Proceedings of the KSPS conference / 2002.11a / pp.123-126 / 2002
  • In this paper, we describe our LVCSR system for Korean broadcast news transcription. The main focus is to find the most suitable morpheme-based lexical model for Korean broadcast news recognition, in order to deal with the inflectional flexibility of Korean. There are trade-offs between lexicon size and lexical coverage, and between the length of the lexical units and the word error rate (WER). In our system, we analyzed the training corpus to obtain a small 24k-morpheme lexicon with 98.8% coverage. The lexicon is then optimized by combining morphemes using training-corpus statistics under a monosyllable constraint or a maximum-length constraint. In experiments, our system reduced the proportion of monosyllabic morphemes in the lexicon from 52% to 29% and obtained a WER of 13.24% for anchor speech and 24.97% for reporter speech.
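
The lexicon optimization step can be pictured as iterative merging: count adjacent morpheme pairs in the training corpus and join frequent pairs that contain a monosyllable, producing longer lexical units. This is a simplified sketch; the corpus, the syllable test, and the threshold are assumptions.

```python
# Sketch of frequency-driven morpheme merging under a monosyllable
# constraint. Corpus, syllable test, and threshold are stand-ins.
from collections import Counter

def is_monosyllable(morph):
    # Stand-in test: one Hangul syllable block == one character.
    return len(morph) == 1

def merge_pass(corpus_sents, min_count=2):
    """One merge pass: join frequent adjacent pairs containing a monosyllable."""
    pairs = Counter()
    for sent in corpus_sents:
        for a, b in zip(sent, sent[1:]):
            if is_monosyllable(a) or is_monosyllable(b):
                pairs[(a, b)] += 1
    merges = {p for p, c in pairs.items() if c >= min_count}
    merged_corpus = []
    for sent in corpus_sents:
        out, i = [], 0
        while i < len(sent):
            if i + 1 < len(sent) and (sent[i], sent[i + 1]) in merges:
                out.append(sent[i] + sent[i + 1])  # new, longer lexical unit
                i += 2
            else:
                out.append(sent[i])
                i += 1
        merged_corpus.append(out)
    return merged_corpus

corpus = [["하", "았", "다"], ["하", "았", "어"]]
print(merge_pass(corpus))  # [['하았', '다'], ['하았', '어']]
```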

An Automatic Korean Lexical Acquisition System (한국어 어휘자동획득 시스템)

  • Lim, Heui-Seok
    • Journal of the Korea Academia-Industrial Cooperation Society / v.8 no.5 / pp.1087-1091 / 2007
  • This paper proposes an automatic Korean lexical acquisition system that reflects the characteristics of human language acquisition. The proposed system automatically builds two kinds of lexicons, a full-form lexicon and a decomposition lexicon, using a Korean corpus as its input. In experiments on the Korean Sejong corpus of 10 million Eojeols, the system acquired 2,097 full-form Eojeols and 3,488 morphemes. The accumulated frequency of the acquired full-form Eojeols covers 38.63% of the input corpus, and the accuracy of morpheme acquisition is 99.87%.
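
The full-form side of the acquisition can be illustrated with a simple frequency criterion: Eojeols seen often enough are stored whole, and coverage is the share of corpus tokens those entries account for. The threshold and toy corpus below are assumptions, not the paper's algorithm.

```python
# Toy illustration of full-form lexical acquisition: store frequent
# Eojeols whole and measure corpus coverage. Threshold and corpus are
# assumptions for illustration.
from collections import Counter

def acquire_full_forms(eojeols, min_count=5):
    freq = Counter(eojeols)
    full_form_lexicon = {w for w, c in freq.items() if c >= min_count}
    covered = sum(c for w, c in freq.items() if w in full_form_lexicon)
    coverage = covered / len(eojeols)   # share of corpus tokens covered
    return full_form_lexicon, coverage

corpus = ["학교에", "갔다", "학교에", "학교에", "갔다", "오늘"] * 3
lexicon, cov = acquire_full_forms(corpus, min_count=5)
print(lexicon, f"coverage={cov:.2%}")
```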

Perceptual weighting on English lexical stress by Korean learners of English

  • Goun Lee
    • Phonetics and Speech Sciences / v.14 no.4 / pp.19-24 / 2022
  • This study examined which acoustic cues Korean learners of English give weight to in perceiving English lexical stress. We manipulated segmental and suprasegmental cues in 5 steps in the first and second syllables of the English stress minimal pair "object". A total of 27 subjects (14 native speakers of English and 13 Korean L2 learners) participated in an English stress judgment task. The results revealed that native Korean listeners used the F0 and intensity cues in identifying English stress and, like native English listeners, weighted vowel quality most strongly. These results indicate that Korean learners' experience with these cues in L1 prosody can help them attend to the same cues in L2 perception. However, L2 learners' perceptual attention is not entirely predicted by their linguistic experience with specific acoustic cues in their native language.
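
Perceptual cue weights in this kind of identification task are commonly estimated with a logistic regression whose coefficient magnitudes index each cue's weight. The sketch below uses fabricated responses and is a generic illustration of that analysis, not the study's actual data or statistics.

```python
# Generic cue-weighting analysis sketch: regress stress judgments on
# the manipulated cue steps; coefficient magnitude indexes cue weight.
# The listener responses here are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Manipulated cue steps (standardized): F0, intensity, vowel quality.
X = rng.normal(size=(n, 3))
# Fabricated listener: vowel quality weighted most, then F0, then intensity.
logit = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 2.0 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # "first-syllable stress" responses

model = LogisticRegression().fit(X, y)
for cue, w in zip(["F0", "intensity", "vowel quality"], model.coef_[0]):
    print(f"{cue}: weight {w:.2f}")
```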

Improvement and Evaluation of the Korean Large Vocabulary Continuous Speech Recognition Platform (ECHOS) (한국어 음성인식 플랫폼(ECHOS)의 개선 및 평가)

  • Kwon, Suk-Bong; Yun, Sung-Rack; Jang, Gyu-Cheol; Kim, Yong-Rae; Kim, Bong-Wan; Kim, Hoi-Rin; Yoo, Chang-Dong; Lee, Yong-Ju; Kwon, Oh-Wook
    • MALSORI / no.59 / pp.53-68 / 2006
  • We report the evaluation results of the Korean speech recognition platform called ECHOS. The platform has an object-oriented, reusable architecture so that researchers can easily evaluate their own algorithms. It contains all the modules needed to build a large-vocabulary speech recognizer: noise reduction, end-point detection, feature extraction, hidden Markov model (HMM)-based acoustic modeling, cross-word modeling, n-gram language modeling, n-best search, word graph generation, and Korean-specific language processing. The platform supports both lexical search trees and finite-state networks. It performs word-dependent n-best search with a bigram in the forward search stage and rescores the lattice with a trigram in the backward stage. In an 8,000-word continuous speech recognition task, the platform with a lexical tree produces 40% more word errors but takes 50% less recognition time than the HTK platform with a flat lexicon. ECHOS reduces recognition errors by 40% through the incorporation of cross-word modeling. With the number of Gaussian mixtures increased to 16, it yields word accuracy comparable to that of Julius, an earlier lexical-tree-based platform.
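
The lexical search tree mentioned above is a prefix tree over pronunciations, so words sharing initial phones share nodes (and hence search states). A minimal construction is sketched below with a made-up mini-lexicon.

```python
# Minimal lexical prefix tree (pronunciation trie): words that share
# initial phones share nodes, which is what speeds up the search.
# The mini-lexicon is made up for illustration.
def build_lexical_tree(lexicon):
    root = {}
    for word, phones in lexicon.items():
        node = root
        for phone in phones:
            node = node.setdefault(phone, {})
        node["#word"] = word            # mark a word end at this node
    return root

lexicon = {
    "speech": ["s", "p", "iy", "ch"],
    "speed":  ["s", "p", "iy", "d"],   # shares the s-p-iy prefix with "speech"
    "model":  ["m", "ow", "d", "ax", "l"],
}
tree = build_lexical_tree(lexicon)
# "speech" and "speed" now share three tree nodes instead of duplicating them.
```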

A Study on Semantic Based Indexing and Fuzzy Relevance Model (의미기반 인덱스 추출과 퍼지검색 모델에 관한 연구)

  • Kang, Bo-Yeong; Kim, Dae-Won; Gu, Sang-Ok; Lee, Sang-Jo
    • Proceedings of the Korean Information Science Society Conference / 2002.04b / pp.238-240 / 2002
  • If an information retrieval (IR) system comprehends the semantic content of documents and knows the preferences of users, it can search information on the Internet more effectively and improve IR performance. We therefore propose an IR model that combines semantic-based indexing with a fuzzy relevance model. In addition to the statistical approach, we adopt a semantic approach to indexing, lexical chains, on the assumption that it improves the performance of index term extraction. Furthermore, we combine the semantic-based indexing with a fuzzy model, which determines the exact relevance between user preferences and index terms. The proposed system works as follows: first, it indexes documents using an efficient index term extraction method based on lexical chains; then, when a user retrieves information from the indexed document collection, the extended IR model calculates and ranks the relevance among the user query, user preferences, and index terms. When each module, semantic-based indexing and the extended fuzzy model, was evaluated separately, it gave noticeable results. The combination of these modules is expected to improve information retrieval performance.
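
A lexical chain groups semantically related terms in a document, and the strongest chains supply candidate index terms. The sketch below is a toy version: the relatedness table stands in for a real thesaurus such as WordNet, and the chain-strength measure is simply chain length.

```python
# Toy lexical-chain indexing: group related terms into chains and take
# index terms from the strongest chain. The relatedness table is a
# stand-in for a real thesaurus such as WordNet.
RELATED = {
    ("car", "vehicle"), ("vehicle", "truck"), ("car", "truck"),
    ("bank", "money"),
}

def related(a, b):
    return a == b or (a, b) in RELATED or (b, a) in RELATED

def build_chains(terms):
    chains = []
    for term in terms:
        for chain in chains:
            if any(related(term, t) for t in chain):
                chain.append(term)
                break
        else:
            chains.append([term])   # start a new chain
    return chains

doc = ["car", "vehicle", "truck", "money", "bank", "car"]
chains = build_chains(doc)
index_terms = max(chains, key=len)  # longest chain = strongest, by assumption
print(chains, "->", index_terms)
```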
