• Title/Summary/Keyword: language models

Korean LVCSR for Broadcast News Speech

  • Lee, Gang-Seong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.2E
    • /
    • pp.3-8
    • /
    • 2001
  • In this paper, we examine a Korean large vocabulary continuous speech recognition (LVCSR) system for broadcast news speech. A combined vowel-and-implosive unit is included in the phone set together with other short phone units in order to obtain a longer-unit acoustic model, and its effect is compared with conventional phone units. The dictionary units for language processing are automatically extracted from eojeols appearing in the transcriptions. Triphone models are used for acoustic modeling and a trigram model is used for language modeling. Among the three major speaker groups in news broadcasts (anchors, journalists, and other people being interviewed), the speech of anchors and journalists, which has a lot of noise, was used for testing and recognition.

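For readers unfamiliar with the trigram language modelling mentioned in this abstract, the following is a minimal sketch of trigram estimation; the whitespace eojeol tokenisation and the add-one smoothing are assumptions made for illustration, not details taken from the paper.

```python
from collections import defaultdict

def train_trigram_lm(sentences):
    """Count trigrams and their bigram contexts from tokenised sentences."""
    trigram_counts = defaultdict(int)
    context_counts = defaultdict(int)
    vocab = set()
    for tokens in sentences:
        padded = ["<s>", "<s>"] + tokens + ["</s>"]
        vocab.update(padded)
        for i in range(2, len(padded)):
            context = (padded[i - 2], padded[i - 1])
            trigram_counts[context + (padded[i],)] += 1
            context_counts[context] += 1
    return trigram_counts, context_counts, vocab

def trigram_prob(word, context, trigram_counts, context_counts, vocab):
    """P(word | context) with add-one smoothing (an assumed smoothing scheme)."""
    return (trigram_counts[context + (word,)] + 1) / (context_counts[context] + len(vocab))

# Toy usage: estimate P("뉴스" | "오늘", "의") from a single eojeol-split sentence.
tri, ctx, vocab = train_trigram_lm([["오늘", "의", "뉴스", "입니다"]])
print(trigram_prob("뉴스", ("오늘", "의"), tri, ctx, vocab))
```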

Some (Re)views on ELT Research: With Reference to World Englishes and/or English Lingua Franca

  • Cho, Myongwon
    • Korean Journal of English Language and Linguistics
    • /
    • v.2 no.1
    • /
    • pp.123-147
    • /
    • 2002
  • As far as recent ELT research is concerned, there seem to have been no hot 'theoretical' issues, but rather 'practical' ones in general: e.g., learners and learning, components of proficiency, correlates of L2 learning, etc. This paper focuses on the theme given above, with special reference to the sub-title: specifically, 1) World English, world Englishes and the world's lingua franca; 2) ENL, ESL and EFL; 3) Grammars, style manuals, dictionaries and media; 4) Pronunciation models: RP, the BBC model and General American, Network Standard; 5) Lexical and grammatical variations and discourse grammars; 6) Beliefs and subjective theories in foreign language research; 7) The dilemma among radical, canonical and eclectic views. In conclusion, the author offers a modest proposal: we need to appeal to our own experience, intention, feeling and purpose, that is, our identity, to express “our own selves” in our contexts toward the world anywhere, even if we do not sound fully authentic, so long as we produce it plausibly well. It is time for us (with our ethno-cultural autonomy) to be complementary to and parallel with native speakers' linguistic-cultural authenticity in terms of the broadest mutual understanding.


Reranking Search Results for Mathematical Equation Retrieval Using Topic Models (토픽 모델을 이용한 수학식 검색 결과 재랭킹)

  • Yang, Seon;Ko, Youngjoong
    • Annual Conference on Human and Language Technology
    • /
    • 2013.10a
    • /
    • pp.77-81
    • /
    • 2013
  • This paper investigates two topics. The first is mathematical equation retrieval. A large amount of high-quality equation data is stored on the web in markup-language form, and research on exploiting it is actively under way. In this study, we retrieve equation data stored in MathML (Mathematical Markup Language) using ordinary natural-language queries. The second topic is improving retrieval performance with a topic model. We first convert the equation data into plain natural-language sentences and run retrieval with the Indri system, then rerank the results by applying scores computed in advance with the topic model; this improved performance by an average of 5% in terms of MRR.

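The reranking step described above can be pictured as a linear interpolation of the Indri retrieval score with a precomputed topic-model score, as in the hedged sketch below; the interpolation weight alpha and the score scales are assumptions, since the abstract does not state how the two scores are combined.

```python
def rerank(results, topic_scores, alpha=0.7):
    """Rerank (doc_id, retrieval_score) pairs by interpolating with a topic-model score.

    alpha is an assumed interpolation weight, not a value from the paper.
    """
    def combined(item):
        doc_id, retrieval_score = item
        return alpha * retrieval_score + (1 - alpha) * topic_scores.get(doc_id, 0.0)
    return sorted(results, key=combined, reverse=True)

# Toy usage with made-up scores for three retrieved equations.
indri_results = [("eq1", 2.1), ("eq2", 1.8), ("eq3", 1.7)]
topic_scores = {"eq1": 0.1, "eq2": 0.9, "eq3": 0.6}
print(rerank(indri_results, topic_scores))
```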

Language Model Adaptation for Conversational Speech Recognition (대화체 연속음성 인식을 위한 언어모델 적응)

  • Park Young-Hee;Chung Minhwa
    • Proceedings of the KSPS conference
    • /
    • 2003.05a
    • /
    • pp.83-86
    • /
    • 2003
  • This paper presents our style-based language model adaptation for Korean conversational speech recognition. Korean conversational speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction, compared with written text corpora. For style-based language model adaptation, we report two approaches. Our approaches focus on improving the estimation of domain-dependent n-gram models by relevance weighting of out-of-domain text data, where style is represented by n-gram based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of the neighboring words. The best result reduces the word error rate by 6.5% absolute and shows that n-gram based relevance weighting reflects style differences well and that disfluencies are good predictors.

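A rough sketch of n-gram based tf*idf relevance weighting in the spirit of the abstract above: out-of-domain documents are weighted by cosine similarity to an in-domain (conversational-style) centroid. The use of scikit-learn and the unigram-plus-bigram range are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def relevance_weights(in_domain_docs, out_of_domain_docs, ngram_range=(1, 2)):
    """Weight each out-of-domain document by n-gram tf*idf similarity to the in-domain style."""
    vectorizer = TfidfVectorizer(ngram_range=ngram_range)
    matrix = vectorizer.fit_transform(in_domain_docs + out_of_domain_docs)
    in_centroid = np.asarray(matrix[: len(in_domain_docs)].mean(axis=0))
    out_vectors = matrix[len(in_domain_docs):]
    return cosine_similarity(out_vectors, in_centroid).ravel()

# Toy usage: the second out-of-domain sentence should receive the higher weight
# because it looks more conversational in style.
print(relevance_weights(
    ["음 그러니까 어 그게요", "네 네 맞아요 어"],
    ["정부는 오늘 새 정책을 발표했다", "어 그러니까 그게 맞아요"],
))
```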

Computerization and Application of the Korean Standard Pronunciation Rules (한국어 표준발음법의 전산화 및 응용)

  • 이계영;임재걸
    • Language and Information
    • /
    • v.7 no.2
    • /
    • pp.81-101
    • /
    • 2003
  • This paper introduces a computerized version of the Korean Standard Pronunciation Rules that can be used in speech engineering systems such as Korean speech synthesis and recognition systems. For this purpose, we build Petri net models for each item of the Standard Pronunciation Rules and then integrate them into a sound conversion table. The reverse of the Korean Standard Pronunciation Rules regulates the way sounds are matched to grammatically correct written characters. This paper presents not only the sound conversion table but also the character conversion table obtained by reversely converting the sound conversion table. Making use of these tables, we have implemented a Korean character-to-sound conversion system and a Korean sound-to-character conversion system, and tested them with various data sets reflecting all the items of the Standard Pronunciation Rules to verify the soundness and completeness of our tables. The test results show that the tables improve processing speed in addition to ensuring soundness and completeness.

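One way to picture a sound conversion table lookup is at the jamo level, as in the toy sketch below; the rule entries shown cover only nasalisation and are illustrative assumptions, not the paper's actual table, which encodes every item of the Standard Pronunciation Rules via Petri net models.

```python
# Toy sound-conversion table keyed by
# (final consonant of syllable i, initial consonant of syllable i+1).
SOUND_TABLE = {
    ("ㄱ", "ㅁ"): ("ㅇ", "ㅁ"),  # 국물 is pronounced 궁물
    ("ㅂ", "ㄴ"): ("ㅁ", "ㄴ"),  # 합니다 is pronounced 함니다
}

def convert_pair(final_jamo, initial_jamo, table=SOUND_TABLE):
    """Return the pronounced (final, initial) pair, falling back to the written pair."""
    return table.get((final_jamo, initial_jamo), (final_jamo, initial_jamo))

print(convert_pair("ㄱ", "ㅁ"))  # ('ㅇ', 'ㅁ')
print(convert_pair("ㄴ", "ㄷ"))  # unchanged: ('ㄴ', 'ㄷ')
```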

A Survey of Arabic Thematic Sentiment Analysis Based on Topic Modeling

  • Basabain, Seham
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.9
    • /
    • pp.155-162
    • /
    • 2021
  • The expansion of the world wide web has led to a huge amount of user-generated content on different forums and social media platforms. These rich data resources offer the opportunity to reflect and track changing public sentiment and help to develop proactive reaction strategies for decision and policy makers. Analysis of public emotions and opinions towards events and sentiment trends can help to address unforeseen areas of public concern. The need for systems that analyze these sentiments and the topics behind them has grown tremendously. While most existing works reported in the literature have been carried out in English, this paper, in contrast, aims to review recent research in the Arabic language in the field of thematic sentiment analysis and the techniques used to accomplish this task. The findings show that the prevailing techniques in Arabic topic-based sentiment analysis are based on traditional approaches and machine learning methods. In addition, comparatively few recent studies have utilized deep learning approaches to build high-performance models.

Progress, challenges, and future perspectives in genetic researches of stuttering

  • Kang, Changsoo
    • Journal of Genetic Medicine
    • /
    • v.18 no.2
    • /
    • pp.75-82
    • /
    • 2021
  • Speech and language functions are highly cognitive and human-specific features. The underlying causes of normal speech and language function are believed to reside in the human brain. Developmental persistent stuttering, a speech and language disorder, has been regarded as the most challenging disorder in which to determine genetic causes because of the high percentage of spontaneous recovery among stutterers. This mysterious characteristic hinders speech pathologists from discriminating recovered stutterers from completely normal individuals. Over the last several decades, several genetic approaches have been used to identify the genetic causes of stuttering, and remarkable progress has been made in genome-wide linkage analysis followed by gene sequencing. So far, four genes, namely GNPTAB, GNPTG, NAGPA, and AP4E1, are known to cause stuttering. Furthermore, the generation of mouse models of stuttering and morphometry analysis have created new ways for researchers to identify brain regions that participate in human speech function and to understand the neuropathology of stuttering. In this review, we aimed to investigate previous progress, challenges, and future perspectives in understanding the genetics and neuropathology underlying persistent developmental stuttering.

Multi-task learning with contextual hierarchical attention for Korean coreference resolution

  • Cheoneum Park
    • ETRI Journal
    • /
    • v.45 no.1
    • /
    • pp.93-104
    • /
    • 2023
  • Coreference resolution is a task in discourse analysis that links several headwords used in a document. We suggest pointer network-based coreference resolution for Korean using multi-task learning (MTL) with an attention mechanism over a hierarchical structure. As Korean is a head-final language, the head can easily be found. Our model learns the distribution by referring to the same entity position and utilizes a pointer network to conduct coreference resolution depending on the input headword. As the input is a document, the input sequence is very long. Thus, the core idea is to learn the word- and sentence-level distributions in parallel with MTL, while using a shared representation to address the long-sequence problem. The suggested technique generates word representations for Korean based on contextual information using pre-trained language models for Korean. Under the same experimental conditions, our model performed roughly 1.8% better on CoNLL F1 than previous research without a hierarchical structure.
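As a rough illustration of the pointer-network component, the sketch below scores every encoded input position against a headword query with additive attention and points at the most probable antecedent position; the dimensions, module layout, and scoring function are assumptions made for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PointerHead(nn.Module):
    """Additive-attention pointer: a distribution over input positions for one headword query."""

    def __init__(self, hidden_size):
        super().__init__()
        self.w_enc = nn.Linear(hidden_size, hidden_size)
        self.w_query = nn.Linear(hidden_size, hidden_size)
        self.v = nn.Linear(hidden_size, 1)

    def forward(self, encoder_states, query):
        # encoder_states: (seq_len, hidden), query: (hidden,)
        scores = self.v(torch.tanh(self.w_enc(encoder_states) + self.w_query(query))).squeeze(-1)
        return torch.softmax(scores, dim=-1)  # probability of each position being the antecedent

# Toy usage with random contextual vectors standing in for a pre-trained Korean LM's outputs.
states = torch.randn(50, 768)
headword = torch.randn(768)
pointer = PointerHead(768)
print(pointer(states, headword).argmax().item())  # index of the predicted antecedent position
```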

Attention Patterns and Semantics of Korean Language Models (한국어 언어모델 주의집중 패턴과 의미적 대표성)

  • Yang, Kisu;Jang, Yoonna;Lim, Jungwoo;Park, Chanjun;Jang, Hwanseok;Lim, Heuiseok
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.605-608
    • /
    • 2021
  • KoBERT enjoys a prominent position in Korean natural language processing thanks to its strong performance and extensibility. However, much about the computations and patterns inside the model remains unexplained even as it is widely used. In this study, we classify the patterns of self-attention, the core component of KoBERT, into four types and highlight the phenomenon of attention weight concentrating on special tokens. We extract the attention scores of the special tokens layer by layer to show how they change, and we interpret the role of these tokens in connection with the attention mechanism. To support this, we conduct experiments on Korean classification tasks and, together with a quantitative analysis, evaluate the semantic representativeness of the special tokens.

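Layer-wise attention scores for special tokens, as analysed above, can be extracted along the following lines with HuggingFace Transformers; the checkpoint named here is only a stand-in for KoBERT, and the head-averaging scheme is an assumption for illustration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# The checkpoint below is only a stand-in; substitute an actual KoBERT checkpoint as appropriate.
MODEL_NAME = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True)

inputs = tokenizer("한국어 언어모델의 주의집중 패턴을 살펴본다.", return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions  # one (1, heads, seq, seq) tensor per layer

cls_index = 0  # [CLS] is the first token in BERT-style inputs
for layer, att in enumerate(attentions):
    # Average over heads and source positions: how much attention [CLS] receives in this layer.
    cls_received = att[0].mean(dim=0)[:, cls_index].mean().item()
    print(f"layer {layer:2d}: mean attention to [CLS] = {cls_received:.3f}")
```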

Semi-Supervised Data Augmentation Method for Korean Fact Verification Using Generative Language Models (자연어 생성 모델을 이용한 준지도 학습 기반 한국어 사실 확인 자료 구축)

  • Jeong, Jae-Hwan;Jeon, Dong-Hyeon;Kim, Seon-Hun;Gang, In-Ho
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.105-111
    • /
    • 2021
  • Research on Korean fact verification has been hampered by the lack of training data. This paper proposes a method for building a Korean fact verification dataset with a natural language generation model, starting from a manually constructed seed set. We experiment with both generating a claim from a given piece of evidence (E2C) and generating evidence from a given claim (C2E). A fact verification classifier trained on the original data augmented with either of these generated sets achieved higher performance on the evaluation set than classifiers trained only on the original data or on data obtained by machine-translating the English fact verification dataset FEVER into Korean. Moreover, the C2E method achieved high performance using only an existing natural language inference dataset and HyperCLOVA few-shot examples, without any manually constructed data, confirming the possibility of building fact verification data in an unsupervised manner.

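A minimal sketch of the E2C direction (generating a claim from a piece of evidence) via few-shot prompting of a generative language model; the checkpoint, prompt format, and decoding settings below are illustrative assumptions, and the paper itself uses HyperCLOVA few-shot examples rather than this model.

```python
from transformers import pipeline

# Stand-in Korean generative checkpoint; the paper uses HyperCLOVA few-shot prompting.
generator = pipeline("text-generation", model="skt/kogpt2-base-v2")

FEW_SHOT = "근거: 세종대왕은 1443년에 훈민정음을 창제했다.\n주장: 훈민정음은 세종대왕이 만들었다.\n\n"

def evidence_to_claim(evidence):
    """Generate a claim (E2C) from a piece of evidence via few-shot prompting."""
    prompt = FEW_SHOT + f"근거: {evidence}\n주장:"
    output = generator(prompt, max_new_tokens=30, do_sample=True, top_p=0.9)[0]["generated_text"]
    return output[len(prompt):].strip().split("\n")[0]

print(evidence_to_claim("한글날은 10월 9일이다."))
```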