• Title/Abstract/Keywords: language model adaptation

Search results: 42

정보검색 기법과 동적 보간 계수를 이용한 N-gram 언어모델의 적응 (N-gram Adaptation Using Information Retrieval and Dynamic Interpolation Coefficient)

  • 최준기;오영환
    • 대한음성학회지:말소리 / No. 56 / pp.207-223 / 2005
  • The goal of language model adaptation is to improve the background language model with a relatively small adaptation corpus. This study presents a language model adaptation technique for the case where no additional text data for adaptation exist. We propose an information retrieval (IR) technique with N-gram language modeling to collect the adaptation corpus from the baseline text data. We also propose a dynamic language model interpolation coefficient to combine the background language model and the adapted language model. The interpolation coefficient is estimated from the word hypotheses obtained by segmenting the input speech data reserved as held-out validation data. This allows the final adapted model to consistently improve on the performance of the background model. The proposed approach reduces the word error rate by 13.6% relative to the baseline 4-gram model on two hours of broadcast news speech recognition.
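
A minimal Python sketch of the dynamic interpolation described above; the EM-style coefficient re-estimation and the `heldout_probs` format are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch: combine background and adapted n-gram LMs with a dynamic
# interpolation coefficient estimated on held-out word hypotheses.

def interpolate(p_bg, p_adapt, lam):
    """P(w|h) = lam * P_adapt(w|h) + (1 - lam) * P_bg(w|h)."""
    return lam * p_adapt + (1.0 - lam) * p_bg

def estimate_lambda(heldout_probs, iters=20):
    """EM-style re-estimation of the mixture weight.

    heldout_probs: list of (p_bg, p_adapt) pairs, the two models'
    probabilities for each held-out hypothesis word (assumed format).
    """
    lam = 0.5
    for _ in range(iters):
        # E-step: responsibility of the adapted LM for each word.
        resp = [(lam * pa) / interpolate(pb, pa, lam)
                for pb, pa in heldout_probs]
        # M-step: the new weight is the mean responsibility.
        lam = sum(resp) / len(resp)
    return lam
```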


Language Model Adaptation Based on Topic Probability of Latent Dirichlet Allocation

  • Jeon, Hyung-Bae;Lee, Soo-Young
    • ETRI Journal / Vol. 38, No. 3 / pp.487-493 / 2016
  • Two new methods are proposed for the unsupervised adaptation of a language model (LM) with a single sentence for automatic transcription tasks. In the training phase, training documents are clustered by latent Dirichlet allocation (LDA), and a domain-specific LM is then trained for each cluster. In the test phase, an adapted LM is formed as a linear mixture of the trained domain-specific LMs. Unlike previous adaptation methods, the proposed methods fully utilize the trained LDA model to estimate the weight values assigned to the domain-specific LMs; the clustering and weight estimation therefore rely on one consistent, reliable LDA model. In continuous speech recognition benchmark tests, the proposed methods outperform other unsupervised LM adaptation methods based on latent semantic analysis, non-negative matrix factorization, and LDA with n-gram counting.
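
A minimal sketch of the test-phase mixture, assuming a trained LDA inference function and per-topic LMs are already available; the interfaces are illustrative, not the paper's API.

```python
import numpy as np

def adapt_lm(sentence_bow, lda_topic_posterior, topic_lms):
    """Weight each domain-specific LM by the LDA topic posterior
    inferred from the single test sentence.

    sentence_bow        : bag-of-words counts of the test sentence
    lda_topic_posterior : callable, returns p(topic | sentence) as a vector
    topic_lms           : list of callables, lm(word, history) -> probability
    """
    weights = lda_topic_posterior(sentence_bow)   # shape: (num_topics,)

    def p(word, history):
        # Linear mixture of the trained domain-specific LMs.
        return float(np.dot(weights, [lm(word, history) for lm in topic_lms]))

    return p
```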

Style-Specific Language Model Adaptation using TF*IDF Similarity for Korean Conversational Speech Recognition

  • Park, Young-Hee;Chung, Min-Hwa
    • The Journal of the Acoustical Society of Korea / Vol. 23, No. 2E / pp.51-55 / 2004
  • In this paper, we propose a style-specific language model adaptation scheme using n-gram-based tf*idf similarity for Korean spontaneous speech recognition. Korean spontaneous speech exhibits distinctive style-specific characteristics such as filled pauses, word omission, and contraction, which are related to function words and depend on the preceding or following words. To reflect these style-specific characteristics and to overcome insufficient language model training data, we estimate an in-domain n-gram model by relevance weighting of out-of-domain text data according to their n-gram-based tf*idf similarity, where the in-domain language model includes a disfluency model. Recognition results show that n-gram-based tf*idf similarity weighting effectively reflects the style difference.
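
As an illustration of the relevance-weighting step, the sketch below scores each out-of-domain document by its n-gram tf*idf cosine similarity to the in-domain corpus using scikit-learn; the paper's exact weighting formula is not specified here, so treat this as an assumption.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def relevance_weights(in_domain_docs, out_of_domain_docs, ngram=(1, 3)):
    """Weight each out-of-domain document by its n-gram tf*idf cosine
    similarity to the in-domain corpus centroid; the weights can then
    scale that document's contribution to the adapted n-gram model."""
    vec = TfidfVectorizer(ngram_range=ngram, analyzer="word")
    matrix = vec.fit_transform(in_domain_docs + out_of_domain_docs)
    centroid = np.asarray(matrix[:len(in_domain_docs)].mean(axis=0))
    out_dom = matrix[len(in_domain_docs):]
    return cosine_similarity(out_dom, centroid).ravel()
```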

사용자 적응을 통한 한국 수화 인식 시스템의 개선 (Improvement of Korean Sign Language Recognition System by User Adaptation)

  • 정성훈;박광현;변증남
    • 대한전기학회:학술대회논문집 / 2007 Symposium, Information and Control Section / pp.301-303 / 2007
  • This paper presents user adaptation methods to overcome the limitations of a user-independent model and a user-dependent model in a Korean sign language recognition system. To adapt model parameters for unobserved states in hidden Markov models, we introduce new methods based on motion similarity and on prediction from the adaptation history, achieving faster adaptation and higher recognition rates compared with previous methods.
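
The abstract does not give the exact update rule, so the following is only a loose illustration of the idea: propagate adaptation offsets from observed HMM states to unobserved ones, weighted by a hypothetical motion-similarity function.

```python
import numpy as np

def adapt_unobserved_means(means, observed, similarity):
    """Shift the mean of each unobserved HMM state by the similarity-
    weighted average of the offsets seen in observed states.

    means      : dict state -> (old_mean, new_mean or None), numpy vectors
    observed   : states whose means were re-estimated from adaptation data
    similarity : hypothetical function (state_a, state_b) -> value in [0, 1]
    """
    offsets = {s: means[s][1] - means[s][0] for s in observed}
    adapted = {}
    for s, (old, new) in means.items():
        if new is not None:          # state was observed: keep its estimate
            adapted[s] = new
            continue
        w = np.array([similarity(s, o) for o in observed])
        if w.sum() == 0.0:           # nothing similar: leave unadapted
            adapted[s] = old
            continue
        w = w / w.sum()
        adapted[s] = old + sum(wi * offsets[o] for wi, o in zip(w, observed))
    return adapted
```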


중국 유학생의 한국 대학생활 적응 예측모형 (A Prediction Model on Adaptation to University Life among Chinese International Students in Korea)

  • 림금란;김희경
    • 한국간호교육학회지 / Vol. 17, No. 3 / pp.501-513 / 2011
  • Purpose: On the basis of a theoretical framework combining Roy's adaptation theory and Lazarus and Folkman's stress-appraisal-coping theory, the purpose of this study was to identify factors predicting adaptation to university life among Chinese international students in Korea and to construct a corresponding prediction model. Methods: A questionnaire was used to survey 369 Chinese international students at one university in Korea; the data were analyzed using PASW Statistics 18.0 and LISREL 8.7. Results: The theoretical model explained 75.0% of the variance in adaptation to university life among Chinese international students in Korea. Physical symptoms, loneliness, acculturation stress, and self-efficacy directly affected adaptation to university life. Korean language proficiency indirectly affected adaptation to university life through self-efficacy. Conclusion: The results of this study provide a theoretical basis for future health care in university health centers. To improve the adaptation of Chinese international students to university life in Korea, education and nursing measures that reduce physical symptoms, loneliness, and acculturation stress and that improve Korean language proficiency and self-efficacy are proposed for further research and development.

대화체 연속음성 인식을 위한 언어모델 적응 (Language Model Adaptation for Conversational Speech Recognition)

  • 박영희;정민화
    • 대한음성학회:학술대회논문집 / May 2003 Conference / pp.83-86 / 2003
  • This paper presents our style-based language model adaptation for Korean conversational speech recognition. Compared with written text corpora, Korean conversational speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction. We report two approaches to style-based language model adaptation. Both focus on improving the estimation of domain-dependent n-gram models by relevance weighting of out-of-domain text data, where style is represented by n-gram-based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of the neighboring words. The best result reduces the word error rate by 6.5% absolute, showing that n-gram-based relevance weighting captures the style difference well and that disfluencies are good predictors.
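
One way to realize relevance weighting at the count level is sketched below: each out-of-domain document contributes n-gram counts scaled by its style-similarity weight. This is a sketch under the assumption of whitespace-tokenized text; the paper's actual weighting scheme may differ.

```python
from collections import Counter

def weighted_ngram_counts(docs, weights, n=3):
    """Accumulate n-gram counts, scaling each document's counts by its
    style-similarity weight so stylistically closer text influences
    the adapted model more."""
    counts = Counter()
    for doc, w in zip(docs, weights):
        tokens = doc.split()
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += w
    return counts
```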


Probing Sentence Embeddings in L2 Learners' LSTM Neural Language Models Using Adaptation Learning

  • Kim, Euhee
    • 한국컴퓨터정보학회논문지 / Vol. 27, No. 3 / pp.13-23 / 2022
  • Prasad et al. proposed a syntactic-priming-based probing method for evaluating sentence similarity by adaptation learning of a pre-trained neural L1 Gulordava language model on several types of English relative clause and coordinate clause sentences. In this paper, we apply this probing method to evaluate how an L2 LSTM neural language model, trained on English materials studied by Korean learners of English, embeds sentences with English relative clause or coordinate clause structures. The probing experiments trace the syntactic properties of sentence embedding vector representations using an LSTM language model that is additionally adaptation-trained on top of the pre-trained LSTM language model. The datasets for the probing experiments were constructed automatically using templates that generate the syntactic structures of the sentences. In particular, to classify the syntactic properties of the sentences in each probing task, we measured the adaptation effect of the language model using syntactic priming. A linear mixed-effects model analysis was performed to statistically analyze the relationship between the adaptation effect of the language model on English sentences and their syntactic properties. We found that the proposed L2 LSTM language model shares the same patterns across probing tasks as the baseline L1 Gulordava language model. We also found that when embedding sentences with various relative clauses or coordinate clauses, the L2 LSTM language model organizes them hierarchically by syntactic property according to the subtype of relative clause or coordinate clause.
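
The adaptation effect measured here is, in essence, the drop in test-sentence loss after briefly adapting the LM to prime sentences. The PyTorch sketch below shows that generic measurement; `model`, `loss_fn`, and the batch formats are assumptions, not the study's exact setup.

```python
import copy
import torch

def adaptation_effect(model, loss_fn, prime_batch, test_batch, lr=2e-5):
    """Loss on the test sentences before minus after one adaptation step
    on the prime sentences; a larger value means a stronger priming-style
    adaptation effect for that construction."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    adapted.train()
    opt.zero_grad()
    loss_fn(adapted, prime_batch).backward()   # single adaptation step
    opt.step()
    model.eval()
    adapted.eval()
    with torch.no_grad():
        before = loss_fn(model, test_batch).item()
        after = loss_fn(adapted, test_batch).item()
    return before - after
```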

한국어 음성데이터를 이용한 일본어 음향모델 성능 개선 (An Enhancement of Japanese Acoustic Model using Korean Speech Database)

  • 이민규;김상훈
    • 한국음향학회지 / Vol. 32, No. 5 / pp.438-445 / 2013
  • This paper presents a method for compensating for the initially insufficient Japanese speech data in the development of a new Japanese speech recognizer. Based on the similarity between Japanese and Korean pronunciation, we describe how to improve a Japanese acoustic model using a Korean speech database. We explain methods for training on mixed cross-lingual speech data, namely Cross-Language Transfer, Cross-Language Adaptation, and the Data Pooling Approach, and select the method appropriate for the amount of Japanese speech data currently available through simulations of each method. The existing methods have been verified to be effective when training speech data is severely scarce, but they showed little performance improvement once a certain amount of target-language data had been secured. However, when the tied list in the training process of the Data Pooling Approach was composed of the target language only, we confirmed a performance improvement with an error reduction rate (ERR) of 12.8%.
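
For reference, the error reduction rate (ERR) quoted above is the relative drop in word error rate; a one-line sketch (the example numbers are hypothetical, not the paper's WER values):

```python
def error_reduction_rate(wer_baseline, wer_new):
    """Relative reduction of the word error rate, in percent."""
    return (wer_baseline - wer_new) / wer_baseline * 100.0

# e.g. a drop from 10.0% to 8.72% WER gives an ERR of 12.8%.
assert abs(error_reduction_rate(10.0, 8.72) - 12.8) < 1e-9
```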

작물 수확 자동화를 위한 시각 언어 모델 기반의 환경적응형 과수 검출 기술 (Domain Adaptive Fruit Detection Method based on a Vision-Language Model for Harvest Automation)

  • 남창우;송지민;진용식;이상준
    • 대한임베디드공학회논문지 / Vol. 19, No. 2 / pp.73-81 / 2024
  • Recently, mobile manipulators have been utilized in the agriculture industry for weed removal and harvest automation. This paper proposes a domain-adaptive fruit detection method for harvest automation that utilizes OWL-ViT, an open-vocabulary object detection model. A vision-language model can detect objects based on a text prompt and can therefore be extended to detect objects of undefined categories. In developing deep learning models for real-world problems, constructing a large-scale labeled dataset is time-consuming and relies heavily on human effort. To reduce this labor-intensive workload, we utilized a large-scale public dataset as source-domain data and employed a domain adaptation method. Adversarial learning was conducted between a domain discriminator and the feature extractor to reduce the gap between the feature-vector distributions of the source domain and our target-domain data. We collected a target-domain dataset in a realistic environment and conducted experiments to demonstrate the effectiveness of the proposed method. In the experiments, the domain adaptation method improved the AP50 metric from 38.88% to 78.59% for detecting objects within a range of 2 m, and we achieved a manipulation success rate of 81.7%.
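
Adversarial learning between a feature extractor and a domain discriminator is commonly implemented with a gradient-reversal layer; the PyTorch sketch below shows that generic mechanism, not the paper's exact architecture.

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; negated, scaled gradient on the
    backward pass, so the feature extractor is trained to fool the
    domain discriminator while the discriminator is trained to win."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None

def domain_adversarial_loss(features, domain_labels, discriminator, lambd=1.0):
    """features: vectors from the detector backbone; domain_labels:
    0 = source (public dataset), 1 = target (collected data)."""
    logits = discriminator(GradReverse.apply(features, lambd))
    return nn.functional.binary_cross_entropy_with_logits(
        logits.squeeze(-1), domain_labels.float())
```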

Integration of WFST Language Model in Pre-trained Korean E2E ASR Model

  • Junseok Oh;Eunsoo Cho;Ji-Hwan Kim
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 6 / pp.1692-1705 / 2024
  • In this paper, we present a method that integrates a grammar transducer as an external language model to enhance the accuracy of a pre-trained Korean end-to-end (E2E) automatic speech recognition (ASR) model. The E2E ASR model utilizes the Connectionist Temporal Classification (CTC) loss function to derive hypothesis sentences from input audio. However, this method reveals a limitation inherent in the CTC approach: it fails to capture language information from transcript data directly. To overcome this limitation, we propose a fusion approach that combines a clause-level n-gram language model, transformed into a Weighted Finite-State Transducer (WFST), with the E2E ASR model. This approach enhances the model's accuracy and allows for domain adaptation using just additional text data, avoiding the need for further intensive training of the extensive pre-trained ASR model. This is particularly advantageous for Korean, a low-resource language that faces significant challenges due to the limited availability of speech data and ASR models. We first validate the efficacy of training the n-gram model at the clause level by contrasting its inference accuracy with that of the E2E ASR model merged with language models trained on smaller lexical units. We then demonstrate that our approach achieves better domain adaptation accuracy than Shallow Fusion, a previously devised method for merging an external language model with an E2E ASR model without necessitating additional training.
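
For context, the Shallow Fusion baseline scores each hypothesis as the E2E log-probability plus a weighted external-LM log-probability; the WFST approach instead composes the n-gram LM into the decoding graph. Below is a sketch of the fusion scoring only; the `nbest` format and `lm_logprob` callable are assumptions.

```python
def shallow_fusion_rescore(nbest, lm_logprob, lm_weight=0.5):
    """nbest: list of (tokens, asr_logprob) hypotheses from the E2E model;
    lm_logprob: callable giving the external LM's log-probability of the
    token sequence. Returns the hypothesis with the best fused score."""
    def fused(hyp):
        tokens, asr_lp = hyp
        return asr_lp + lm_weight * lm_logprob(tokens)
    return max(nbest, key=fused)
```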