• Title/Abstract/Keywords: Domain Specific Language

Search results: 91 items (processing time: 0.021 seconds)

Style-Specific Language Model Adaptation using TF*IDF Similarity for Korean Conversational Speech Recognition

  • Park, Young-Hee;Chung, Min-Hwa
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 23, No. 2E
    • /
    • pp.51-55
    • /
    • 2004
  • In this paper, we propose a style-specific language model adaptation scheme using n-gram based tf*idf similarity for Korean spontaneous speech recognition. Korean spontaneous speech shows distinctive style-specific characteristics such as filled pauses, word omission, and contraction, which are related to function words and depend on preceding or following words. To reflect these style-specific characteristics and to overcome the shortage of data for training the language model, we estimate an in-domain n-gram model by relevance weighting of out-of-domain text data according to their n-gram based tf*idf similarity, where the in-domain language model includes a disfluency model. Recognition results show that n-gram based tf*idf similarity weighting effectively reflects the style differences.
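
As a rough illustration of the relevance-weighting step this abstract describes, the sketch below scores each out-of-domain document by the cosine similarity of its n-gram tf*idf vector to that of the in-domain corpus. The bigram granularity and all function names are assumptions made for illustration, not the paper's implementation.

```python
import math
from collections import Counter

def ngrams(tokens, n=2):
    """All n-grams (here bigrams) of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def tfidf_vector(doc, idf, n=2):
    """tf*idf vector of one document over its n-grams."""
    tf = Counter(ngrams(doc, n))
    return {g: c * idf.get(g, 0.0) for g, c in tf.items()}

def cosine(u, v):
    dot = sum(u[g] * v[g] for g in u.keys() & v.keys())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def relevance_weights(in_domain, out_docs, n=2):
    """Weight out-of-domain documents by n-gram tf*idf similarity
    to the in-domain corpus (each argument is a token list)."""
    docs = [in_domain] + out_docs
    df = Counter(g for d in docs for g in set(ngrams(d, n)))
    idf = {g: math.log(len(docs) / c) for g, c in df.items()}
    ref = tfidf_vector(in_domain, idf, n)
    return [cosine(ref, tfidf_vector(d, idf, n)) for d in out_docs]
```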

The Effect of Domain Specificity on the Performance of Domain-Specific Pre-Trained Language Models (도메인 특수성이 도메인 특화 사전학습 언어모델의 성능에 미치는 영향)

  • Han, Minah;Kim, Younha;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 28, No. 4
    • /
    • pp.251-273
    • /
    • 2022
  • Recently, research on applying deep learning to text analysis has steadily continued. In particular, studies have actively sought to understand the meaning of words and to perform tasks such as summarization and sentiment classification through pre-trained language models trained on large datasets. However, existing pre-trained language models show limitations in that they do not understand specific domains well. In recent years, therefore, research has shifted toward creating language models specialized for particular domains. Domain-specific pre-trained language models allow the model to better understand the knowledge of a particular domain and show performance improvements on various tasks in that field. However, domain-specific further pre-training is expensive, since corpus data of the target domain must be acquired, and many studies have reported that the performance improvement after further pre-training is insignificant in some domains. It is therefore difficult to decide whether to develop a domain-specific pre-trained language model when it is unclear how much performance will actually improve. In this paper, we present a way to proactively estimate the performance improvement expected from further pre-training in a domain before actually performing it. Specifically, after selecting three domains, we measured the increase in classification accuracy achieved by further pre-training in each domain. We also developed and present new indicators that estimate the specificity of a domain based on the normalized frequency of the keywords used in it. Finally, we conducted classification using a general pre-trained language model and the domain-specific pre-trained language models of the three domains. As a result, we confirmed that the higher the domain specificity index, the greater the performance improvement from further pre-training.
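
The abstract does not give the formula behind the specificity indicator, so the following is only one plausible reading of "normalized frequency of the keywords": comparing how much more frequent domain keywords are in the domain corpus than in a general-purpose corpus. The function names and the averaging scheme are assumptions.

```python
from collections import Counter

def normalized_freq(tokens):
    """Relative frequency of each token within a corpus."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def domain_specificity(domain_tokens, general_tokens, keywords):
    """Average ratio of each keyword's normalized frequency in the
    domain corpus to that in a general corpus; higher values suggest
    a more domain-specific vocabulary."""
    p_dom = normalized_freq(domain_tokens)
    p_gen = normalized_freq(general_tokens)
    eps = 1e-9  # guards against division by zero for unseen keywords
    ratios = [p_dom.get(w, 0.0) / (p_gen.get(w, 0.0) + eps) for w in keywords]
    return sum(ratios) / len(ratios)
```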

Exploring the feasibility of fine-tuning large-scale speech recognition models for domain-specific applications: A case study on Whisper model and KsponSpeech dataset

  • Jungwon Chang;Hosung Nam
    • Phonetics and Speech Sciences
    • /
    • Vol. 15, No. 3
    • /
    • pp.83-88
    • /
    • 2023
  • This study investigates the fine-tuning of large-scale Automatic Speech Recognition (ASR) models, specifically OpenAI's Whisper model, for domain-specific applications using the KsponSpeech dataset. The primary research questions address the effectiveness of targeted lexical item emphasis during fine-tuning, its impact on domain-specific performance, and whether the fine-tuned model can maintain generalization capabilities across different languages and environments. Experiments were conducted using two fine-tuning datasets: Set A, a small subset emphasizing specific lexical items, and Set B, consisting of the entire KsponSpeech dataset. Results showed that fine-tuning with targeted lexical items increased recognition accuracy and improved domain-specific performance, with generalization capabilities maintained when fine-tuned with a smaller dataset. For noisier environments, a trade-off between specificity and generalization capabilities was observed. This study highlights the potential of fine-tuning using minimal domain-specific data to achieve satisfactory results, emphasizing the importance of balancing specialization and generalization for ASR models. Future research could explore different fine-tuning strategies and novel technologies such as prompting to further enhance large-scale ASR models' domain-specific performance.
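
For readers unfamiliar with the workflow, this is a condensed sketch of the standard Hugging Face recipe for fine-tuning Whisper; the checkpoint size, hyperparameters, and output path are illustrative assumptions, not the paper's settings, and loading KsponSpeech is elided.

```python
from transformers import (WhisperForConditionalGeneration, WhisperProcessor,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="korean", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

def preprocess(example):
    # example["audio"]["array"]: 16 kHz waveform; example["sentence"]: transcript
    example["input_features"] = processor(
        audio=example["audio"]["array"], sampling_rate=16000).input_features[0]
    example["labels"] = processor.tokenizer(example["sentence"]).input_ids
    return example

args = Seq2SeqTrainingArguments(
    output_dir="whisper-ksponspeech",  # hypothetical output path
    per_device_train_batch_size=16,    # illustrative hyperparameters
    learning_rate=1e-5,
    max_steps=4000,
)
# With a KsponSpeech split loaded as `train_ds` and a padding data collator,
# training would then be:
# trainer = Seq2SeqTrainer(model=model, args=args,
#                          train_dataset=train_ds.map(preprocess))
# trainer.train()
```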

A Study on the Construction of Financial-Specific Language Model Applicable to the Financial Institutions (금융권에 적용 가능한 금융특화언어모델 구축방안에 관한 연구)

  • Jae Kwon Bae
    • Journal of Korea Society of Industrial Information Systems
    • /
    • Vol. 29, No. 3
    • /
    • pp.79-87
    • /
    • 2024
  • Recently, the importance of pre-trained language models (PLMs) has been emphasized for natural language processing (NLP) tasks such as text classification, sentiment analysis, and question answering. Korean PLMs show high performance on NLP tasks in general-purpose domains but are weak in domains such as finance, medicine, and law. The main goal of this study is to propose a learning process and method for building a financial-specific language model that performs well not only in the financial domain but also in general-purpose domains. The five steps for building the financial-specific language model are (1) financial data collection and preprocessing, (2) selection of a model architecture such as a PLM or foundation model, (3) domain data learning and instruction tuning, (4) model verification and evaluation, and (5) model deployment and utilization. Through this process, we present a method for constructing pre-training data that exploits the characteristics of the financial domain, together with efficient LLM training techniques such as adaptive learning and instruction tuning.
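
The five-step process can be pictured as a pipeline skeleton. Every body below is a trivial placeholder and every name is an illustrative assumption, since the paper describes a process rather than code.

```python
def collect_and_preprocess():    # step 1: collect and clean financial corpora
    return ["...financial sentences..."]

def select_base_model():         # step 2: choose a PLM or foundation model
    return "some-korean-plm"     # hypothetical checkpoint name

def adapt(model, corpus):        # step 3: domain data learning + instruction tuning
    return model

def evaluate(model):             # step 4: verify on financial and general tasks
    return {"finance_acc": None, "general_acc": None}

def deploy(model):               # step 5: serve the model to applications
    print(f"deploying {model}")

corpus = collect_and_preprocess()
model = adapt(select_base_model(), corpus)
print(evaluate(model))
deploy(model)
```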

Language Model Adaptation Based on Topic Probability of Latent Dirichlet Allocation

  • Jeon, Hyung-Bae;Lee, Soo-Young
    • ETRI Journal
    • /
    • Vol. 38, No. 3
    • /
    • pp.487-493
    • /
    • 2016
  • Two new methods are proposed for unsupervised adaptation of a language model (LM) with a single sentence for automatic transcription tasks. In the training phase, training documents are clustered by latent Dirichlet allocation (LDA), and a domain-specific LM is then trained for each cluster. In the test phase, an adapted LM is presented as a linear mixture of the trained domain-specific LMs. Unlike previous adaptation methods, the proposed methods fully utilize the trained LDA model to estimate the weight values assigned to the domain-specific LMs; therefore, the clustering and weight-estimation algorithms of the trained LDA model are reliable. In continuous speech recognition benchmark tests, the proposed methods outperform other unsupervised LM adaptation methods based on latent semantic analysis, non-negative matrix factorization, and LDA with n-gram counting.
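
The adaptation step can be pictured with a minimal sketch: infer the LDA topic posterior of the single test sentence and use it as mixture weights over per-cluster LMs. It uses gensim, and the toy unigram tables merely stand in for the paper's per-cluster n-gram LMs (in practice each LM is trained on the documents of its LDA cluster).

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

train_docs = [["stock", "market", "rises"], ["market", "falls"],
              ["team", "wins", "final"], ["coach", "praises", "team"]]
dictionary = Dictionary(train_docs)
corpus = [dictionary.doc2bow(d) for d in train_docs]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0)

# Toy unigram LMs, one per cluster (real per-cluster n-gram LMs would be
# trained on the documents LDA assigns to each cluster).
cluster_lms = [{"market": 0.5, "stock": 0.25, "falls": 0.25},
               {"team": 0.5, "wins": 0.25, "coach": 0.25}]

def adapted_prob(word, sentence):
    """P(word) under a linear mixture of cluster LMs, with weights set
    to the LDA topic posterior of the test sentence."""
    posterior = dict(lda.get_document_topics(dictionary.doc2bow(sentence),
                                             minimum_probability=0.0))
    return sum(posterior[k] * cluster_lms[k].get(word, 1e-6)
               for k in range(len(cluster_lms)))

print(adapted_prob("market", ["stock", "market"]))
```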

An Extensible Programming Language for Plugin Features (플러그인 언어로 확장 가능한 프로그래밍 언어)

  • 최종명;유재우
    • Journal of KIISE:Software and Applications
    • /
    • Vol. 31, No. 5
    • /
    • pp.632-642
    • /
    • 2004
  • Modern software is expected to be modular and extensible, and there have been several studies on extensible programming languages and compilers. In this paper, we introduce the Argos programming language, which provides extensibility through the concept of plugin languages. A plugin language is used to define a method of a class, and plugin language processors can be added and replaced dynamically. Plugin languages may be used to support multi-paradigm programming or domain-specific languages.
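
The plugin-language idea can be miniaturized in a few lines: each method body carries a language tag, and per-language processors, which can be registered or replaced at runtime, compile the body into a callable. This is only a toy analogy in Python; Argos itself is a separate language and the names here are invented.

```python
plugin_processors = {}

def register_language(name, processor):
    """Add or replace a plugin-language processor at runtime."""
    plugin_processors[name] = processor

def compile_method(lang, body):
    """Compile a method body written in the named plugin language."""
    return plugin_processors[lang](body)

# Two trivial "languages": plain Python expressions, and a dialect
# whose source is written backwards.
register_language("python", lambda src: eval(f"lambda x: {src}"))
register_language("reversed", lambda src: eval(f"lambda x: {src[::-1]}"))

double = compile_method("python", "x * 2")
halve = compile_method("reversed", "2 / x")  # reversed source of "x / 2"
print(double(21), halve(42.0))  # -> 42 21.0
```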

A Plant Modeling Case Based on SysML Domain Specific Language (SysML DSL 기반 플랜트 모델링 케이스)

  • Lee, Taekyong;Cha, Jae-Min;Kim, Jun-Young;Shin, Junguk;Kim, Jinil;Yeom, Choongsub
    • Journal of the Korean Society of Systems Engineering
    • /
    • Vol. 13, No. 2
    • /
    • pp.49-56
    • /
    • 2017
  • Implementation of Model-Based Systems Engineering (MBSE) depends on a model that supports efficient communication among engineers from various domains. SysML is designed to create models supporting MBSE, but unfortunately SysML itself is not practical enough to be used in real-world engineering projects. Because SysML is designed to express generic systems and requires specialized knowledge, a model written in SysML is less capable of supporting communication between a systems engineer and a sub-system engineer. Domain-Specific Languages (DSLs) can be a great solution to overcome this weakness of standard SysML; a SysML-based DSL is a customized SysML for a specific engineering domain. Unfortunately, research on SysML DSLs for the plant engineering industry is still at an early stage. As a first step, we have developed our own SysML-based Piping & Instrumentation Diagram (P&ID) creation environment, and a P&ID of a specific plant system, using a widely used SysML authoring tool called MagicDraw. The P&ID is one of the most critical outputs of the plant design phase and contains all the information required for the plant construction phase, so a SysML-based P&ID has great potential to enhance communication among plant engineers of various disciplines.
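
Since SysML is graphical, a textual stand-in has to suffice here: the sketch below shows the DSL idea in miniature, replacing generic blocks with a few plant-specific constructs (equipment, pipes, instrument tags). The class and tag names are invented for illustration and are unrelated to the paper's MagicDraw profile.

```python
from dataclasses import dataclass, field

@dataclass
class Equipment:
    tag: str  # P&ID tag, e.g. "P-101" for a pump

@dataclass
class Pipe:
    source: Equipment
    target: Equipment
    instruments: list = field(default_factory=list)  # e.g. flow transmitters

tank = Equipment("TK-101")
pump = Equipment("P-101")
line = Pipe(tank, pump, instruments=["FT-201", "PI-202"])
print(f"{line.source.tag} -> {line.target.tag} via {line.instruments}")
```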

Building a Domain-Specific French-Korean Lexicon

  • Yoo, Aesun
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • Korean Society for Language and Information, 2002: Language, Information, and Computation, Proceedings of the 16th Pacific Asia Conference
    • /
    • pp.465-474
    • /
    • 2002
  • The Korean government has adopted the French TGV as a high-speed transportation system, and the first service is scheduled for the end of 2003. The TGV-related documents run to huge volumes, of which more than 76% has been translated into English. A large part of the English version is, however, incomprehensible without referring to the original French version. The goal of this paper is to demonstrate how DiET 2.5, a lexicon builder, makes it possible to easily build a domain-specific terminology lexicon that may contain multimedia and multilingual data with multi-layered logical information. We believe our work shows an important step in enlarging the language scope of electronic lexica and in providing the flexibility to define any type of DTD, as well as interconnectivity among collaborators. As an application of DiET 2.5, we would like to build a TGV-relevant lexicon in the near future.


BIOLOGY ORIENTED TARGET SPECIFIC LITERATURE MINING FOR GPCR PATHWAY EXTRACTION (GPCR 경로 추출을 위한 생물학 기반의 목적지향 텍스트 마이닝 시스템)

  • Kim, Eun-Ju;Jung, Seol-Kyoung;Yi, Eun-Ji;Lee, Gary-Geunbae;Park, Soo-Jun
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • Korean Society for Bioinformatics and Systems Biology, 2003: Proceedings of the 2nd Annual Conference
    • /
    • pp.86-94
    • /
    • 2003
  • Electronically available biological literature has accumulated exponentially over time, so research on automatically acquiring knowledge from this tremendous amount of data through text-mining technology has flourished. However, most previous studies are technology-oriented and not well focused on a practical extraction target, which results in low performance and makes them inconvenient for bio-researchers to actually use. In this paper, we propose a more biology-oriented, target-domain-specific text mining system, the POSTECH bio-text mining system (POSBIOTM), for signal transduction pathway extraction, especially for the G protein-coupled receptor (GPCR) pathway. To reflect more domain knowledge, we specify the concrete target for pathway extraction and define a minimal pathway domain ontology. Under this conceptual model, POSBIOTM extracts interactions and entities of pathways from full biological articles using a machine-learning-based extraction method, and visualizes the pathways using the JDesigner module provided in the Systems Biology Workbench (SBW) [14].
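
A toy rendering of target-specific extraction under a minimal pathway ontology: typed interactions (agent, action, target) spotted with a single pattern. POSBIOTM's machine-learning extractor is far richer; this only illustrates the kind of output structure the abstract describes, and all names are invented.

```python
import re
from dataclasses import dataclass

@dataclass
class Interaction:
    agent: str
    action: str
    target: str  # minimal ontology: who does what to whom

PATTERN = re.compile(r"(\w+) (activates|inhibits|binds) (\w+)")

def extract(sentence):
    """Spot pathway interactions matching the toy pattern."""
    return [Interaction(*m.groups()) for m in PATTERN.finditer(sentence)]

print(extract("GPCR activates Galpha and Galpha inhibits AC"))
```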


A Multi-Strategic Concept-Spotting Approach for Robust Understanding of Spoken Korean

  • Lee, Chang-Ki;Eun, Ji-Hyun;Jeong, Min-Woo;Lee, Gary Geun-Bae;Hwang, Yi-Gyu;Jang, Myung-Gil
    • ETRI Journal
    • /
    • Vol. 29, No. 2
    • /
    • pp.179-188
    • /
    • 2007
  • We propose a multi-strategic concept-spotting approach for robust spoken language understanding of conversational Korean in hostile recognition environments such as in-car navigation and telebanking services. Our concept-spotting method adopts a partial semantic understanding strategy within a given specific domain, since it tries to directly extract predefined meaning-representation slot values from spoken language inputs. In spite of the partial understanding, we can efficiently acquire the information necessary to compose interesting applications, because the meaning-representation slots are properly designed for specific domain-oriented understanding tasks. We also propose a multi-strategic method based on this concept-spotting approach, such as a voting method. We present experiments conducted to verify the feasibility of these methods using a variety of spoken Korean data.
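
The voting idea can be sketched in a few lines: several spotting strategies each propose values for predefined slots, and a per-slot vote picks the final value. The slot name and both strategies are invented for illustration; the paper's strategies are statistical rather than rule-based.

```python
from collections import Counter

def keyword_spotter(utt):
    """Toy strategy: spot a known destination keyword."""
    return {"destination": "Seoul"} if "Seoul" in utt else {}

def pattern_spotter(utt):
    """Toy strategy: take the word right after ' to '."""
    return {"destination": utt.split(" to ")[-1].split()[0]} if " to " in utt else {}

def vote(utt, strategies):
    """Fill each slot with the value most strategies agree on."""
    ballots = {}
    for strategy in strategies:
        for slot, value in strategy(utt).items():
            ballots.setdefault(slot, Counter())[value] += 1
    return {slot: c.most_common(1)[0][0] for slot, c in ballots.items()}

print(vote("navigate to Seoul station", [keyword_spotter, pattern_spotter]))
```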
