• Title/Summary/Keyword: Large language models

Korean and Multilingual Language Models Study for Cross-Lingual Post-Training (XPT) (Cross-Lingual Post-Training (XPT)을 위한 한국어 및 다국어 언어모델 연구)

  • Son, Suhyune;Park, Chanjun;Lee, Jungseob;Shim, Midan;Lee, Chanhee;Park, Kinam;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.13 no.3 / pp.77-89 / 2022
  • Many previous studies have shown that language models pretrained on large corpora improve performance on a variety of natural language processing tasks. However, building a large training corpus is difficult in language environments where resources are scarce. Using the Cross-lingual Post-Training (XPT) method, we analyze its efficiency for Korean, a low-resource language. XPT selectively reuses the parameters of a pretrained language model for English, a high-resource language, and uses an adaptation layer to learn the relationship between the two languages. We confirm that, with only a small amount of target-language data, this approach outperforms a language model pretrained on the target language for relation extraction. In addition, we analyze the characteristics of the Korean monolingual and multilingual language models released by Korean and international researchers and companies.
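
The XPT recipe described in the abstract (reuse a frozen English-pretrained transformer body, learn new target-language embeddings plus an adaptation layer) can be sketched in a few lines of PyTorch. This is a minimal illustration under those assumptions, not the authors' code; class and layer names are invented for the sketch.

    import torch
    import torch.nn as nn

    class XPTSketch(nn.Module):
        """Sketch of XPT: a frozen English-pretrained body, new
        target-language embeddings, and a learned adaptation layer."""
        def __init__(self, pretrained_body: nn.Module, target_vocab_size: int, hidden: int = 768):
            super().__init__()
            self.embeddings = nn.Embedding(target_vocab_size, hidden)  # new Korean embeddings
            self.adaptation = nn.Linear(hidden, hidden)                # cross-lingual mapping
            self.body = pretrained_body                                # reused English parameters
            for p in self.body.parameters():
                p.requires_grad = False                                # selectively frozen

        def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
            x = self.adaptation(self.embeddings(input_ids))            # map into the source model's space
            return self.body(x)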

A Study on Pseudo N-gram Language Models for Speech Recognition (음성인식을 위한 의사(疑似) N-gram 언어모델에 관한 연구)

  • 오세진;황철준;김범국;정호열;정현열
    • Journal of the Institute of Convergence Signal Processing / v.2 no.3 / pp.16-23 / 2001
  • In this paper, we propose pseudo n-gram language models for medium-vocabulary speech recognition, in contrast to large-vocabulary speech recognition based on statistical n-gram language models. The proposed method is very simple: it keeps the standard ARPA file structure but sets the word probabilities arbitrarily. First, the 1-gram sets every word occurrence probability to 1 (log likelihood 0.0). Second, the 2-gram also sets the word occurrence probability to 1, but allows only connections between the sentence-start symbol <s> and a word, and between a word and the sentence-end symbol </s>. Finally, the 3-gram likewise sets the word occurrence probability to 1, allowing only the connection <s>, word, </s>. To verify the effectiveness of the proposed method, word recognition experiments were carried out. Preliminary (offline) results show an average word accuracy of 97.7% for 452 words uttered by 3 male speakers. In online recognition, average word accuracy was 92.5% for 20 words uttered by 20 male speakers, drawn from 1,500 stock names. These experiments verify the effectiveness of the pseudo n-gram language models for speech recognition.
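
The arbitrary-probability construction above is concrete enough to reproduce. The sketch below writes such a pseudo language model in standard ARPA format, with every log10 probability fixed at 0.0 and n-grams connecting only <s>, a word, and </s>; it illustrates the recipe, not the authors' tooling.

    def write_pseudo_arpa(words, path="pseudo.arpa"):
        """Emit a pseudo n-gram LM in ARPA format: all probabilities are 1
        (log10 prob 0.0) and n-grams only connect <s>, WORD, and </s>."""
        vocab = ["<s>", "</s>"] + list(words)
        with open(path, "w", encoding="utf-8") as f:
            f.write("\\data\\\n")
            f.write(f"ngram 1={len(vocab)}\n")
            f.write(f"ngram 2={2 * len(words)}\n")
            f.write(f"ngram 3={len(words)}\n\n")
            f.write("\\1-grams:\n")
            for w in vocab:
                f.write(f"0.0\t{w}\t0.0\n")           # P(w)=1, zero back-off weight
            f.write("\n\\2-grams:\n")
            for w in words:
                f.write(f"0.0\t<s> {w}\t0.0\n")       # <s> -> WORD
                f.write(f"0.0\t{w} </s>\t0.0\n")      # WORD -> </s>
            f.write("\n\\3-grams:\n")
            for w in words:
                f.write(f"0.0\t<s> {w} </s>\n")       # <s> WORD </s>
            f.write("\n\\end\\\n")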

KorPatELECTRA : A Pre-trained Language Model for Korean Patent Literature to improve performance in the field of natural language processing(Korean Patent ELECTRA)

  • Jang, Ji-Mo;Min, Jae-Ok;Noh, Han-Sung
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.15-23 / 2022
  • In the field of patents, NLP (Natural Language Processing) is a challenging task due to the linguistic specificity of patent literature, so there is an urgent need for a language model optimized for Korean patent literature. Recently, there have been continuous attempts in NLP to build pre-trained language models for specific domains to improve performance on tasks in those fields. Among them, ELECTRA is a pre-trained language model released by Google after BERT, which increases training efficiency with a new method called RTD (Replaced Token Detection). The purpose of this paper is to propose KorPatELECTRA, pre-trained on a large amount of Korean patent literature. In addition, optimal pre-training was achieved by preprocessing the training corpus according to the characteristics of patent literature and applying a patent-specific vocabulary and tokenizer. To confirm performance, KorPatELECTRA was tested on NER (Named Entity Recognition), MRC (Machine Reading Comprehension), and patent classification tasks using real patent data, and it achieved the best performance on all three tasks compared with general-purpose baseline language models.
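
The RTD objective mentioned above trains a discriminator to decide, for every position, whether a token was replaced by a small generator. A minimal sketch of that per-token loss follows; names are illustrative and this is not the KorPatELECTRA code.

    import torch
    import torch.nn.functional as F

    def rtd_loss(disc_logits: torch.Tensor, original_ids: torch.Tensor,
                 corrupted_ids: torch.Tensor) -> torch.Tensor:
        """Replaced Token Detection: a binary label per position (1 where
        the generator substituted a token) and per-token BCE on the
        discriminator's logits."""
        labels = (original_ids != corrupted_ids).float()
        return F.binary_cross_entropy_with_logits(disc_logits.squeeze(-1), labels)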

Performance Comparison and Error Analysis of Korean Bio-medical Named Entity Recognition (한국어 생의학 개체명 인식 성능 비교와 오류 분석)

  • Jae-Hong Lee
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.4 / pp.701-708 / 2024
  • The advent of transformer architectures in deep learning has been a major breakthrough in natural language processing research. Named entity recognition is a branch of natural language processing and an important research area for tasks such as information retrieval. It is also important in the biomedical field, but the lack of Korean biomedical corpora for training has limited the development of Korean clinical research using AI. In this study, we built a new corpus for Korean biomedical named entity recognition and selected language models pre-trained on large Korean corpora for transfer learning. We compared the recognition performance of the selected language models by F1-score and the recognition rate per tag, and analyzed the errors. In terms of recognition performance, KlueRoBERTa showed relatively good results. Error analysis of the tagging process shows that recognition of Disease is excellent, while Body and Treatment are relatively poor. This is due to over-segmentation and under-segmentation that fail to properly delimit entity names based on context; building a more precise morphological analyzer and a richer lexicon will be necessary to compensate for the incorrect tagging.
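
The kind of evaluation described above (overall F1 plus per-tag recognition rates over BIO sequences) is commonly computed with the seqeval library; that tooling choice is an assumption, since the paper does not name it. A minimal sketch:

    # Entity-level F1 overall and per tag (e.g. Disease, Body, Treatment).
    from seqeval.metrics import classification_report, f1_score

    y_true = [["B-Disease", "I-Disease", "O", "B-Body"]]
    y_pred = [["B-Disease", "I-Disease", "O", "O"]]   # Body missed: under-segmentation

    print("overall F1:", f1_score(y_true, y_pred))
    print(classification_report(y_true, y_pred))      # per-tag precision/recall/F1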

Models for Scheduling Individual Jet Aircraft

  • Yang, Hong-Suk
    • International Journal of Quality Innovation / v.10 no.2 / pp.19-27 / 2009
  • This paper considers the short-term fleet scheduling problem as described by Keskinocak and Tayur (1998). Fleet scheduling may directly affect the service quality of the fractional jet aircraft business. The contributions of this paper are twofold: (i) we show how their model is easily implemented in a standard modeling language, LINGO, and (ii) we give an alternate formulation which is expected to perform better on large, difficult problems.
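
As a flavor of the kind of integer program involved, the sketch below assigns aircraft to trips with the PuLP library instead of LINGO. The data and formulation are hypothetical stand-ins, not the Keskinocak and Tayur model or the paper's alternate formulation.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

    # Hypothetical data: profit[i][j] for serving trip j with aircraft i,
    # and pairs of trips that overlap in time.
    profit = [[8, 5, 6], [7, 9, 4]]
    overlapping = [(0, 1)]                     # trips 0 and 1 overlap
    aircraft, trips = range(len(profit)), range(len(profit[0]))

    prob = LpProblem("fleet_scheduling_sketch", LpMaximize)
    x = LpVariable.dicts("x", (aircraft, trips), cat=LpBinary)

    prob += lpSum(profit[i][j] * x[i][j] for i in aircraft for j in trips)
    for j in trips:                            # each trip served at most once
        prob += lpSum(x[i][j] for i in aircraft) <= 1
    for i in aircraft:                         # no aircraft flies two overlapping trips
        for a, b in overlapping:
            prob += x[i][a] + x[i][b] <= 1

    prob.solve()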

A Study on the Evaluation of LLM's Gameplay Capabilities in Interactive Text-Based Games (대화형 텍스트 기반 게임에서 LLM의 게임플레이 기능 평가에 관한 연구)

  • Dongcheul Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.3 / pp.87-94 / 2024
  • We investigated the feasibility of using Large Language Models (LLMs) to play text-based games without training on game data in advance. We adopted ChatGPT-3.5 and its state-of-the-art successor, ChatGPT-4, as the LLM implementations. In addition, we added the persistent memory feature proposed in this paper to ChatGPT-4, yielding three game-playing agents in total. We used Zork, one of the most famous text-based games, to see whether the agents could navigate complex locations, gather information, and solve puzzles. The results showed that the agent with persistent memory had the widest range of exploration and the best score among the three agents. However, all three agents were limited in solving puzzles, indicating that LLMs are vulnerable to problems that require multi-level reasoning. Nevertheless, the proposed agent was still able to visit 37.3% of all locations and collect every item in the locations it visited, demonstrating the potential of LLMs.
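
A game-agent loop with the kind of persistent memory described above can be sketched as follows. Here `llm` and `game` are placeholders for the ChatGPT API and a Zork interface; both the interface and the memory fields are assumptions, not the paper's code.

    def play_zork(llm, game, max_turns=100):
        """Agent loop with persistent memory: visited locations, collected
        items, and recently failed commands are summarized into each prompt."""
        memory = {"locations": set(), "items": set(), "failed": []}
        observation = game.reset()
        for _ in range(max_turns):
            prompt = (
                "You are playing Zork. Persistent memory so far:\n"
                f"- visited locations: {sorted(memory['locations'])}\n"
                f"- inventory: {sorted(memory['items'])}\n"
                f"- recently failed commands: {memory['failed'][-5:]}\n\n"
                f"Current observation:\n{observation}\n\n"
                "Reply with exactly one game command."
            )
            command = llm(prompt).strip()
            observation, info = game.step(command)      # hypothetical game interface
            memory["locations"].add(info.get("location", "unknown"))
            memory["items"].update(info.get("inventory", []))
            if "you can't" in observation.lower():      # crude failure detection
                memory["failed"].append(command)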

Exploring the feasibility of developing an education tool for pattern identification using a large language model: focusing on the case of a simulated patient with fatigue symptom and dual deficiency of the heart-spleen pattern (거대언어모델을 활용한 변증 교육도구 개발 가능성 탐색: 피로주증의 심비양허형 모의환자에 대한 사례구축을 중심으로)

  • Won-Yung Lee;Sang Yun Han;Seungho Lee
    • Herbal Formula Science / v.32 no.1 / pp.1-9 / 2024
  • Objective : This study aims to assess the potential of large language models in pattern identification education by developing a simulated patient with fatigue and dual deficiency of the heart-spleen pattern. Methods : A simulated patient dataset was constructed using the clinical practice examination module provided by the National Institute for Korean Medicine Development. The dataset was divided into patient characteristics, sample questions, and responses, which were used to design the system, assistant, and user prompts, respectively. A web-based interface was developed using the Django framework and WebSocket. Results : We developed a simulated fatigue patient representing dual deficiency of the heart-spleen pattern through prompt engineering. To make the tool practical, we further implemented web-based interfaces for the examinee and evaluator roles. The examinee interface allows one to examine the simulated patient and issues a personalized number for later access. The evaluator interface includes a page providing a real-time overview of each examinee's chat history and the evaluation criteria. Conclusion : This study is the first to develop an educational tool integrated with a large language model for pattern identification education, and it is expected to be widely applied in Korean medicine education.
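
The three-way prompt design described above (patient characteristics into the system prompt, sample questions and responses into few-shot assistant turns) could be filled in roughly as below. Field names and wording are illustrative, not taken from the examination module.

    def build_messages(patient_profile: str, examples: list, examinee_question: str) -> list:
        """Fill the three prompt roles from the dataset split: patient
        characteristics -> system, sample Q/A pairs -> few-shot user and
        assistant turns, the examinee's question -> final user turn."""
        messages = [{
            "role": "system",
            "content": "Act as a simulated Korean-medicine patient with fatigue "
                       "and dual deficiency of the heart-spleen pattern.\n"
                       f"Profile: {patient_profile}",
        }]
        for question, response in examples:
            messages.append({"role": "user", "content": question})
            messages.append({"role": "assistant", "content": response})
        messages.append({"role": "user", "content": examinee_question})
        return messages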

A derivation of real-time simulation model on the large-structure driving system and its application to the analysis of system interface characteristics (대형구조물 구동계통 실시간 시뮬레이션 모델 유도 및 연동 특성 분석에의 응용)

  • Kim, Jae-Hun;Choi, Young-Ho;Yoo, Woong-Jae;Lyou, Joon
    • Journal of the Korea Institute of Military Science and Technology / v.3 no.1 / pp.13-25 / 2000
  • A simulation model is developed to analyze the large-structure driving system and its integrated behavior within the whole weapon system. The model covers every component of the driving system, including mechanical and electrical characteristics, and is programmed in a simulation language in a way that strongly reflects the system's real-time dynamics while reducing computation time. A useful parameter identification method is proposed and tuned on the given physical system. The model is validated through comparison with real tests and applied to the analysis and prediction of integrated system functions related to the fire control system.

LLM-based chatbot system to improve worker efficiency and prevent safety incidents (작업자의 업무 능률 향상과 안전 사고 방지를 위한 LLM 기반 챗봇 시스템)

  • Doohwan Kim;Yohan Han;Inhyuk Jeong;Yeongseok Hwang;Jinju Park;Nahyeon Lee;Yujin Lee
    • Proceedings of the Korean Society of Computer Information Conference / 2024.01a / pp.321-324 / 2024
  • This paper proposes an STT-integrated chatbot system based on LLMs (Large Language Models). In manufacturing plants, insufficient safety training and a growing number of foreign workers are emerging as new challenges in safety-critical work environments. This study addresses these problems with a chatbot system that combines a language model with Speech-to-Text (STT) technology. The proposed system helps workers easily access equipment manuals and safety guidelines, and enables fast, accurate responses in emergency situations. In the system, the LLM interprets the worker's intent while the STT component processes voice commands. Experimental results confirm that the system increases workers' task efficiency and is effective at removing language barriers. This research is expected to contribute to worker safety and improved work efficiency at manufacturing sites.
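
The STT-plus-LLM pipeline described above reduces to a short chain: transcribe the voice command, then let the language model interpret the intent and answer from the manuals. In the sketch below, `stt` and `llm` are placeholders for a speech-to-text service and a chat model; the paper does not specify which products were used.

    SAFETY_SYSTEM_PROMPT = (
        "You are a factory assistant. Answer from the equipment manuals and "
        "safety guidelines. In an emergency, give short, step-by-step actions."
    )

    def handle_voice_query(audio_bytes: bytes, stt, llm) -> str:
        """Pipeline: speech -> text -> intent interpretation and answer."""
        text = stt(audio_bytes)                  # voice command to text
        return llm(SAFETY_SYSTEM_PROMPT, text)   # LLM interprets intent and responds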

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between objects that enter the model sequentially (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit of Korean text. We construct language models with three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with the Theano backend. After preprocessing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were run on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-LSTM-layer models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved, and even worsened under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although the completeness of the generated sentences differed slightly between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
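
The setup above (20-character windows predicting the 21st character over 74 symbols, three or four LSTM layers) maps directly onto a small Keras model. Below is a sketch of the 3-layer variant, using the modern tf.keras API rather than the Keras/Theano stack of the paper; the layer width of 512 is an assumption, since the abstract does not state it.

    from tensorflow import keras
    from tensorflow.keras import layers

    VOCAB, WINDOW = 74, 20        # 74 unique symbols, 20-character input window

    model = keras.Sequential([
        layers.Input(shape=(WINDOW, VOCAB)),         # one-hot encoded characters
        layers.LSTM(512, return_sequences=True),
        layers.LSTM(512, return_sequences=True),
        layers.LSTM(512),
        layers.Dense(VOCAB, activation="softmax"),   # distribution over the 21st character
    ])
    model.compile(optimizer="rmsprop", loss="categorical_crossentropy")

RMSprop stands in here for any of the compared optimizers; swapping in Adagrad, Adadelta, Adam, Adamax, or Nadam is a one-argument change to compile().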