• Title/Summary/Keyword: language models


Working Memory and Language Disorders: Literature Review (작업기억과 언어발달장애: 문헌연구)

  • Kim Soo-Jin;Kim Jung-Yeon;Lee Hye-Ran
    • MALSORI
    • /
    • no.51
    • /
    • pp.39-55
    • /
    • 2004
  • Working memory refers to the mental workspace in which information is temporarily stored and manipulated during complex everyday activities such as understanding language. Studies of language and working memory build on Baddeley's phonological working memory model and on Daneman and Carpenter's functional working memory model. This article reviews the two models and the studies of language and working memory based on each, and discusses their implications for language development and for the evaluation and treatment of specific language impairment.


Generative Adversarial Networks: A Literature Review

  • Cheng, Jieren;Yang, Yue;Tang, Xiangyan;Xiong, Naixue;Zhang, Yuan;Lei, Feifei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.12
    • /
    • pp.4625-4647
    • /
    • 2020
  • Generative Adversarial Networks (GANs), among the most creative deep learning models of recent years, have achieved great success in computer vision and natural language processing. A GAN casts training as a game between a generator and a discriminator, in which the generator learns to produce samples the discriminator cannot tell apart from real data. Recently, many deep learning models have been applied to the security field, and following the "generative" and "adversarial" idea, researchers are now applying GANs there as well. This paper presents the development of GANs: we review traditional generative models and representative GAN models, and analyze their applications in natural language processing and computer vision. To show that GAN models are feasible for security use, we separately review their contributions to defenses in information security, cyber security, and artificial intelligence security. Finally, drawing on the reviewed literature, we provide a broader outlook on this research direction.
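
As a rough illustration of the generator-discriminator game described above, here is a minimal, self-contained PyTorch sketch on toy 1-D Gaussian data; the architectures and hyperparameters are arbitrary choices for illustration, not the models surveyed in the paper.

```python
# Minimal GAN sketch: a generator and a discriminator play a minimax game
# over toy 1-D data drawn from N(3, 0.5). Illustrative only.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" data
    fake = G(torch.randn(64, 8))                 # generated data

    # Discriminator step: label real samples 1, generated samples 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())     # should drift toward 3.0
```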

Automatic Extraction of References for Research Reports using Deep Learning Language Model (딥러닝 언어 모델을 이용한 연구보고서의 참고문헌 자동추출 연구)

  • Yukyung Han;Wonsuk Choi;Minchul Lee
    • Journal of the Korean Society for Information Management
    • /
    • v.40 no.2
    • /
    • pp.115-135
    • /
    • 2023
  • The purpose of this study is to assess how effectively deep learning language models can extract references automatically and so support efficient construction of a reference database for research reports. Unlike academic journals, research reports vary in formatting across institutions, which makes automatic reference extraction difficult. We address this by adding the task of separating references from non-reference text to the commonly used metadata-extraction task. The study employed datasets covering various types of references: references from the research reports of a particular institution, academic-journal references, and a mixture of academic-journal references and non-reference text. Two deep learning language models, RoBERTa+CRF and ChatGPT, were compared on extracting metadata, categorizing data types, and separating the original text. The models proved highly effective, achieving maximum F1-scores of 95.41% for metadata extraction and 98.91% for data-type categorization and original-text separation. These results offer valuable insight into the use of deep learning language models and different dataset types for constructing reference databases for research reports that contain both reference and non-reference text.
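
The metadata-extraction step can be framed as BIO token classification over a reference string. A minimal sketch with the HuggingFace API follows; the checkpoint ("klue/roberta-base") and the label set are illustrative assumptions, and the paper's CRF layer is approximated here by a plain softmax head.

```python
# Sketch: reference-string metadata extraction as BIO token classification.
# Checkpoint and labels are illustrative; predictions are random until the
# newly initialized classification head is fine-tuned on labeled references.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-AUTHOR", "I-AUTHOR", "B-TITLE", "I-TITLE",
          "B-JOURNAL", "I-JOURNAL", "B-YEAR", "B-PAGES"]
tok = AutoTokenizer.from_pretrained("klue/roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "klue/roberta-base", num_labels=len(labels))

ref = "Kim, S. (2004). Working memory and language disorders. MALSORI, 51, 39-55."
enc = tok(ref, return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(-1)[0]     # one label id per subword
print([(tok.convert_ids_to_tokens(i.item()), labels[p])
       for i, p in zip(enc["input_ids"][0], pred)])
```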

Optimizing Language Models through Dataset-Specific Post-Training: A Focus on Financial Sentiment Analysis (데이터 세트별 Post-Training을 통한 언어 모델 최적화 연구: 금융 감성 분석을 중심으로)

  • Hui Do Jung;Jae Heon Kim;Beakcheol Jang
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.57-67
    • /
    • 2024
  • This research investigates training methods that help large language models accurately identify sentiment and comprehend information about rising and falling movements in the financial domain. The main goal is to identify datasets that enable these models to understand expressions of financial increases and decreases effectively. For post-training we selected sentences from the Wall Street Journal containing relevant financial terms, together with sentences generated by GPT-3.5-turbo-1106. We assessed the impact of these datasets on model performance using Financial PhraseBank, a benchmark dataset for financial sentiment analysis. Our findings show that post-trained FinBERT, a model specialized in finance, outperformed the similarly post-trained BERT, a general-domain model. Moreover, post-training on actual financial news proved more effective than using generated sentences, although in scenarios requiring greater generalization, models trained on generated sentences performed better. This suggests that aligning the model's domain with the target domain and choosing the right dataset are crucial for improving a language model's understanding and sentiment-prediction accuracy. These results offer a methodology for optimizing language model performance in financial sentiment analysis and suggest future research directions for more nuanced language understanding and sentiment analysis in finance, with insights that carry over to language model training in other domains.
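
The post-training step here is continued masked-language-model training on in-domain sentences before the sentiment fine-tune. A minimal sketch with the HuggingFace Trainer; the checkpoint ("ProsusAI/finbert"), the two stand-in sentences, and all hyperparameters are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch: MLM post-training of a finance model on domain sentences,
# prior to fine-tuning a classification head for sentiment.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

sentences = ["Shares rose 4% after earnings beat expectations.",
             "The company cut its full-year revenue forecast."]  # stand-in corpus
tok = AutoTokenizer.from_pretrained("ProsusAI/finbert")
model = AutoModelForMaskedLM.from_pretrained("ProsusAI/finbert")

ds = Dataset.from_dict({"text": sentences}).map(
    lambda b: tok(b["text"], truncation=True, max_length=64),
    batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tok, mlm_probability=0.15)

Trainer(model=model,
        args=TrainingArguments("post_train_out", num_train_epochs=1,
                               per_device_train_batch_size=2, report_to=[]),
        train_dataset=ds, data_collator=collator).train()
# After post-training, swap the MLM head for a classification head and
# fine-tune on Financial PhraseBank for the sentiment task.
```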

Integration of WFST Language Model in Pre-trained Korean E2E ASR Model

  • Junseok Oh;Eunsoo Cho;Ji-Hwan Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.6
    • /
    • pp.1692-1705
    • /
    • 2024
  • In this paper, we present a method that integrates a Grammar Transducer as an external language model to enhance the accuracy of a pre-trained Korean end-to-end (E2E) automatic speech recognition (ASR) model. The E2E ASR model uses the Connectionist Temporal Classification (CTC) loss function to derive hypothesis sentences from input audio. However, this approach has an inherent limitation: CTC does not directly capture language information from transcript data. To overcome this, we propose a fusion approach that combines a clause-level n-gram language model, compiled into a Weighted Finite-State Transducer (WFST), with the E2E ASR model. The approach improves accuracy and allows domain adaptation using only additional text data, avoiding further intensive training of the large pre-trained ASR model. This is particularly advantageous for Korean, a low-resource language with limited speech data and few available ASR models. We first validate the efficacy of training the n-gram model at the clause level by contrasting its inference accuracy with that of the E2E ASR model merged with language models trained on smaller lexical units. We then demonstrate that our approach achieves better domain-adaptation accuracy than Shallow Fusion, a previously devised method for merging an external language model with an E2E ASR model without additional training.
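
The essence of fusing an external language model with CTC output can be shown with a toy rescoring step: each hypothesis is re-ranked by its acoustic log-probability plus a weighted LM log-probability. This is what Shallow Fusion does and what the WFST composition generalizes; the scores and bigram counts below are invented for illustration, not the paper's models.

```python
# Toy sketch of LM fusion: re-rank ASR hypotheses by combining the
# acoustic (CTC) log-probability with a weighted n-gram LM log-probability.
import math

# Pretend CTC beam-search output: (hypothesis, acoustic log-prob).
hyps = [("i scream today", -4.1), ("ice cream today", -4.3)]

# Toy bigram LM probabilities (invented).
bigram = {("ice", "cream"): 0.9, ("cream", "today"): 0.5,
          ("i", "scream"): 0.01, ("scream", "today"): 0.05}

def lm_logprob(sentence, floor=1e-4):
    words = sentence.split()
    return sum(math.log(bigram.get(pair, floor))
               for pair in zip(words, words[1:]))

LM_WEIGHT = 0.5   # fusion weight; tuned on held-out data in practice
best = max(hyps, key=lambda h: h[1] + LM_WEIGHT * lm_logprob(h[0]))
print(best[0])    # "ice cream today": the LM overturns the acoustic ranking
```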

Simultaneous neural machine translation with a reinforced attention mechanism

  • Lee, YoHan;Shin, JongHun;Kim, YoungKil
    • ETRI Journal
    • /
    • v.43 no.5
    • /
    • pp.775-786
    • /
    • 2021
  • To translate in real time, a simultaneous translation system must decide when to stop reading source tokens and generate target tokens for the partial source sentence read so far. Conventional attention-based neural machine translation (NMT) models cannot produce translations with adequate latency in online scenarios because they wait until a source sentence is complete before computing the alignment between source and target tokens. To address this, we propose a reinforcement learning (RL)-based attention mechanism, the reinforced attention mechanism, which allows a neural translation model to jointly train the stopping criterion and a partial translation model. The proposed attention mechanism comprises two modules, one to ensure translation quality and the other to address latency. Unlike previous RL-based simultaneous translation systems, which learn the stopping criterion from a fixed NMT model, the modules can be trained jointly with a novel reward function. In our experiments, the proposed model achieves better translation quality with latency comparable to previous models.
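
The stopping criterion amounts to a READ/WRITE policy: at each step the system either reads one more source token or writes a target token for the prefix read so far. Below is a toy skeleton of that loop; both should_write (a fixed wait-2 rule) and translate_prefix are stubs standing in for the paper's learned RL policy and NMT decoder.

```python
# Skeleton of a simultaneous-translation READ/WRITE loop.
def should_write(read_tokens, written):
    return len(read_tokens) - written >= 2     # stub: fixed wait-2 rule

def translate_prefix(read_tokens, written):
    return f"tgt({read_tokens[written]})"      # stub for the NMT decoder

def simultaneous_translate(source_tokens):
    read, output = [], []
    src = iter(source_tokens)
    while True:
        if should_write(read, len(output)):    # WRITE: commit a target token
            output.append(translate_prefix(read, len(output)))
        else:                                  # READ: consume a source token
            nxt = next(src, None)
            if nxt is None:
                # Source exhausted: flush the remaining target tokens.
                while len(output) < len(read):
                    output.append(translate_prefix(read, len(output)))
                return output
            read.append(nxt)

print(simultaneous_translate(["ich", "sehe", "den", "hund"]))
```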

Selecting Machine Learning Model Based on Natural Language Processing for Shanghanlun Diagnostic System Classification (자연어 처리 기반 『상한론(傷寒論)』 변병진단체계(辨病診斷體系) 분류를 위한 기계학습 모델 선정)

  • Young-Nam Kim
    • 대한상한금궤의학회지
    • /
    • v.14 no.1
    • /
    • pp.41-50
    • /
    • 2022
  • Objective: The purpose of this study is to explore the most suitable machine learning model for Shanghanlun diagnostic system classification using natural language processing (NLP). Methods: A total of 201 data items were collected from 『Shanghanlun』 and 『Clinical Shanghanlun』; the 'Taeyangbyeong-gyeolhyung' and 'Eumyangyeokchahunobokbyeong' categories were excluded to prevent oversampling or undersampling. Data were preprocessed with the Twitter Korean tokenizer and trained with logistic regression, ridge regression, lasso regression, naive Bayes classifier, decision tree, and random forest algorithms, and the accuracy of the models was compared. Results: Ridge regression and the naive Bayes classifier showed an accuracy of 0.843, logistic regression and random forest 0.804, decision tree 0.745, and lasso regression 0.608. Conclusions: Ridge regression and the naive Bayes classifier are suitable NLP machine learning models for Shanghanlun diagnostic system classification.
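
The comparison maps directly onto a scikit-learn loop like the sketch below; the six-sentence corpus and labels are invented stand-ins, and an L1-penalized logistic regression stands in for "lasso regression", since plain lasso is a regressor rather than a classifier in scikit-learn.

```python
# Sketch: compare the study's classifiers on a toy TF-IDF text dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

texts = ["fever and chills", "sweating and thirst", "cold limbs",
         "fever with headache", "thirst and dry mouth", "chills and pain"]
labels = [0, 1, 2, 0, 1, 2]            # stand-in diagnostic classes

X = TfidfVectorizer().fit_transform(texts)
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "ridge": RidgeClassifier(),
    "lasso (L1 logistic)": LogisticRegression(penalty="l1", solver="liblinear"),
    "naive Bayes": MultinomialNB(),
    "decision tree": DecisionTreeClassifier(),
    "random forest": RandomForestClassifier(),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, labels, cv=2).mean()  # tiny toy data: 2 folds
    print(f"{name}: {acc:.3f}")
```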


Technical Trends in On-device Small Language Model Technology Development (온디바이스 소형언어모델 기술개발 동향)

  • G. Kim;K. Yoon;R. Kim;J. H. Ryu;S. C. Kim
    • Electronics and Telecommunications Trends
    • /
    • v.39 no.4
    • /
    • pp.82-92
    • /
    • 2024
  • This paper introduces technological development trends in on-device small language models (SLMs). Large language models (LLMs) based on the transformer architecture have gained global attention with the emergence of ChatGPT, providing detailed and sophisticated responses across various knowledge domains and thereby increasing their impact across society. While major global tech companies continue to announce new LLMs or enhance their capabilities, development of SLMs, the lightweight counterparts of LLMs, is progressing rapidly. SLMs can run as on-device AI on smartphones or edge devices with limited memory and computing resources, which makes them applicable to many fields from a commercialization perspective. This paper examines the technical features of SLM development, lightweighting technologies, semiconductor technology trends for on-device AI, and potential applications across various industries.
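
As one concrete example of the lightweighting techniques such a survey covers, post-training dynamic quantization stores weights in INT8 to shrink memory use on-device. A generic PyTorch sketch, not code from any system in the survey:

```python
# Sketch: dynamic INT8 quantization, one of the lightweighting techniques
# used to fit language models onto memory-constrained devices.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)   # weights stored as int8

x = torch.randn(1, 512)
print(model(x).shape, quantized(x).shape)    # same interface, smaller weights
```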

Query Context Information-Based Translation Models for Korean-Japanese Cross-Language Information Retrieval (한-일 교차언어검색에서의 질의 문맥 정보를 이용한 대역어 변환 확률 모델)

  • Lee, Gyu-Chan;Kang, In-Su;Na, Seung-Hoon;Lee, Jong-Hyeok
    • Annual Conference on Human and Language Technology
    • /
    • 2005.10a
    • /
    • pp.97-104
    • /
    • 2005
  • In cross-language information retrieval, a translation step is required to match the language of queries and documents. In this step, lexical ambiguity produces multiple translation candidates for a single word, which can distort the user's information need and degrade retrieval performance. To resolve this ambiguity, this paper proposes a method that computes the probability of a translated query using the query's context information, and analyzes the method's strengths and weaknesses by comparing how performance varies with query length, degree of ambiguity, and the proportion of ambiguous words in the query. Future research directions for addressing the current shortcomings are also presented.
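
A toy sketch of the idea: for each ambiguous query term, pick the translation candidate whose combination with the other terms' translations is most probable under a co-occurrence model. The candidate lists and counts below are invented for illustration, not the paper's probability model.

```python
# Toy sketch of context-based translation disambiguation for CLIR:
# score each combination of translation candidates by how strongly its
# members co-occur, and keep the best-scoring translated query.
from itertools import product
import math

candidates = {"bank": ["은행", "강둑"], "interest": ["이자", "관심"]}
cooc = {frozenset(["은행", "이자"]): 120, frozenset(["은행", "관심"]): 3,
        frozenset(["강둑", "이자"]): 1, frozenset(["강둑", "관심"]): 2}

def score(combo, floor=0.5):
    # Sum log co-occurrence over all unordered pairs in the translated query.
    total = 0.0
    for i in range(len(combo)):
        for j in range(i + 1, len(combo)):
            total += math.log(cooc.get(frozenset((combo[i], combo[j])), floor))
    return total

best = max(product(*candidates.values()), key=score)
print(best)   # ('은행', '이자'): the financial senses reinforce each other
```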


Relation Extraction using Generative Language Models (생성형 언어모델을 이용한 관계추출)

  • Jeong Heo;Jong-Hun Shin;Soo-Jong Lim;Oh-Woog Kwon
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.707-710
    • /
    • 2023
  • Relation extraction is a natural language analysis task that infers the semantic relation between two entities in a sentence. With the advance of deep learning, relation extraction has typically relied on BERT-family encoder ("understanding") language models. Since the disruptive arrival of ChatGPT, however, research on GPT-family generative language models has surged. This paper proposes a prompt design and a chain-of-thought (CoT) training method for improving relation extraction with a small generative language model (Kebyt5). In experiments, Kebyt5-large trained with CoT improved performance by 3.05% over the Klue-RoBERTa-base model.
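
A minimal sketch of chain-of-thought prompting for generative relation extraction; the prompt template is an illustrative assumption, and since Kebyt5 is not assumed to be publicly available here, a generic seq2seq checkpoint ("google/flan-t5-small") stands in.

```python
# Sketch: CoT-style prompting of a seq2seq model for relation extraction.
# Prompt wording and checkpoint are illustrative, not the paper's setup.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

prompt = (
    "Sentence: Marie Curie was born in Warsaw.\n"
    "Entities: [Marie Curie], [Warsaw]\n"
    "Let's reason step by step about how the entities are related, "
    "then output the relation label.\n"
    "Reasoning:"
)
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
# CoT training would pair such prompts with target sequences containing
# the gold reasoning steps followed by the gold relation label.
```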
