• Title/Abstract/Keywords: representations from transformers (bert)

Search results: 37 items

Building Specialized Language Model for National R&D through Knowledge Transfer Based on Further Pre-training

  • 유은지;서수민;김남규
    • 지식경영연구 / Vol. 22 No. 3 / pp.91-106 / 2021
  • With the rapid recent advances in deep learning, demand has surged for analyzing the vast body of text documents in the national R&D domain from diverse perspectives. In particular, interest is growing in BERT (Bidirectional Encoder Representations from Transformers) language models pre-trained on large-scale corpora. However, technical terms used with high frequency in highly specialized fields such as national R&D are often insufficiently learned by base BERT, which has been pointed out as a limitation of BERT for understanding documents in specialized domains. This study therefore proposes transferring national R&D domain knowledge into base BERT through further pre-training, an approach that has recently been actively studied, yielding an R&D KoBERT language model. To evaluate the proposed model, a classification analysis was performed on roughly 116,000 project records in the healthcare and ICT fields, confirming that the proposed model achieves higher accuracy than the plain KoBERT model.
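A minimal, hedged sketch of the general technique this abstract describes (further pre-training with the masked language modeling objective on a domain corpus) using the Hugging Face transformers library; the multilingual checkpoint standing in for KoBERT, the corpus file name, and all hyperparameters are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch of further (domain-adaptive) pre-training with the MLM objective.
# The checkpoint, corpus file, and hyperparameters are illustrative only.
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

model_name = "bert-base-multilingual-cased"   # stand-in for a KoBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# One R&D-domain document per line (hypothetical file name).
raw = load_dataset("text", data_files={"train": "rnd_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# 15% random masking, as in standard BERT MLM further pre-training.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm=True, mlm_probability=0.15)

args = TrainingArguments(output_dir="rnd-further-pretrained",
                         per_device_train_batch_size=16,
                         num_train_epochs=1)

Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()
```

The further pre-trained encoder would then be fine-tuned on the downstream classification task, which is where the paper reports its accuracy gains.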

Zero-anaphora resolution in Korean based on deep language representation model: BERT

  • Kim, Youngtae;Ra, Dongyul;Lim, Soojong
    • ETRI Journal / Vol. 43 No. 2 / pp.299-312 / 2021
  • It is necessary to achieve high performance in the task of zero anaphora resolution (ZAR) for completely understanding the texts in Korean, Japanese, Chinese, and various other languages. Deep-learning-based models are being employed for building ZAR systems, owing to the success of deep learning in recent years. However, the objective of building a high-quality ZAR system is far from being achieved even using these models. To enhance the current ZAR techniques, we fine-tuned a pretrained bidirectional encoder representations from transformers (BERT) model. Notably, BERT is a general language representation model that enables systems to utilize deep bidirectional contextual information in natural language text. It extensively exploits the attention mechanism based upon the sequence-transduction model Transformer. In our model, classification is simultaneously performed for all the words in the input word sequence to decide whether each word can be an antecedent. We seek end-to-end learning by disallowing any use of hand-crafted or dependency-parsing features. Experimental results show that compared with other models, our approach can significantly improve the performance of ZAR.
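A hedged sketch of the core formulation in this abstract: a BERT encoder with a per-token binary classifier that decides, for every word position, whether it can serve as an antecedent. This is a generic reimplementation under assumed shapes, checkpoint, and hyperparameters, not the authors' released model.

```python
# Sketch: per-token binary "antecedent" classification on top of BERT
# (checkpoint name and the 2-class output are assumptions).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TokenAntecedentClassifier(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)   # (batch, seq_len, 2) logits per token

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = TokenAntecedentClassifier()
batch = tokenizer(["그는 회사에 갔다."], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
pred = logits.argmax(dim=-1)   # 1 = token can be an antecedent (toy labels)
```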

Dual-scale BERT using multi-trait representations for holistic and trait-specific essay grading

  • Minsoo Cho;Jin-Xia Huang;Oh-Woog Kwon
    • ETRI Journal / Vol. 46 No. 1 / pp.82-95 / 2024
  • As automated essay scoring (AES) has progressed from handcrafted techniques to deep learning, holistic scoring capabilities have emerged. However, specific trait assessment remains a challenge because of the limited depth of earlier methods in modeling dual assessments for holistic and multi-trait tasks. To overcome this challenge, we explore providing comprehensive feedback while modeling the interconnections between holistic and trait representations. We introduce the DualBERT-Trans-CNN model, which combines transformer-based representations with a novel dual-scale bidirectional encoder representations from transformers (BERT) encoding approach at the document level. By explicitly leveraging multi-trait representations in a multi-task learning (MTL) framework, our DualBERT-Trans-CNN emphasizes the interrelation between holistic and trait-based score predictions, aiming for improved accuracy. For validation, we conducted extensive tests on the ASAP++ and TOEFL11 datasets. Against models of the same MTL setting, ours showed a 2.0% increase in its holistic score. Additionally, compared with single-task learning (STL) models, ours demonstrated a 3.6% enhancement in average multi-trait performance on the ASAP++ dataset.
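The sketch below shows only the generic multi-task pattern the abstract builds on (a shared encoder with one regression head for the holistic score and one per trait, trained with a summed loss). The trait names, equal loss weighting, and checkpoint are my assumptions; the paper's actual dual-scale DualBERT-Trans-CNN architecture is not reproduced.

```python
# Sketch: shared BERT encoder with a holistic regression head plus one head per
# trait, trained jointly (generic MTL pattern; names and weights are illustrative).
import torch
import torch.nn as nn
from transformers import AutoModel

TRAITS = ["content", "organization", "conventions"]   # hypothetical traits

class MultiTraitScorer(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        dim = self.encoder.config.hidden_size
        self.holistic_head = nn.Linear(dim, 1)
        self.trait_heads = nn.ModuleDict({t: nn.Linear(dim, 1) for t in TRAITS})

    def forward(self, input_ids, attention_mask):
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        holistic = self.holistic_head(cls).squeeze(-1)
        traits = {t: head(cls).squeeze(-1) for t, head in self.trait_heads.items()}
        return holistic, traits

def mtl_loss(holistic_pred, trait_preds, holistic_gold, trait_golds):
    mse = nn.MSELoss()
    loss = mse(holistic_pred, holistic_gold)
    for t in TRAITS:                       # equal weighting as an assumption
        loss = loss + mse(trait_preds[t], trait_golds[t])
    return loss
```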

A BERT-Based Automatic Scoring Model of Korean Language Learners' Essay

  • Lee, Jung Hee;Park, Ji Su;Shon, Jin Gon
    • Journal of Information Processing Systems / Vol. 18 No. 2 / pp.282-291 / 2022
  • This research applies a pre-trained bidirectional encoder representations from transformers (BERT) handwriting recognition model to predict foreign Korean-language learners' writing scores. A corpus of 586 answers to midterm and final exams written by foreign learners at the Intermediate 1 level was acquired and used for pre-training, resulting in consistent performance, even with small datasets. The test data were pre-processed and fine-tuned, and the results were calculated in the form of a score prediction. The difference between the prediction and actual score was then calculated. An accuracy of 95.8% was demonstrated, indicating that the prediction results were strong overall; hence, the tool is suitable for the automatic scoring of Korean written test answers, including grammatical errors, written by foreigners. These results are particularly meaningful in that the data included written language text produced by foreign learners, not native speakers.
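As a companion to this abstract, here is a hedged sketch of score prediction with a fine-tuned BERT sequence model, followed by the comparison of predicted and actual scores. The discrete score bands, checkpoint, and exact-match accuracy metric are assumptions made for illustration; the paper does not spell them out here.

```python
# Sketch: treat essay scoring as classification over discrete score bands and
# compare predictions with gold scores (band set and checkpoint are assumed).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

SCORE_BANDS = [0, 1, 2, 3, 4, 5]             # hypothetical discrete scores
name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=len(SCORE_BANDS))        # would be fine-tuned on learner essays

essays = ["저는 주말에 친구를 만났습니다."]     # toy learner sentence
gold = torch.tensor([3])

enc = tokenizer(essays, return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    pred = model(**enc).logits.argmax(dim=-1)

diff = (pred - gold).abs()                    # difference between prediction and gold
accuracy = (diff == 0).float().mean().item()  # exact-band agreement (one possible metric)
```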

Fine-tuning BERT Models for Keyphrase Extraction in Scientific Articles

  • Lim, Yeonsoo;Seo, Deokjin;Jung, Yuchul
    • 한국정보기술학회 영문논문지 / Vol. 10 No. 1 / pp.45-56 / 2020
  • Despite extensive research, performance enhancement of keyphrase (KP) extraction remains a challenging problem in modern informatics. Recently, deep learning-based supervised approaches have exhibited state-of-the-art accuracies with respect to this problem, and several of the previously proposed methods utilize Bidirectional Encoder Representations from Transformers (BERT)-based language models. However, few studies have investigated the effective application of BERT-based fine-tuning techniques to the problem of KP extraction. In this paper, we consider the aforementioned problem in the context of scientific articles by investigating the fine-tuning characteristics of two distinct BERT models - BERT (i.e., base BERT model by Google) and SciBERT (i.e., a BERT model trained on scientific text). Three different datasets (WWW, KDD, and Inspec) comprising data obtained from the computer science domain are used to compare the results obtained by fine-tuning BERT and SciBERT in terms of KP extraction.
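Fine-tuning BERT-style models for keyphrase extraction is commonly framed as BIO token labeling; the sketch below shows that framing instantiated with the two checkpoints the paper compares. The BIO tag set and the omission of the training loop are my assumptions about the general approach, not a reproduction of the paper's exact setup.

```python
# Sketch: keyphrase extraction as BIO token classification with the two
# checkpoints compared in the paper (fine-tuning on WWW/KDD/Inspec omitted).
from transformers import AutoModelForTokenClassification, AutoTokenizer

LABELS = ["O", "B-KP", "I-KP"]   # outside / begin-keyphrase / inside-keyphrase

for checkpoint in ["bert-base-uncased", "allenai/scibert_scivocab_uncased"]:
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForTokenClassification.from_pretrained(
        checkpoint, num_labels=len(LABELS))
    enc = tokenizer("We study keyphrase extraction from scientific articles.",
                    return_tensors="pt")
    logits = model(**enc).logits          # (1, seq_len, 3) per-token tag scores
    tags = logits.argmax(dim=-1)          # meaningful only after fine-tuning
```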

CR-M-SpanBERT: Multiple embedding-based DNN coreference resolution using self-attention SpanBERT

  • Joon-young Jung
    • ETRI Journal / Vol. 46 No. 1 / pp.35-47 / 2024
  • This study introduces CR-M-SpanBERT, a coreference resolution (CR) model that utilizes multiple embedding-based span bidirectional encoder representations from transformers, for antecedent recognition in natural language (NL) text. Information extraction studies aim to extract knowledge from NL text autonomously and cost-effectively. However, the extracted information may not represent knowledge accurately owing to the presence of ambiguous entities. Therefore, we propose a CR model that identifies mentions referring to the same entity in NL text. In the case of CR, it is necessary to understand both the syntax and semantics of the NL text simultaneously. Therefore, multiple embeddings are generated for CR, which can include syntactic and semantic information for each word. We evaluate the effectiveness of CR-M-SpanBERT by comparing it to a model that uses SpanBERT as the language model in CR studies. The results demonstrate that our proposed deep neural network model achieves high recognition accuracy for extracting antecedents from NL text. Additionally, it requires fewer epochs to achieve an average F1 accuracy greater than 75% compared with the conventional SpanBERT approach.
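The abstract's central idea is combining the contextual representation with additional per-word embeddings carrying syntactic and semantic information. Below is a hedged sketch of just that combination step, with a plain BERT encoder standing in for SpanBERT and a hypothetical POS-tag embedding as the extra syntactic signal; the actual CR-M-SpanBERT feature set and antecedent scorer are not reproduced.

```python
# Sketch: fuse contextual token embeddings with an additional syntactic
# embedding (hypothetical POS-tag embedding) before mention scoring.
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiEmbeddingEncoder(nn.Module):
    def __init__(self, model_name="bert-base-cased", num_pos_tags=20, pos_dim=32):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # stand-in for SpanBERT
        self.pos_embed = nn.Embedding(num_pos_tags, pos_dim)
        self.project = nn.Linear(self.encoder.config.hidden_size + pos_dim,
                                 self.encoder.config.hidden_size)

    def forward(self, input_ids, attention_mask, pos_ids):
        ctx = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        combined = torch.cat([ctx, self.pos_embed(pos_ids)], dim=-1)
        return self.project(combined)   # fused per-token representation
```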

Deep Learning-based Target Masking Scheme for Understanding Meaning of Newly Coined Words

  • Nam, Gun-Min;Kim, Namgyu
    • 한국컴퓨터정보학회논문지 / Vol. 26 No. 10 / pp.157-165 / 2021
  • Recently, studies that use deep learning to analyze large volumes of text have been actively conducted, and pre-trained language models, which apply what is learned from massive corpora to the analysis of domain-specific text, have drawn particular attention. Among the various pre-trained language models, BERT (Bidirectional Encoder Representations from Transformers)-based models are the most widely used, and further pre-training with BERT's masked language model (MLM) has recently been explored as a way to improve analysis performance. However, the conventional MLM has a limitation in that it cannot clearly capture the meaning of sentences containing new words such as newly coined words. This study therefore proposes newly coined words target masking (NTM), which complements the existing MLM by masking only newly coined words intensively. Applying the proposed methodology to about 700,000 movie reviews from portal 'N', the proposed targeted masking outperformed conventional random masking in terms of sentiment analysis accuracy.
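The sketch below shows one way the targeted masking described above could be implemented on top of Hugging Face tokenization: mask only tokens belonging to a list of newly coined words, then train with the usual MLM loss. The word list, checkpoint, and the rule of masking every matched token are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch of targeted masking: mask only tokens from a list of newly coined
# words and compute the MLM loss on those positions only.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-base-multilingual-cased"        # stand-in for a Korean BERT
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

NEW_WORDS = ["꿀잼", "노잼"]                  # hypothetical newly coined words
target_ids = {tid for w in NEW_WORDS
              for tid in tokenizer(w, add_special_tokens=False)["input_ids"]}

def target_mask(batch_texts):
    enc = tokenizer(batch_texts, return_tensors="pt", padding=True)
    labels = enc["input_ids"].clone()
    is_target = torch.zeros_like(labels, dtype=torch.bool)
    for tid in target_ids:
        is_target |= labels == tid
    enc["input_ids"][is_target] = tokenizer.mask_token_id   # mask only target tokens
    labels[~is_target] = -100                # loss is computed on masked positions only
    enc["labels"] = labels
    return enc

batch = target_mask(["이 영화 진짜 꿀잼이었다.", "기대했는데 노잼이다."])
loss = model(**batch).loss                   # standard MLM loss over targeted tokens
```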

Emotion Analysis-Based AI Chatbot System Using GPT-3 and KoBERT

  • 김준현;문미경
    • 한국컴퓨터정보학회:학술대회논문집 / 한국컴퓨터정보학회 2023년도 제68차 하계학술대회논문집 31권2호 / pp.367-368 / 2023
  • With the rapid advancement of chatbot systems, the need for artificial intelligence technology capable of natural conversation with users has come to the fore. Existing chatbot systems often fail to sufficiently understand the conversational context or to provide consistent responses to sentences outside their training data. This paper proposes an emotion analysis-based chatbot system that uses GPT-3 and KoBERT to identify the user's emotional state and to hold consistent conversations that take that emotion into account. By focusing on sustaining positive conversation on this basis, the system is expected to enable natural dialogue.
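A hedged sketch of the pipeline this abstract suggests: a KoBERT-style classifier labels the user utterance with an emotion, and that label is folded into the prompt handed to the response generator. The emotion label set, stand-in checkpoint, and prompt wording are assumptions; the GPT-3 request itself is omitted because the paper does not describe its API usage.

```python
# Sketch: emotion classification feeding a generation prompt (labels, checkpoint,
# and prompt template are illustrative; only prompt construction is shown).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

EMOTIONS = ["기쁨", "슬픔", "분노", "불안", "중립"]     # hypothetical label set
name = "bert-base-multilingual-cased"                   # stand-in for KoBERT
tokenizer = AutoTokenizer.from_pretrained(name)
classifier = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=len(EMOTIONS))                      # would be fine-tuned on emotion data

utterance = "요즘 일이 너무 많아서 지쳐요."
enc = tokenizer(utterance, return_tensors="pt")
with torch.no_grad():
    emotion = EMOTIONS[classifier(**enc).logits.argmax(dim=-1).item()]

# The detected emotion conditions the response generator's prompt.
prompt = (f"사용자의 감정은 '{emotion}'입니다. 이 감정을 고려하여 "
          f"긍정적이고 일관성 있는 답변을 해주세요.\n사용자: {utterance}\n챗봇:")
```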


DG-based SPO tuple recognition using self-attention M-Bi-LSTM

  • Jung, Joon-young
    • ETRI Journal / Vol. 44 No. 3 / pp.438-449 / 2022
  • This study proposes a dependency grammar-based self-attention multilayered bidirectional long short-term memory (DG-M-Bi-LSTM) model for subject-predicate-object (SPO) tuple recognition from natural language (NL) sentences. To add recent knowledge to the knowledge base autonomously, it is essential to extract knowledge from numerous NL data. Therefore, this study proposes a high-accuracy SPO tuple recognition model that requires a small amount of learning data to extract knowledge from NL sentences. The accuracy of SPO tuple recognition using DG-M-Bi-LSTM is compared with that using NL-based self-attention multilayered bidirectional LSTM, DG-based bidirectional encoder representations from transformers (BERT), and NL-based BERT to evaluate its effectiveness. The DG-M-Bi-LSTM model achieves the best results in terms of recognition accuracy for extracting SPO tuples from NL sentences even if it has fewer deep neural network (DNN) parameters than BERT. In particular, its accuracy is better than that of BERT when the learning data are limited. Additionally, its pretrained DNN parameters can be applied to other domains because it learns the structural relations in NL sentences.
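The sketch below implements only the generic network shape described above (embedding, multilayer bidirectional LSTM, self-attention, per-token classifier over subject/predicate/object/other) under assumed dimensions; the dependency-grammar input features and training setup of the paper are not reproduced.

```python
# Sketch: multilayer Bi-LSTM with self-attention and a per-token classifier over
# {Subject, Predicate, Object, Other}. Dimensions, vocabulary size, and the use
# of plain token ids (instead of DG features) are assumptions.
import torch
import torch.nn as nn

class MBiLSTMSelfAttnTagger(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, hidden=256,
                 layers=3, num_labels=4, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden, num_layers=layers,
                              bidirectional=True, batch_first=True)
        self.self_attn = nn.MultiheadAttention(2 * hidden, num_heads=heads,
                                               batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, token_ids):
        x = self.embed(token_ids)
        h, _ = self.bilstm(x)
        attended, _ = self.self_attn(h, h, h)     # self-attention over Bi-LSTM states
        return self.classifier(attended)          # (batch, seq_len, 4) SPO logits

model = MBiLSTMSelfAttnTagger()
logits = model(torch.randint(0, 30000, (2, 12)))  # toy batch of 2 sentences
```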

Korean Named Entity Recognition using BERT

  • 황석현;신석환;최동근;김성현;김재은
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2019년도 추계학술발표대회 / pp.820-822 / 2019
  • A named entity is a word or phrase that carries a specific meaning in a document, such as a person, organization, location, date, or time, and named entity recognition is the task of finding these entities and determining their semantic category. This paper proposes a Korean named entity recognizer based on BERT (Bidirectional Encoder Representations from Transformers). By leveraging a pre-trained BERT model, the proposed model maximizes performance, achieving a final F1-score of 90.62 and clearly outperforming a Bi-LSTM-Attention-CRF model.
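Korean NER with BERT is typically cast as token classification over entity tags; the sketch below shows that framing, including alignment of word-level tags to subword tokens. The tag set, toy sentence, and multilingual checkpoint (standing in for the authors' pre-trained Korean BERT) are assumptions.

```python
# Sketch: BERT token classification for Korean NER, with word-level tags
# aligned to subword tokens (tag set and checkpoint are illustrative).
from transformers import AutoModelForTokenClassification, AutoTokenizer

TAGS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC",
        "B-DAT", "I-DAT", "B-TIM", "I-TIM"]
name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=len(TAGS))

words = ["홍길동", "씨는", "2019년", "서울에서", "열린", "행사에", "참석했다"]
word_tags = ["B-PER", "O", "B-DAT", "B-LOC", "O", "O", "O"]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
labels = [-100 if wid is None else TAGS.index(word_tags[wid])
          for wid in enc.word_ids(0)]        # subwords inherit their word's tag
logits = model(**enc).logits                 # (1, seq_len, len(TAGS)); fine-tuning
                                             # with these labels yields the tagger
```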