• Title/Abstract/Keyword: pretrained transformer encoder

Search results: 5

Korean automatic spacing using pretrained transformer encoder and analysis

  • Hwang, Taewook;Jung, Sangkeun;Roh, Yoon-Hyung
    • ETRI Journal / Vol. 43, No. 6 / pp.1049-1057 / 2021
  • Automatic spacing in Korean is used to correct spacing units in a given input sentence. The demand for automatic spacing has been increasing owing to frequent incorrect spacing in recent media, such as the Internet and mobile networks. Therefore, herein, we propose a transformer encoder that reads a sentence bidirectionally and can be pretrained using an out-of-task corpus. Notably, our model exhibited the highest character accuracy (98.42%) among the existing automatic spacing models for Korean. We experimentally validated the effectiveness of bidirectional encoding and pretraining for automatic spacing in Korean. Moreover, we conclude that pretraining is more important than fine-tuning and data size.
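
The formulation above reduces naturally to per-character binary tagging. Below is a minimal sketch of that framing, not the authors' implementation: a bidirectional Transformer encoder reads the whole character sequence at once, and a linear head decides for each character whether a space should follow it. The vocabulary size, model dimensions, and toy input are illustrative assumptions.

```python
# Hedged sketch: automatic spacing as per-character binary tagging with a
# bidirectional Transformer encoder (sizes and input are toy assumptions).
import torch
import torch.nn as nn

class SpacingTagger(nn.Module):
    def __init__(self, vocab_size=2000, d_model=256, nhead=4, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # Self-attention sees the whole sentence, so encoding is bidirectional.
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 2)  # 0 = no space after char, 1 = space

    def forward(self, char_ids):
        h = self.encoder(self.embed(char_ids))
        return self.head(h)                # (batch, seq_len, 2) logits

model = SpacingTagger()
chars = torch.randint(0, 2000, (1, 16))    # a toy 16-character sentence
logits = model(chars)
print(logits.argmax(-1).shape)             # one spacing decision per character
```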

Artificial intelligence application UX/UI study for language learning of children with articulation disorder

  • 양은미;박대우
    • Proceedings of the Korea Institute of Information and Communication Engineering Conference / KIICE 2022 Spring Conference / pp.174-176 / 2022
  • In this paper, we present a 'personalized, customized learning' mobile application for children with articulation disorders that uses artificial intelligence (AI) algorithms. The learner's articulation status and severity are analyzed, assessed, and predicted from a data set built by collecting, cleaning, and processing big data related to articulation. In particular, we examined, from a UX/UI (GUI) perspective, how the use of AI can improve on and advance existing applications, and we designed a prototype model. Whereas the emphasis so far has largely been on the visual experience, what matters now is how data can be processed to deliver a UX/UI (GUI) experience to the user. The UX/UI (GUI) of the proposed mobile application draws on a deep-learning CRNN (convolutional recurrent neural network) together with an autoencoder and GPT-3 (Generative Pretrained Transformer) to adapt to the learner's articulation level and situation. The use of AI algorithms can offer children with articulation disorders a polished learning environment and thereby improve learning outcomes. We hope that 'personalized, customized learning' will raise the completeness of their articulation so that they feel no fear or discomfort in conversation.
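
As a rough illustration of the CRNN component mentioned in the abstract, the sketch below (an assumption-laden toy, not the paper's model) combines convolutional feature extraction over mel-spectrogram frames with a recurrent layer for an utterance-level articulation judgment; the feature dimensions and the five-class output are invented for the example.

```python
# Hedged sketch of a CRNN: convolutions capture local acoustic patterns,
# a GRU models their temporal order. All sizes here are illustrative.
import torch
import torch.nn as nn

class ArticulationCRNN(nn.Module):
    def __init__(self, n_mels=80, hidden=128, n_classes=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, mel):                  # mel: (batch, n_mels, time)
        h = self.conv(mel).transpose(1, 2)   # -> (batch, time, 64)
        h, _ = self.rnn(h)
        return self.out(h.mean(dim=1))       # utterance-level class logits

model = ArticulationCRNN()
print(model(torch.randn(2, 80, 200)).shape)  # torch.Size([2, 5])
```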

Transformer-based reranking for improving Korean morphological analysis systems

  • Jihee Ryu;Soojong Lim;Oh-Woog Kwon;Seung-Hoon Na
    • ETRI Journal / Vol. 46, No. 1 / pp.137-153 / 2024
  • This study introduces a new approach in Korean morphological analysis combining dictionary-based techniques with Transformer-based deep learning models. The key innovation is the use of a BERT-based reranking system, significantly enhancing the accuracy of traditional morphological analysis. The method generates multiple suboptimal paths, then employs BERT models for reranking, leveraging their advanced language comprehension. Results show remarkable performance improvements, with the first-stage reranking achieving over 20% improvement in error reduction rate compared with existing models. The second stage, using another BERT variant, further increases this improvement to over 30%. This indicates a significant leap in accuracy, validating the effectiveness of merging dictionary-based analysis with contemporary deep learning. The study suggests future exploration in refined integrations of dictionary and deep learning methods as well as using probabilistic models for enhanced morphological analysis. This hybrid approach sets a new benchmark in the field and offers insights for similar challenges in language processing applications.
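
The two-stage idea can be sketched as follows: a dictionary-based analyzer emits n-best candidate analyses, and an encoder scores each candidate so the highest-scoring path wins. The tiny randomly initialized scorer below stands in for the paper's BERT models, and the encoded candidates are placeholders; this is a hedged illustration of reranking, not the authors' system.

```python
# Hedged sketch of n-best reranking: score each candidate analysis of a
# sentence and keep the best path. The scorer is a stand-in for BERT.
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    def __init__(self, vocab_size=5000, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.score = nn.Linear(d_model, 1)

    def forward(self, pair_ids):                 # (batch, seq_len) token ids
        h = self.encoder(self.embed(pair_ids))
        return self.score(h[:, 0]).squeeze(-1)   # score from first position

def rerank(scorer, candidates):
    # candidates: (n_candidates, seq_len); return index of best-scoring path.
    with torch.no_grad():
        return scorer(candidates).argmax().item()

scorer = PairScorer()
n_best = torch.randint(0, 5000, (5, 32))   # 5 placeholder candidate analyses
print("selected path:", rerank(scorer, n_best))
```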

Zero-anaphora resolution in Korean based on deep language representation model: BERT

  • Kim, Youngtae;Ra, Dongyul;Lim, Soojong
    • ETRI Journal / Vol. 43, No. 2 / pp.299-312 / 2021
  • High performance in zero-anaphora resolution (ZAR) is necessary for a complete understanding of texts in Korean, Japanese, Chinese, and various other languages. Deep-learning-based models are being employed for building ZAR systems, owing to the success of deep learning in recent years. However, the objective of building a high-quality ZAR system is far from being achieved even with these models. To enhance the current ZAR techniques, we fine-tuned a pretrained bidirectional encoder representations from transformers (BERT) model. Notably, BERT is a general language representation model that enables systems to utilize deep bidirectional contextual information in a natural language text. It extensively exploits the attention mechanism based upon the sequence-transduction model Transformer. In our model, classification is simultaneously performed for all the words in the input word sequence to decide whether each word can be an antecedent. We seek end-to-end learning by disallowing any use of hand-crafted or dependency-parsing features. Experimental results show that compared with other models, our approach can significantly improve the performance of ZAR.
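
The abstract's formulation, in which every word in the input is simultaneously classified as antecedent or not, can be sketched as token-level classification. In the toy below, a small encoder replaces the pretrained BERT that the paper fine-tunes, and the learned marker vector that flags the anaphor position is a simplifying assumption of this sketch.

```python
# Hedged sketch of ZAR as per-word classification: mark the anaphor position,
# encode the sentence, and decide antecedent-or-not for every word.
import torch
import torch.nn as nn

class ZARClassifier(nn.Module):
    def __init__(self, vocab_size=8000, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.anaphor_marker = nn.Parameter(torch.zeros(d_model))  # assumption
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)      # antecedent vs. not

    def forward(self, word_ids, anaphor_pos):
        x = self.embed(word_ids).clone()
        # Add a learned marker where the zero anaphor sits.
        x[:, anaphor_pos] = x[:, anaphor_pos] + self.anaphor_marker
        return self.head(self.encoder(x))      # (batch, seq_len, 2)

model = ZARClassifier()
words = torch.randint(0, 8000, (1, 20))
logits = model(words, anaphor_pos=7)
print(logits.argmax(-1))                       # a decision for every word
```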

Simple and effective neural coreference resolution for Korean language

  • Park, Cheoneum;Lim, Joonho;Ryu, Jihee;Kim, Hyunki;Lee, Changki
    • ETRI Journal / Vol. 43, No. 6 / pp.1038-1048 / 2021
  • We propose an end-to-end neural coreference resolution model for the Korean language that uses an attention mechanism to point to the same entity. Because Korean is a head-final language, we focused on a method that uses a pointer network based on the head. The key idea is to consider all nouns in the document as candidates, based on the head-final characteristics of the Korean language, and to learn distributions over the referenced entity positions for each noun. Given the recent success of applications using bidirectional encoder representations from transformers (BERT) in natural language-processing tasks, we employed BERT in the proposed model to create word representations based on contextual information. The experimental results indicated that the proposed model achieved state-of-the-art performance in Korean language coreference resolution.
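
The pointer-network idea can be sketched as attention from a mention representation over candidate noun-head positions, yielding a distribution over possible antecedents. In the hedged toy below, a small encoder stands in for BERT, and the candidate indices and dimensions are illustrative, not taken from the paper.

```python
# Hedged sketch of a head-based pointer: attend from a mention over the
# noun-head candidates and return a distribution over antecedent positions.
import torch
import torch.nn as nn

class CorefPointer(nn.Module):
    def __init__(self, vocab_size=8000, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)

    def forward(self, word_ids, mention_pos, candidate_pos):
        h = self.encoder(self.embed(word_ids))        # (1, seq, d)
        q = self.query(h[:, mention_pos])             # (1, d) mention query
        k = self.key(h[:, candidate_pos])             # (1, n_cand, d) keys
        scores = (k @ q.unsqueeze(-1)).squeeze(-1)    # (1, n_cand)
        return scores.softmax(-1)   # distribution over candidate antecedents

model = CorefPointer()
words = torch.randint(0, 8000, (1, 25))
cands = torch.tensor([2, 6, 11, 18])                  # toy noun-head positions
print(model(words, mention_pos=21, candidate_pos=cands))
```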