• Title/Abstract/Keywords: transformer-based models

Search results: 89 (processing time: 0.026 s)

CNN 모델과 Transformer 조합을 통한 토지피복 분류 정확도 개선방안 검토 (Assessing Techniques for Advancing Land Cover Classification Accuracy through CNN and Transformer Model Integration)

  • 심우담;이정수
    • 한국지리정보학회지 / Vol. 27, No. 1 / pp.115-127 / 2024
  • This study constructed models of various structures based on the Transformer module and performed land cover classification in order to examine how the Transformer module can be utilized. The Unet model, which has a CNN structure, was selected as the base deep learning model for land cover classification, and a total of four deep learning models were built by combining the encoder and decoder parts of the model with Transformer modules. To evaluate generalization performance, each model was trained 10 times under the same training conditions. In the classification accuracy evaluation, Model D, which applied the Transformer module to both the encoder and decoder, achieved the highest accuracy, with an average overall accuracy of about 89.4% and an average Kappa of about 73.2%. In terms of training time, the CNN-based model was the most efficient, but using the Transformer-based models improved classification accuracy by an average of 0.5% in terms of Kappa. In future work, the models should be refined by considering various factors, such as hyperparameter tuning and image patch size, when combining CNN models with Transformers. A problem common to all models in the land cover classification process was the difficulty of detecting small objects. To reduce such misclassification, the use of higher-resolution input data should be examined, together with multidimensional data integration that includes terrain and texture information.
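The overall accuracy and Kappa statistics reported above can be computed from any pair of label sequences with a short self-contained sketch (illustrative only; the function name and toy class labels below are not from the paper):

```python
from collections import Counter

def accuracy_and_kappa(y_true, y_pred):
    """Compute overall accuracy and Cohen's Kappa for two label sequences."""
    assert len(y_true) == len(y_pred)
    n = len(y_true)
    # Observed agreement: fraction of samples where prediction matches truth.
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected chance agreement, from the marginal label frequencies.
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    p_e = sum(true_counts[c] * pred_counts.get(c, 0) for c in true_counts) / (n * n)
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, kappa

# Toy 2-class land cover example: "forest" vs "urban".
y_true = ["forest"] * 6 + ["urban"] * 4
y_pred = ["forest"] * 5 + ["urban"] * 5
acc, kappa = accuracy_and_kappa(y_true, y_pred)
print(round(acc, 2), round(kappa, 2))
```

Kappa discounts the agreement expected by chance, which is why it moves less than overall accuracy when class frequencies are imbalanced.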

Transformer-based reranking for improving Korean morphological analysis systems

  • Jihee Ryu;Soojong Lim;Oh-Woog Kwon;Seung-Hoon Na
    • ETRI Journal / Vol. 46, No. 1 / pp.137-153 / 2024
  • This study introduces a new approach in Korean morphological analysis combining dictionary-based techniques with Transformer-based deep learning models. The key innovation is the use of a BERT-based reranking system, significantly enhancing the accuracy of traditional morphological analysis. The method generates multiple suboptimal paths, then employs BERT models for reranking, leveraging their advanced language comprehension. Results show remarkable performance improvements, with the first-stage reranking achieving over 20% improvement in error reduction rate compared with existing models. The second stage, using another BERT variant, further increases this improvement to over 30%. This indicates a significant leap in accuracy, validating the effectiveness of merging dictionary-based analysis with contemporary deep learning. The study suggests future exploration in refined integrations of dictionary and deep learning methods as well as using probabilistic models for enhanced morphological analysis. This hybrid approach sets a new benchmark in the field and offers insights for similar challenges in language processing applications.
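The two-stage pipeline described above, generating multiple candidate analysis paths and rescoring them with a stronger model, can be sketched generically. Here a toy scoring function stands in for the fine-tuned BERT reranker; `toy_lm_score` and the romanized candidates are illustrative, not from the paper:

```python
def toy_lm_score(path):
    """Stand-in for a BERT reranker: prefers fewer, longer morphemes.
    A real system would feed the path to a fine-tuned BERT and use its score."""
    morphemes = path.split("+")
    return -len(morphemes) + 0.1 * sum(len(m) for m in morphemes)

def rerank(candidates, scorer):
    """Return candidate analysis paths sorted best-first by the scorer."""
    return sorted(candidates, key=scorer, reverse=True)

# Toy n-best list for one word form (romanized for readability).
nbest = ["gana+da", "ga+na+da", "ganada"]
best = rerank(nbest, toy_lm_score)[0]
print(best)
```

The point of the architecture is that the cheap dictionary-based analyzer only has to put the correct path *somewhere* in the n-best list; the reranker then does the fine-grained discrimination.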

An Ensemble Model for Credit Default Discrimination: Incorporating BERT-based NLP and Transformer

  • Sophot Ky;Ju-Hong Lee
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2023년도 춘계학술발표대회 / pp.624-626 / 2023
  • Credit scoring is a technique used by financial institutions to assess the creditworthiness of potential borrowers. It involves evaluating a borrower's credit history to predict the likelihood of defaulting on a loan. This paper presents an ensemble of two Transformer-based models within a framework for discriminating the default risk of loan applications in the field of credit scoring. The first model is FinBERT, a pretrained NLP model for analyzing the sentiment of financial text. The second model is FT-Transformer, a simple adaptation of the Transformer architecture for the tabular domain. Both models are trained on the same underlying dataset, with the only difference being the representation of the data. This multi-modal approach allows us to leverage the unique capabilities of each model and potentially uncover insights that may not be apparent when using a single model alone. We compare our model with two well-known ensemble-based models, Random Forest and Extreme Gradient Boosting.
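A common way to combine two such models is soft voting over their predicted default probabilities; this is a minimal sketch of that idea (the weight and the probabilities are hypothetical, and the abstract does not specify the combination rule actually used):

```python
def soft_vote(p_text, p_tabular, w_text=0.5):
    """Combine default probabilities from a text model (FinBERT-style)
    and a tabular model (FT-Transformer-style) by weighted averaging."""
    return w_text * p_text + (1 - w_text) * p_tabular

# One applicant: the text model sees risky language, the tabular model is unsure.
p = soft_vote(p_text=0.8, p_tabular=0.4)
label = "default" if p >= 0.5 else "non-default"
print(round(p, 2), label)
```

The weight `w_text` would typically be tuned on a validation set rather than fixed at 0.5.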

Comparative study of text representation and learning for Persian named entity recognition

  • Pour, Mohammad Mahdi Abdollah;Momtazi, Saeedeh
    • ETRI Journal / Vol. 44, No. 5 / pp.794-804 / 2022
  • Transformer models have had a great impact on natural language processing (NLP) in recent years by realizing outstanding and efficient contextualized language models. Recent studies have used transformer-based language models for various NLP tasks, including Persian named entity recognition (NER). However, in complex tasks, for example, NER, it is difficult to determine which contextualized embedding will produce the best representation for the tasks. Considering the lack of comparative studies to investigate the use of different contextualized pretrained models with sequence modeling classifiers, we conducted a comparative study about using different classifiers and embedding models. In this paper, we use different transformer-based language models tuned with different classifiers, and we evaluate these models on the Persian NER task. We perform a comparative analysis to assess the impact of text representation and text classification methods on Persian NER performance. We train and evaluate the models on three different Persian NER datasets, that is, MoNa, Peyma, and Arman. Experimental results demonstrate that XLM-R with a linear layer and conditional random field (CRF) layer exhibited the best performance. This model achieved phrase-based F-measures of 70.04, 86.37, and 79.25 and word-based F scores of 78, 84.02, and 89.73 on the MoNa, Peyma, and Arman datasets, respectively. These results represent state-of-the-art performance on the Persian NER task.
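The phrase-based F-measures quoted above score a predicted entity as correct only when its span and type both match exactly; a self-contained sketch of that metric (toy spans, not the paper's data):

```python
def phrase_f1(gold_spans, pred_spans):
    """Phrase-level precision/recall/F1: a prediction counts only if its
    (start, end, type) tuple matches a gold span exactly."""
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [(0, 2, "PER"), (5, 6, "LOC"), (8, 9, "ORG")]
pred = [(0, 2, "PER"), (5, 6, "ORG")]  # one exact match, one wrong type
p, r, f = phrase_f1(gold, pred)
print(round(p, 2), round(r, 2), round(f, 2))
```

Word-based F scores, also reported above, instead count per-token label matches, which is why the two sets of numbers differ.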

Partial Discharge Localization Based on Detailed Models of Transformer and Wavelet Transform Techniques

  • Hassan Hosseini, Seyed Mohammad;Rezaei Baravati, Peyman
    • Journal of Electrical Engineering and Technology / Vol. 10, No. 3 / pp.1093-1101 / 2015
  • Partial discharge (PD) is a physical phenomenon that causes defects in and damage to insulation, and it is regarded as the most important source of faults in power transformers. Fast and precise methods for locating the origin of partial discharge are therefore of special importance for transformer maintenance. In this paper, the transformer winding is first modeled in a transient state using RLC ladder network and multiconductor transmission line (MTL) models. The parameters of the two models were calculated with Ansoft Maxwell, and the simulations were performed in MATLAB. PD pulses of different widths were then applied to the models. Because the signals received after applying the PD have a frequency content that varies over time, a new PD localization method was developed based on the wavelet transform and signal energy. Finally, the method was implemented on a 20 kV distribution transformer winding, and the performance of the RLC and MTL models was compared in different frequency bands with respect to correctly distinguishing the partial discharge location.
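The wavelet-and-signal-energy idea can be illustrated with a single level of the Haar wavelet transform, which splits a signal's energy into a low-frequency and a high-frequency band (a minimal sketch under that simplification, not the paper's implementation):

```python
import math

def haar_step(x):
    """One level of the Haar wavelet transform: split a signal (even length)
    into a low-frequency (approximation) and high-frequency (detail) band."""
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return approx, detail

def band_energy(coeffs):
    """Energy of one band; Haar is orthogonal, so band energies sum to the signal energy."""
    return sum(c * c for c in coeffs)

# A toy pulse-like signal: the sharp edge shows up entirely as high-band energy.
signal = [0.0, 0.0, 1.0, -1.0, 0.0, 0.0, 0.0, 0.0]
a, d = haar_step(signal)
print(round(band_energy(a), 3), round(band_energy(d), 3))
```

Fast PD pulses concentrate their energy in the detail bands, which is the property a band-energy localization scheme exploits.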

High-Speed Transformer for Panoptic Segmentation

  • Baek, Jong-Hyeon;Kim, Dae-Hyun;Lee, Hee-Kyung;Choo, Hyon-Gon;Koh, Yeong Jun
    • 방송공학회논문지 / Vol. 27, No. 7 / pp.1011-1020 / 2022
  • Recent high-performance panoptic segmentation models are based on transformer architectures. However, transformer-based panoptic segmentation methods are fundamentally slower than convolution-based methods, since the attention mechanism in the transformer has quadratic complexity w.r.t. image resolution. Sine and cosine computation for positional embedding in the transformer is a further bottleneck for computation time. To address these problems, we adopt three modules to speed up the inference runtime of transformer-based panoptic segmentation. First, we perform channel-level reduction using depth-wise separable convolution on the inputs of the transformer decoder. Second, we replace sine- and cosine-based positional encoding with convolution operations, called conv-embedding. We also apply separable self-attention to the transformer encoder to lower the quadratic complexity to linear in the number of image pixels. As a result, when all three modules are used, the proposed model achieves 44% more frames per second than the baseline on the ADE20K panoptic validation dataset.
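Why depth-wise separable convolution is cheaper for the channel-reduction step can be seen by simply counting weights: a standard k×k convolution couples every input channel to every output channel, while the separable version factors this into a per-channel spatial filter plus a 1×1 mixing step. A back-of-the-envelope sketch (channel sizes are hypothetical, not the paper's configuration):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depth-wise (k x k filter per input channel) plus point-wise (1 x 1) weights."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 256, 256)
sep = separable_conv_params(3, 256, 256)
print(std, sep, round(std / sep, 1))
```

For 3×3 kernels the saving approaches a factor of ~9 as channel counts grow, and per-pixel multiply-accumulate counts scale the same way as the weight counts.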

객체 탐지 과업에서의 트랜스포머 기반 모델의 특장점 분석 연구 (A Survey on Vision Transformers for Object Detection Task)

  • 하정민;이현종;엄정민;이재구
    • 대한임베디드공학회논문지 / Vol. 17, No. 6 / pp.319-327 / 2022
  • Transformers are among the most prominent deep learning models; they have achieved great success in natural language processing and also show good performance in computer vision. In this survey, we categorize transformer-based models for computer vision, particularly for the object detection task, and perform comprehensive comparative experiments to understand the characteristics of each model. We then evaluate the models, subdivided into the standard transformer, transformers with key-point attention, and transformers that add attention with coordinates, by comparing their object detection accuracy and real-time performance. For the comparison, we use two metrics: frames per second (FPS) and mean average precision (mAP). Finally, through various experiments, we confirm trends and relationships related to the detection accuracy and real-time performance of the transformer models considered.
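The mAP metric used above averages per-class average precision (AP), where AP summarizes the precision-recall trade-off of a ranked detection list. A minimal sketch of AP for one class (toy detections; real mAP implementations also apply IoU matching and interpolation, which are omitted here):

```python
def average_precision(detections, num_gold):
    """AP for one class: detections are (confidence, is_true_positive) pairs.
    AP = mean of the precision measured at each true positive, over all gold boxes."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = 0
    precisions = []
    for rank, (_, is_tp) in enumerate(detections, start=1):
        if is_tp:
            tp += 1
            precisions.append(tp / rank)  # precision at this recall point
    return sum(precisions) / num_gold if num_gold else 0.0

# 4 gold objects; the detector finds 3 of them plus one false positive.
dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, True)]
print(round(average_precision(dets, num_gold=4), 3))
```

FPS, the other metric, is simply measured wall-clock throughput, so the two axes of the survey's comparison are independent.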

누설 인덕턴스를 포함한 DAB 컨버터용 고주파 변압기의 머신러닝 활용한 최적 설계 (Machine-Learning Based Optimal Design of A Large-leakage High-frequency Transformer for DAB Converters)

  • 노은총;김길동;이승환
    • 전력전자학회논문지 / Vol. 27, No. 6 / pp.507-514 / 2022
  • This study proposes an optimal design process for a high-frequency transformer with large leakage inductance for dual-active-bridge converters. Conventional design processes incur large errors when designing leakage transformers because the leakage inductance of such transformers is difficult to model mathematically. In this work, the geometric parameters of a shell-type transformer are identified, and finite element analysis (FEA) simulation is performed to determine the magnetization inductance, leakage inductance, and copper loss of shell-type transformers of various shapes. Regression models for the magnetization and leakage inductances and the copper loss are established from the simulation results using machine learning. To improve their performance, the regression models are tuned by adding feature parameters that reflect the physical characteristics of the transformer. With these regression models, optimal high-frequency transformer designs and the Pareto front (in terms of volume and loss) are determined using NSGA-II. From the Pareto front, a desirable optimal design is selected and verified by FEA simulation and experiment. The simulated and measured leakage inductances of the selected design match well, demonstrating the validity of the proposed design process.
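The Pareto front that NSGA-II produces is the set of designs not dominated in both objectives; the dominance test itself is simple to sketch (the candidate (volume, loss) pairs below are hypothetical, not from the paper):

```python
def pareto_front(designs):
    """Keep designs not dominated by any other; both objectives
    (volume, loss) are minimized, as in the transformer design search."""
    front = []
    for i, (v_i, l_i) in enumerate(designs):
        dominated = any(
            (v_j <= v_i and l_j <= l_i) and (v_j < v_i or l_j < l_i)
            for j, (v_j, l_j) in enumerate(designs) if j != i
        )
        if not dominated:
            front.append((v_i, l_i))
    return sorted(front)

# Hypothetical (volume [cm^3], loss [W]) candidates from a design sweep.
candidates = [(100, 5.0), (120, 4.0), (110, 4.5), (130, 4.5), (100, 6.0)]
print(pareto_front(candidates))
```

NSGA-II itself adds non-dominated sorting over generations plus crowding-distance selection; this filter shows only the final dominance criterion used to read off the front.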

RNN과 트랜스포머 기반 모델들의 한국어 리뷰 감성분류 비교 (Comparison of Sentiment Classification Performance of RNN and Transformer-Based Models on Korean Reviews)

  • 이재홍
    • 한국전자통신학회논문지 / Vol. 18, No. 4 / pp.693-700 / 2023
  • Sentiment analysis, a branch of natural language processing that classifies and identifies subjective opinions and emotions in text documents as positive or negative, can be applied to various promotions and services through customer preference analysis. To this end, recent studies have applied various machine learning and deep learning techniques. This study compares the sentiment classification accuracy of existing RNN-based models and recent Transformer-based language models on movie, product, and game reviews in order to propose an optimal language model. Experimental results show that, among the models pretrained on Korean corpora, LMKor-BERT and GPT-3 achieved relatively good accuracy.

ProphetNet 모델을 활용한 시계열 데이터의 열화 패턴 기반 Health Index 연구 (A Study on the Health Index Based on Degradation Patterns in Time Series Data Using ProphetNet Model)

  • 원선주;김용수
    • 산업경영시스템학회지 / Vol. 46, No. 3 / pp.123-138 / 2023
  • The Fourth Industrial Revolution and sensor technology have led to increased utilization of sensor data. In our modern society, data complexity is rising, and the extraction of valuable information has become crucial with the rapid changes in information technology (IT). Recurrent neural networks (RNN) and long short-term memory (LSTM) models have shown remarkable performance in natural language processing (NLP) and time series prediction. Consequently, there is a strong expectation that models excelling in NLP will also excel in time series prediction. However, current research on Transformer models for time series prediction remains limited. Traditional RNN and LSTM models have demonstrated superior performance compared to Transformers in big data analysis. Nevertheless, with continuous advancements in Transformer models, such as GPT-2 (Generative Pre-trained Transformer 2) and ProphetNet, they have gained attention in the field of time series prediction. This study aims to evaluate the classification performance and interval prediction of remaining useful life (RUL) using an advanced Transformer model. The performance of each model will be utilized to establish a health index (HI) for cutting blades, enabling real-time monitoring of machine health. The results are expected to provide valuable insights for machine monitoring, evaluation, and management, confirming the effectiveness of advanced Transformer models in time series analysis when applied in industrial settings.