• Title/Abstract/Keywords: Transformer Models

Search results: 146 (processing time: 0.025 s)

부가 정보를 활용한 비전 트랜스포머 기반의 추천시스템 (A Vision Transformer Based Recommender System Using Side Information)

  • 권유진;최민석;조윤호
    • 지능정보연구 / Vol. 28, No. 3 / pp.119-137 / 2022
  • Recent recommender system research applies a variety of deep learning models to better represent user-item interactions. ONCF (Outer product-based Neural Collaborative Filtering) is a representative deep-learning-based recommender system that takes the outer product of the user and item latent vectors to build a two-dimensional interaction map and passes it through a convolutional neural network to better capture user-item interactions. However, because ONCF relies on a convolutional neural network, its inductive bias leads to degraded predictive performance on data whose distribution does not appear in the training data. This study first proposes a method that introduces the Transformer-based ViT (Vision Transformer) into the NCF structure. ViT applies the Transformer, previously used mainly in NLP, to image classification with strong results; its inductive bias is weaker than that of a convolutional neural network, so it is more robust to unseen distributions. Second, whereas ONCF uses a single latent vector for each user and item, this study uses multiple latent vectors to form channels so that the model can learn richer representations and benefit from an ensemble effect. Finally, unlike ONCF, we present an architecture that can incorporate side information into the recommendation. Whereas previous studies feed side information into the network through simple input concatenation, this study introduces an independent auxiliary classifier so that side information is reflected in the recommender system more effectively. In summary, this paper proposes a new deep learning model that applies ViT, channelizes the embedding vectors, and introduces a side-information classifier; experiments show that it outperforms ONCF.
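The abstract describes the architecture only at a high level, so the following is a minimal, hypothetical sketch of how such a model could be wired together: several user/item latent vectors form channel-wise outer-product interaction maps, a small ViT-style Transformer encoder operating on patches of those maps replaces the CNN, and an auxiliary classifier head handles side information. All layer sizes, the patch size, and the training losses are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): multi-channel outer-product
# interaction maps + a small Transformer encoder + an auxiliary side-information head.
import torch
import torch.nn as nn

class ViTRecommenderSketch(nn.Module):
    def __init__(self, n_users, n_items, dim=64, channels=4, n_side_classes=10):
        super().__init__()
        # several latent vectors per user/item -> several interaction "channels"
        self.user_emb = nn.Embedding(n_users, dim * channels)
        self.item_emb = nn.Embedding(n_items, dim * channels)
        self.channels, self.dim = channels, dim
        patch = 8                                     # assumed patch size (dim divisible by it)
        self.patchify = nn.Conv2d(channels, 128, kernel_size=patch, stride=patch)
        enc_layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.score_head = nn.Linear(128, 1)               # preference score
        self.aux_head = nn.Linear(128, n_side_classes)    # auxiliary side-information classifier

    def forward(self, users, items):
        u = self.user_emb(users).view(-1, self.channels, self.dim)
        v = self.item_emb(items).view(-1, self.channels, self.dim)
        maps = torch.einsum("bcd,bce->bcde", u, v)        # (B, C, dim, dim) interaction maps
        tokens = self.patchify(maps).flatten(2).transpose(1, 2)   # (B, n_patches, 128)
        h = self.encoder(tokens).mean(dim=1)
        return self.score_head(h).squeeze(-1), self.aux_head(h)

model = ViTRecommenderSketch(n_users=1000, n_items=500)
score, side_logits = model(torch.tensor([1, 2]), torch.tensor([10, 20]))
# Training would combine a rating/ranking loss on `score` with a classification
# loss on `side_logits` (e.g., item category), per the auxiliary-classifier idea.
```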

TeGCN:씬파일러 신용평가를 위한 트랜스포머 임베딩 기반 그래프 신경망 구조 개발 (TeGCN:Transformer-embedded Graph Neural Network for Thin-filer default prediction)

  • 김성수;배준호;이주현;정희주;김희웅
    • 지능정보연구 / Vol. 29, No. 3 / pp.419-437 / 2023
  • With the number of thin filers in Korea exceeding 12 million, the financial industry is increasingly trying to assess their creditworthiness accurately, identify good borrowers among them, and extend loans to them. In particular, studies using various machine learning algorithms are being conducted to predict default while capturing the non-linearity present in borrowers' credit information. Among these, graph neural networks (GNNs) are noteworthy for default prediction of thin filers, for whom data are scarce, because they can incorporate network information among borrowers in addition to conventional credit information. However, previous studies using graph neural networks have not adequately handled the many categorical variables present in credit information. This study therefore proposes TeGCN (Transformer-embedded Graph Convolutional Network), which combines a Transformer mechanism that extracts contextual information from categorical variables with a graph convolutional network that reflects network information among borrowers, enabling effective default prediction for thin filers. TeGCN outperformed baseline models on both a general-borrower dataset and a thin-filer dataset, and achieved particularly strong performance in thin-filer default prediction. The contribution of this study is that it achieves high default-prediction performance by combining a model structure suited both to credit information with many categorical variables and to data-scarce thin filers. This can help resolve the financial exclusion of thin filers and allow the financial industry to generate additional revenue from this segment.
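As a rough illustration of the idea described above, the sketch below embeds categorical credit features, contextualizes them with a small Transformer encoder, and propagates the pooled borrower vectors over a normalized borrower-network adjacency with a single graph-convolution step before a default-prediction head. The shared categorical vocabulary, layer sizes, and single propagation step are simplifying assumptions and do not reproduce TeGCN itself.

```python
# Minimal sketch under stated assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class TeGCNSketch(nn.Module):
    def __init__(self, n_cat_features, cardinality, d_model=32):
        super().__init__()
        # one shared vocabulary for all categorical columns (simplification)
        self.embed = nn.Embedding(cardinality, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.gcn_weight = nn.Linear(d_model, d_model)
        self.classifier = nn.Linear(d_model, 1)

    def forward(self, cat_tokens, adj_norm):
        # cat_tokens: (n_borrowers, n_cat_features) integer codes
        # adj_norm:   (n_borrowers, n_borrowers) normalized adjacency (D^-1/2 A D^-1/2)
        h = self.transformer(self.embed(cat_tokens)).mean(dim=1)   # per-borrower vector
        h = torch.relu(adj_norm @ self.gcn_weight(h))              # one GCN propagation step
        return self.classifier(h).squeeze(-1)                      # default logit per borrower

n = 6
codes = torch.randint(0, 50, (n, 8))
adj = torch.eye(n)                      # placeholder borrower network
logits = TeGCNSketch(n_cat_features=8, cardinality=50)(codes, adj)
```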

워드 임베딩 클러스터링을 활용한 리뷰 다중문서 요약기법 (Multi-Document Summarization Method of Reviews Using Word Embedding Clustering)

  • 이필원;황윤영;최종석;신용태
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 10, No. 11 / pp.535-540 / 2021
  • A multi-document is a document composed of multiple topics rather than a single topic; online reviews are a representative example. Because online reviews contain a vast amount of information, there have been several attempts to summarize them. However, when reviews are summarized in bulk with existing summarization models, the various topics that make up the reviews are lost. This paper therefore presents a technique for summarizing reviews while minimizing topic loss. The proposed technique classifies review sentences through preprocessing, importance evaluation, embedding substitution using BERT, and embedding clustering. The classified sentences are then passed through a trained Transformer summarization model to generate the final summary. The proposed model was evaluated against an existing seq2seq summarization model using ROUGE scores and cosine similarity, and it produced better summaries than the existing model.
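The pipeline described above (BERT sentence embeddings, clustering to preserve topics, then a Transformer summarizer per cluster) could be prototyped roughly as follows. The checkpoint names are placeholders rather than the models the authors used, and the importance-evaluation step is omitted.

```python
# Rough pipeline sketch: embed sentences with a BERT encoder, cluster by topic,
# then summarize each cluster with a Transformer summarization model.
import torch
from transformers import AutoTokenizer, AutoModel, pipeline
from sklearn.cluster import KMeans

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")   # placeholder encoder
enc = AutoModel.from_pretrained("bert-base-multilingual-cased")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")  # placeholder summarizer

def embed(sentences):
    batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = enc(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return ((out * mask).sum(1) / mask.sum(1)).numpy()     # mean-pooled sentence vectors

def summarize_reviews(sentences, n_topics=3):
    labels = KMeans(n_clusters=n_topics, n_init=10).fit_predict(embed(sentences))
    summaries = []
    for k in range(n_topics):
        cluster_text = " ".join(s for s, l in zip(sentences, labels) if l == k)
        summaries.append(summarizer(cluster_text, max_length=60, min_length=10)[0]["summary_text"])
    return summaries

reviews = ["Battery life is great.", "The screen is too dim indoors.", "Shipping was fast."]
print(summarize_reviews(reviews, n_topics=3))
```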

Generating Radiology Reports via Multi-feature Optimization Transformer

  • Rui Wang;Rong Hua
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 10 / pp.2768-2787 / 2023
  • As an important research direction for applications of computer science in the medical field, automatic radiology report generation has attracted wide attention in the academic community. Because the proportion of normal regions in radiology images is much larger than that of abnormal regions, words describing diseases are often masked by other words, resulting in significant feature loss during computation and degrading the quality of generated reports. In addition, the large gap between visual features and semantic features causes traditional multi-modal fusion methods to fail to generate the long narrative structures, consisting of multiple sentences, that medical reports require. To address these challenges, we propose a multi-feature optimization Transformer (MFOT) for generating radiology reports. In detail, a multi-dimensional mapping attention (MDMA) module is designed to encode the visual grid features from different dimensions and reduce the loss of primary features during encoding; a feature pre-fusion (FP) module is constructed to enhance the interaction between multi-modal features so as to generate a reasonably structured radiology report; and a detail-enhanced attention (DEA) module is proposed to improve the extraction and utilization of key features and reduce their loss. Finally, we evaluate the proposed model against prevailing mainstream models on the widely used radiology report datasets IU X-Ray and MIMIC-CXR. The experimental outcomes demonstrate that our model achieves SOTA performance on both datasets; compared with the base model, the average improvement across six key indicators is 19.9% and 18.0%, respectively. These findings substantiate the efficacy of our model for automated radiology report generation.
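The MDMA, FP, and DEA modules are specific to the paper and are not reproduced here; the sketch below only shows the generic backbone such a model builds on, assuming pre-extracted visual grid features fed to a Transformer encoder and a Transformer decoder that generates report tokens. All dimensions are illustrative.

```python
# High-level image-to-report Transformer backbone (generic sketch, sizes assumed).
import torch
import torch.nn as nn

class ImageReportTransformer(nn.Module):
    def __init__(self, vocab_size=5000, d_model=256, feat_dim=2048):
        super().__init__()
        self.visual_proj = nn.Linear(feat_dim, d_model)    # project CNN grid features
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.model = nn.Transformer(d_model=d_model, nhead=8,
                                    num_encoder_layers=3, num_decoder_layers=3,
                                    batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, grid_feats, report_tokens):
        # grid_feats: (B, n_regions, feat_dim); report_tokens: (B, T)
        memory_in = self.visual_proj(grid_feats)
        tgt = self.token_emb(report_tokens)
        causal = nn.Transformer.generate_square_subsequent_mask(report_tokens.size(1))
        out = self.model(memory_in, tgt, tgt_mask=causal)
        return self.lm_head(out)                           # next-token logits

logits = ImageReportTransformer()(torch.randn(2, 49, 2048), torch.randint(0, 5000, (2, 20)))
```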

Partial Discharge Pattern Recognition of Cast Resin Current Transformers Using Radial Basis Function Neural Network

  • Chang, Wen-Yeau
    • Journal of Electrical Engineering and Technology / Vol. 9, No. 1 / pp.293-300 / 2014
  • This paper proposes a novel pattern recognition approach based on the radial basis function (RBF) neural network for identifying insulation defects of high-voltage electrical apparatus arising from partial discharge (PD). PD pattern recognition is used to identify the defects causing the PD, such as internal discharge, external discharge, and corona; this information is vital for estimating how harmful the discharge is to the insulation. Since each insulation defect produces a corresponding characteristic PD pattern, PD pattern recognition is a significant means of discriminating the insulation condition of high-voltage electrical apparatus. To verify the proposed approach, experiments were conducted on field-test PD pattern recognition of cast resin current transformer (CRCT) models, using artificial defects created to produce the common PD activities of CRCTs and feature vectors derived from the field-test PD patterns. The significant features are extracted using the nonlinear principal component analysis (NLPCA) method. The experimental data are found to be in close agreement with the recognized data, and the test results show that the proposed approach is efficient and reliable.
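For readers unfamiliar with RBF networks, the following is a generic illustration (not the author's implementation) of an RBF-network classifier of the kind described: k-means centers define Gaussian hidden units and a linear read-out is fitted by least squares, with the NLPCA feature extraction assumed to have already produced the input vectors.

```python
# Generic RBF-network classifier: k-means centers + Gaussian units + linear read-out.
import numpy as np
from sklearn.cluster import KMeans

class RBFClassifier:
    def __init__(self, n_centers=10, gamma=1.0):
        self.n_centers, self.gamma = n_centers, gamma

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)                      # Gaussian hidden activations

    def fit(self, X, y):
        self.centers = KMeans(n_clusters=self.n_centers, n_init=10).fit(X).cluster_centers_
        Y = np.eye(int(y.max()) + 1)[y]                      # one-hot class targets
        self.W, *_ = np.linalg.lstsq(self._phi(X), Y, rcond=None)
        return self

    def predict(self, X):
        return (self._phi(X) @ self.W).argmax(axis=1)

# Toy usage with random "PD feature vectors" and three defect classes
X = np.random.rand(60, 5)
y = np.random.randint(0, 3, 60)
pred = RBFClassifier(n_centers=8).fit(X, y).predict(X)
```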

Precise Modeling and Adaptive Feed-Forward Decoupling of Unified Power Quality Conditioners

  • Wang, Yingpin;Obwoya, Rubangakene Thomas;Li, Zhibo;Li, Gongjie;Qu, Yi;Shi, Zeyu;Zhang, Feng;Xie, Yunxiang
    • Journal of Power Electronics / Vol. 19, No. 2 / pp.519-528 / 2019
  • The unified power quality conditioner (UPQC) is an effective custom power device used at the point of common coupling to protect loads from voltage- and current-related PQ issues. To date, most researchers have studied separate series-unit and parallel-unit models together with an idealized transformer model; however, the interactions of the series and parallel converters in the AC link are difficult to analyze. This study uses an equivalent transformer model to establish the electrical connection of the series and parallel converters in the AC link and to build a precise unified mathematical model of the UPQC. The strong coupling interactions of the series and parallel units are analyzed and shown to depend strongly on the excitation impedance of the transformers. A feed-forward decoupling method based on the unified model, which contains the uncertainty components of the load impedance, is then applied, and an adaptive method is presented to estimate the load impedance. Simulation and experimental results verify the accuracy of the proposed modeling and decoupling algorithm.
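The abstract does not state how the adaptive load-impedance estimate is obtained, so the snippet below is only a generic illustration of adaptive parameter estimation: a recursive least-squares fit of a series R-L load model, v = R·i + L·di/dt, from sampled waveforms, which is one common way such an estimate can be produced. It is not the paper's algorithm.

```python
# Generic recursive least-squares (RLS) estimator applied to a series R-L load model.
import numpy as np

def rls(phi_rows, y, lam=0.98):
    """phi_rows: (N, p) regressor rows, y: (N,) measurements; returns parameter estimate."""
    p = phi_rows.shape[1]
    theta, P = np.zeros(p), 1e3 * np.eye(p)
    for phi, yk in zip(phi_rows, y):
        phi = phi.reshape(-1, 1)
        k = P @ phi / (lam + phi.T @ P @ phi)                 # gain vector
        theta += (k * (yk - phi.T @ theta.reshape(-1, 1))).ravel()
        P = (P - k @ phi.T @ P) / lam                         # covariance update with forgetting
    return theta

# Synthetic check: v = R*i + L*di/dt with R = 2 ohm, L = 5 mH, 50 Hz current
t = np.linspace(0, 0.1, 2000)
i = 10 * np.sin(2 * np.pi * 50 * t)
didt = np.gradient(i, t)
v = 2.0 * i + 5e-3 * didt
R_hat, L_hat = rls(np.column_stack([i, didt]), v)   # should recover ~[2.0, 0.005]
```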

전이 학습 및 SHAP 분석을 활용한 트랜스포머 기반 감정 분류 모델 (A Transformer-Based Emotion Classification Model Using Transfer Learning and SHAP Analysis)

  • 임수빈;이병천;전인수;문지훈
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2023년도 춘계학술발표대회 / pp.706-708 / 2023
  • In this study, we apply transfer learning to three pre-trained transformer models for classifying five emotions. Among them, the KLUE (Korean Language Understanding Evaluation)-BERT (Bidirectional Encoder Representations from Transformers) model performs best: its F1 scores indicate superior learning and generalization on the experimental data. To examine why, we apply the SHAP (Shapley Additive Explanations) method to the KLUE-BERT model and present the findings with a text plot visualization. This approach shows the impact of individual tokens on emotion classification and provides visual evidence supporting the predictions of the KLUE-BERT model.
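A compact sketch of the workflow described above might look like the following; the fine-tuning step is omitted, the label count (five emotions) comes from the abstract, and everything else (checkpoint choice, pipeline settings, example sentence) is an illustrative assumption.

```python
# Sketch: a classification head on klue/bert-base plus SHAP token-level explanations.
import shap
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

name = "klue/bert-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=5)
# (In practice the model would first be fine-tuned on the labeled emotion data.)

clf = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=None)
explainer = shap.Explainer(clf)                    # SHAP text explainer over the pipeline
shap_values = explainer(["오늘은 정말 행복한 하루였다."])
shap.plots.text(shap_values)                       # token-level contribution plot
```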

Partial Discharge Process and Characteristics of Oil-Paper Insulation under Pulsating DC Voltage

  • Bao, Lianwei;Li, Jian;Zhang, Jing;Jiang, Tianyan;Li, Xudong
    • Journal of Electrical Engineering and Technology / Vol. 11, No. 2 / pp.436-444 / 2016
  • The oil-paper insulation of valve-side windings in converter transformers withstands electrical stresses that combine AC, DC, and strong harmonic components. This paper presents the physical mechanisms of, and experimental research on, partial discharge (PD) of oil-paper insulation under pulsating DC voltage. Theoretical analysis showed that the phase-resolved distributions of PDs generated by different insulation models vary with increasing applied voltage according to a certain rule. Four artificial insulation defect models were designed to generate PD signals under pulsating DC voltage. Theoretical statements and experimental results show that PD pulses first appear at the maximum value of the applied pulsating DC voltage and that the phase-resolved PD distribution becomes wider as the applied voltage increases. The phase-resolved PD distributions generated by the different discharge models also differ in shape and development progress, implying that the theoretical analysis is suitable for interpreting PD under pulsating DC voltage.

Breakdown Characteristics and Survival Probability of Turn-to-Turn Models for a HTS Transformer

  • Cheon H.G.;Baek S.M.;Seong K.C.;Kim H.J.;Kim S.H.
    • 한국초전도ㆍ저온공학회논문지 / Vol. 7, No. 2 / pp.21-26 / 2005
  • The breakdown characteristics and survival probability of turn-to-turn models were investigated under AC and impulse voltage at 77 K. Two test electrode models were fabricated for the experiments, a point contact model and a surface contact model, both made of copper wrapped with 0.025 mm thick polyimide film (Kapton). The experimental results were analyzed statistically using the Weibull distribution to examine the effect of the wrapping number on the voltage-time characteristics under AC as well as impulse voltage in liquid nitrogen (LN2). Survival analysis was also performed using the Kaplan-Meier method. The breakdown voltages of the surface contact model are lower than those of the point contact model because its contact area is wider, and the shape parameter of the point contact model is slightly larger than that of the surface contact model. The time to breakdown t50 decreases as the applied voltage increases, and the lifetime indices increase slightly as the number of layers increases. As the applied voltage increases and the wrapping number decreases, the survival probability is increased.
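The two statistical analyses named in the abstract, a Weibull fit of breakdown data and a Kaplan-Meier survival estimate, can be reproduced in outline as follows; the numbers below are synthetic placeholders, not the paper's measurements.

```python
# Weibull fit of breakdown voltages and a Kaplan-Meier survival curve (synthetic data).
import numpy as np
from scipy.stats import weibull_min
from lifelines import KaplanMeierFitter

breakdown_kv = np.array([12.1, 13.4, 11.8, 14.0, 12.9, 13.7, 12.5, 13.1])   # synthetic

# Two-parameter Weibull fit: shape (beta) and scale (alpha), location fixed at zero
shape, loc, scale = weibull_min.fit(breakdown_kv, floc=0)
print(f"Weibull shape={shape:.2f}, scale={scale:.2f} kV")

# Kaplan-Meier estimate of survival probability versus time to breakdown
time_to_breakdown_s = np.array([35, 42, 58, 61, 77, 90, 120, 150])           # synthetic
observed = np.ones_like(time_to_breakdown_s)                                 # 1 = breakdown observed
kmf = KaplanMeierFitter().fit(time_to_breakdown_s, event_observed=observed)
print(kmf.survival_function_)
```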

Zero-anaphora resolution in Korean based on deep language representation model: BERT

  • Kim, Youngtae;Ra, Dongyul;Lim, Soojong
    • ETRI Journal / Vol. 43, No. 2 / pp.299-312 / 2021
  • Achieving high performance on zero anaphora resolution (ZAR) is necessary for fully understanding texts in Korean, Japanese, Chinese, and various other languages. Deep-learning-based models are being employed for building ZAR systems, owing to the success of deep learning in recent years; however, the goal of a high-quality ZAR system is far from being achieved even with these models. To enhance current ZAR techniques, we fine-tuned a pretrained BERT (bidirectional encoder representations from transformers) model. BERT is a general language representation model that enables systems to utilize deep bidirectional contextual information in natural language text, and it extensively exploits the attention mechanism of the sequence-transduction model Transformer. In our model, classification is performed simultaneously for all words in the input word sequence to decide whether each word can be an antecedent. We seek end-to-end learning by disallowing any use of hand-crafted or dependency-parsing features. Experimental results show that, compared with other models, our approach can significantly improve ZAR performance.
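Framing antecedent detection as per-token classification, as the abstract describes, maps naturally onto a BERT token-classification head. The sketch below is not ETRI's implementation; the checkpoint name, the two-label scheme (antecedent vs. not), and the example sentence are assumptions.

```python
# Token-classification sketch: score every token as a possible antecedent.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "klue/bert-base"          # any Korean BERT checkpoint would do here
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=2)

sentence = "철수는 영희를 만났다. 그리고 _ 집으로 갔다."   # "_" marks the zero anaphor
batch = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits                    # (1, seq_len, 2)
antecedent_scores = logits.softmax(-1)[0, :, 1]        # P(token is the antecedent)
tokens = tokenizer.convert_ids_to_tokens(batch["input_ids"][0])
best = tokens[int(antecedent_scores.argmax())]         # untrained head: fine-tuning needed in practice
```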