• Title/Abstract/Keyword: Transformer-based Architecture


Transformer를 활용한 인공신경망의 경량화 알고리즘 및 하드웨어 가속 기술 동향 (Trends in Lightweight Neural Network Algorithms and Hardware Acceleration Technologies for Transformer-based Deep Neural Networks)

  • 김혜지;여준기
    • Electronics and Telecommunications Trends / Vol. 38, No. 5 / pp. 12-22 / 2023
  • Neural network development is moving toward transformer structures built on attention modules. Accordingly, active research is extending lightweight neural network algorithms and hardware acceleration, originally developed for convolutional neural networks, to transformer-based networks. We survey state-of-the-art research on lightweight algorithms and hardware architectures that reduce memory usage and accelerate both inference and training. To describe these trends, we review recent studies on token pruning, quantization, and architecture tuning for the vision transformer. In addition, we present a hardware architecture that incorporates lightweight algorithms into artificial intelligence processors to accelerate processing.
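Token pruning, one of the surveyed lightweight techniques, can be illustrated with a minimal sketch: score each token by the attention it receives from the [CLS] query and keep only the top fraction. The softmax scoring rule and the fixed `keep_ratio` below are illustrative assumptions, not any specific surveyed method.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def prune_tokens(tokens, cls_attn_logits, keep_ratio=0.5):
    """Keep the tokens that receive the highest [CLS]-query attention."""
    scores = softmax(cls_attn_logits)
    k = max(1, int(len(tokens) * keep_ratio))
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:k])  # preserve the original token order
    return [tokens[i] for i in keep]

# Four patch tokens; the second and fourth attract the most attention.
kept = prune_tokens(["t0", "t1", "t2", "t3"], [0.1, 2.0, 0.3, 1.5])
print(kept)  # ['t1', 't3']
```

Because pruning shrinks the token sequence in every subsequent layer, the savings compound with depth, which is why this family of methods appears throughout the surveyed acceleration work.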

Time-Series Forecasting Based on Multi-Layer Attention Architecture

  • Na Wang;Xianglian Zhao
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 1 / pp. 1-14 / 2024
  • Time-series forecasting is used extensively in the real world. Recent research has shown that Transformers, with a self-attention mechanism at their core, perform well on such problems. However, most existing Transformer models for time-series prediction use the traditional encoder-decoder architecture, which is complex and processes data inefficiently, limiting the ability to mine deep temporal dependencies by increasing model depth. In addition, the quadratic computational complexity of the self-attention mechanism increases computational overhead and further reduces processing efficiency. To address these issues, this paper designs an efficient multi-layer attention-based time-series forecasting model with the following characteristics: (i) it abandons the traditional encoder-decoder Transformer architecture and instead builds the prediction model on stacked multi-layer attention, improving its ability to mine deep temporal dependencies; (ii) a cross-attention module is designed to enhance information exchange between the historical and predictive sequences; and (iii) a recently proposed sparse attention mechanism is applied to reduce computational overhead and improve processing efficiency. Experiments on multiple datasets show that our model significantly outperforms current advanced Transformer-based methods for time-series forecasting, including LogTrans, Reformer, and Informer.
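The cross-attention idea in (ii) can be sketched as scaled dot-product attention whose queries come from the predictive sequence and whose keys and values come from the encoded history. This pure-Python toy (tiny dimensions, no learned projections) illustrates the general mechanism only, not the paper's exact module.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def cross_attention(queries, keys, values):
    """Each predictive-sequence query attends over the historical sequence."""
    d = len(queries[0])
    out = []
    for q in queries:
        logits = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(logits)
        out.append([sum(wj * v[t] for wj, v in zip(w, values))
                    for t in range(len(values[0]))])
    return out

history = [[1.0, 0.0], [0.0, 1.0]]   # two encoded history steps
queries = [[1.0, 0.0]]               # one predictive-sequence position
out = cross_attention(queries, history, history)
print(out)  # the query attends mostly to the first history step
```

A sparse variant of the same computation would restrict each query to a subset of the history keys, which is where the complexity reduction in (iii) comes from.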

Incorporating BERT-based NLP and Transformer for An Ensemble Model and its Application to Personal Credit Prediction

  • Sophot Ky;Ju-Hong Lee;Kwangtek Na
    • Smart Media Journal / Vol. 13, No. 4 / pp. 9-15 / 2024
  • Tree-based algorithms have been the dominant methods for building prediction models on tabular data, including personal credit data. However, they are compatible only with categorical and numerical features and do not capture relationships between features. In this work, we propose an ensemble model built on the Transformer architecture that incorporates text features and harnesses the self-attention mechanism to address this limitation. We describe a text formatter module that converts the original tabular data into sentences, which are fed into FinBERT along with the other text features. In addition, we employ FT-Transformer, which is trained on the original tabular data. We evaluate this multi-modal approach against two popular tree-based algorithms, Random Forest and XGBoost (Extreme Gradient Boosting), as well as TabTransformer. Our proposed method shows superior default recall, F1 score, and AUC across two public datasets. These results can help financial institutions reduce the risk of losses from defaulters.
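The text formatter module can be approximated by a simple row-to-sentence serializer that turns each tabular record into natural-language input for a BERT-style encoder. The template and field names here are hypothetical; the paper's exact serialization scheme may differ.

```python
def format_row(row):
    """Serialize one tabular record into a sentence for a text encoder.

    row: dict mapping column names to values (order is preserved).
    """
    return " ".join(f"{k} is {v}." for k, v in row.items())

row = {"age": 35, "income": 52000, "loan purpose": "car"}
sentence = format_row(row)
print(sentence)  # "age is 35. income is 52000. loan purpose is car."
```

The same record, in its raw numeric form, would go to FT-Transformer, so the two ensemble members see one dataset under two representations.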

트랜스포머 기반 MUM-T 상황인식 기술: 에이전트 상태 예측 (Transformer-Based MUM-T Situation Awareness: Agent Status Prediction)

  • 백재욱;전성우;김광용;이창은
    • The Journal of Korea Robotics Society / Vol. 18, No. 4 / pp. 436-443 / 2023
  • With the advancement of robot intelligence, the concept of manned-unmanned teaming (MUM-T) has garnered considerable attention in military research. In this paper, we present a transformer-based architecture for predicting the health status of agents, using a multi-head attention mechanism to effectively capture the dynamic interaction between friendly and enemy forces. To this end, we first introduce a framework for generating a dataset of battlefield situations. These situations are simulated on a virtual simulator, allowing a wide range of scenarios without restrictions on the number of agents, their missions, or their actions. We then define the crucial elements for characterizing the battlefield, with specific emphasis on agent status. The battlefield data are fed into the transformer architecture, with classification heads on top of the transformer encoding layers to categorize each agent's health status. We conduct ablation tests to assess the significance of various factors in determining agents' health status in battlefield scenarios. Under 3-fold cross-validation, our model achieves a prediction accuracy of over 98%. In addition, the performance of our model is compared with that of other models such as a convolutional neural network (CNN) and a multilayer perceptron (MLP), and the results establish the superiority of our model.
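A classification head on top of transformer encoding layers can be sketched as mean-pooling the per-step encodings and applying a small linear layer. The pooling choice, the toy weights, and the two status labels below are illustrative assumptions, not the paper's configuration.

```python
def classify_status(encodings, W, b, labels):
    """Pool per-step encoder outputs and apply a linear classification head.

    encodings: list of d-dimensional transformer outputs (one per time step)
    W, b: weights/bias of a toy linear head (one row of W per class)
    """
    d = len(encodings[0])
    pooled = [sum(e[i] for e in encodings) / len(encodings) for i in range(d)]
    logits = [sum(wi * p for wi, p in zip(row, pooled)) + bi
              for row, bi in zip(W, b)]
    return labels[max(range(len(logits)), key=logits.__getitem__)]

enc = [[0.2, 0.9], [0.4, 0.7]]                  # encoder outputs per time step
W, b = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]     # hypothetical head weights
status = classify_status(enc, W, b, ["healthy", "damaged"])
print(status)
```

In practice the head's weights are trained jointly with the encoder; only the forward pass is shown here.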

An Ensemble Model for Credit Default Discrimination: Incorporating BERT-based NLP and Transformer

  • Sophot Ky;Ju-Hong Lee
    • Korea Information Processing Society: Proceedings of the Annual Conference / 2023 KIPS Spring Conference / pp. 624-626 / 2023
  • Credit scoring is a technique used by financial institutions to assess the creditworthiness of potential borrowers. It involves evaluating a borrower's credit history to predict the likelihood of default on a loan. This paper presents an ensemble of two Transformer-based models within a framework for discriminating the default risk of loan applications in credit scoring. The first model is FinBERT, a pretrained NLP model for analyzing the sentiment of financial text. The second is FT-Transformer, a simple adaptation of the Transformer architecture for the tabular domain. Both models are trained on the same underlying dataset, with the only difference being the representation of the data. This multi-modal approach allows us to leverage the unique capabilities of each model and potentially uncover insights that may not be apparent when using a single model alone. We compare our model with two well-known ensemble-based models, Random Forest and Extreme Gradient Boosting.
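One simple way to combine the two Transformer-based members is a weighted soft vote over their predicted default probabilities. The equal weighting below is an assumption for illustration; the abstract does not state the actual combination rule.

```python
def ensemble_default_prob(p_finbert, p_ft, w=0.5):
    """Weighted soft vote over two models' predicted default probabilities.

    w is the (hypothetical) weight given to the FinBERT branch.
    """
    return w * p_finbert + (1 - w) * p_ft

p = ensemble_default_prob(0.8, 0.6)
print(p)  # ~0.7 with equal weights
```

Weighted averaging lets a validation set tune how much the text-based view should influence the final decision relative to the tabular view.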

Transformer 네트워크를 이용한 음성신호 변환 (Voice-to-voice conversion using transformer network)

  • 김준우;정호영
    • Phonetics and Speech Sciences / Vol. 12, No. 3 / pp. 55-63 / 2020
  • Voice conversion can be applied to a variety of speech-processing applications and can also play an important role in augmenting training data for speech recognition. Conventional methods perform voice conversion through a speech-synthesis pipeline, in which the mel filterbank is the key parameter. The mel filterbank is convenient for neural network training and fast to compute, but a vocoder is required to generate a natural speech waveform, and the approach is not effective for obtaining diverse data for speech recognition. To address this problem, this paper attempts to convert the speech signal itself using the raw spectrum, and proposes a transformer-based deep learning architecture that uses the attention mechanism to efficiently find relationships between spectral components and learn features for conversion. Individual digit conversion models were trained on the TIDIGITS corpus of spoken English digits, and the results were evaluated through a continuous-digit conversion decoder. Thirty listeners rated the naturalness and similarity of the converted speech, yielding scores of 3.52 ± 0.22 for naturalness and 3.89 ± 0.19 for similarity.

수어 번역을 위한 3차원 컨볼루션 비전 트랜스포머 (Three-Dimensional Convolutional Vision Transformer for Sign Language Translation)

  • 성호렬;조현중
    • The Transactions of the Korea Information Processing Society / Vol. 13, No. 3 / pp. 140-147 / 2024
  • In Korea, people with hearing impairments form the second-largest group of registered persons with disabilities, after those with physical disabilities. Nevertheless, sign language machine translation has developed slowly because its market potential is small and rigorously annotated datasets are scarce. Meanwhile, many transformer-based models have recently been proposed in computer vision and pattern recognition, achieving strong performance in action recognition, video classification, and related tasks. Accordingly, attempts have been made to improve sign language machine translation by introducing transformers. This paper proposes 3D-CvT, which fuses a transformer with a 3D-CNN for the recognition stage of sign language translation. Experiments on PHOENIX-Weather-2014T [1] demonstrate that the proposed model is efficient, achieving translation performance comparable to existing models with less computation.

트랜스포머 기반 판별 특징 학습 비전을 통한 얼굴 조작 감지 (Facial Manipulation Detection with Transformer-based Discriminative Features Learning Vision)

  • ;김민수;최필주;이석환;;권기룡
    • Korea Information Processing Society: Proceedings of the Annual Conference / 2023 KIPS Fall Conference / pp. 540-542 / 2023
  • Due to the serious issues posed by facial manipulation technologies, many researchers are becoming increasingly interested in the identification of face forgeries. Most existing face forgery detection methods leverage the powerful data-adaptation ability of neural networks to derive distinguishing traits. These deep learning-based methods typically treat fake-face detection as a binary classification problem and use softmax loss to supervise CNN training. However, features learned under softmax loss alone are insufficiently discriminative. To overcome these limitations, this study introduces a novel discriminative feature learning method based on the Vision Transformer architecture. Additionally, a separation-center loss is designed to compress the intra-class variation of original faces while enhancing inter-class differences in the embedding space.
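A simplified reading of the separation-center loss is: pull original-face embeddings toward their class center while pushing manipulated-face embeddings at least a margin away from it. The formulation below is a hypothetical sketch of that idea, not the paper's exact loss.

```python
import math

def separation_center_loss(real_feats, fake_feats, margin=1.0):
    """Toy center-based loss: compress real embeddings around their center,
    hinge-penalize fake embeddings that lie within `margin` of it."""
    d = len(real_feats[0])
    center = [sum(f[i] for f in real_feats) / len(real_feats) for i in range(d)]

    def dist(f):
        return math.sqrt(sum((fi - ci) ** 2 for fi, ci in zip(f, center)))

    intra = sum(dist(f) ** 2 for f in real_feats) / len(real_feats)
    push = sum(max(0.0, margin - dist(f)) ** 2 for f in fake_feats) / len(fake_feats)
    return intra + push

real = [[0.0, 0.0], [0.2, 0.0]]   # embeddings of original faces
fake = [[2.0, 0.0]]               # embedding of a manipulated face
loss = separation_center_loss(real, fake)
print(loss)  # small: reals are tight, the fake is already beyond the margin
```

A fake embedding inside the margin would add a positive hinge term, so minimizing this loss both tightens the real cluster and repels forgeries from it.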

탄소섬유강화플라스틱 재료 레저선박의 구조강도 평가를 위한 시험설비 구축과 운용에 관한 연구 (The Development of Structural Test Facility for the Strength Assessment of CFRP Marine Leisure Boat)

  • 정한구;장양;염덕준
    • Journal of the Society of Naval Architects of Korea / Vol. 54, No. 4 / pp. 312-320 / 2017
  • This paper deals with the development of a structural test facility for the strength assessment of marine leisure boats built from carbon fiber reinforced plastic (CFRP) materials. The facility consists of a test jig, a load application and control system, and a data acquisition system. The test jig and the load application and control system are designed to accommodate various sizes and short span-to-depth ratios of single-skin, top-hat-stiffened, and sandwich constructions in plated structural formats such as square and rectangular shapes. Lateral pressure, a typical and important load condition for the hull plates of a marine leisure boat, is simulated by a number of hydraulic cylinders operated automatically and manually. To examine and operate the facility, five carbon/epoxy FRP square plates with a test section area of 1 m², taken from a CFRP marine leisure boat hull, were prepared and subjected to monotonically increasing lateral pressure loads. In the test preparation, exploiting the symmetry of the plate geometry, various strain gauges and linear variable differential transformers were used in conjunction with a LabVIEW-based data acquisition system. From the test observations, the responses of the CFRP hull structure of the marine leisure boat were characterized through load-deflection and load-strain curves.

A Novel Two-Stage Training Method for Unbiased Scene Graph Generation via Distribution Alignment

  • Dongdong Jia;Meili Zhou;Wei WEI;Dong Wang;Zongwen Bai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • 제17권12호
    • /
    • pp.3383-3397
    • /
    • 2023
  • Scene graphs serve as semantic abstractions of images and play a crucial role in enhancing visual comprehension and reasoning. However, the performance of scene graph generation (SGG) is often compromised by biased data in real-world situations. While many existing systems use a single learning stage for both feature extraction and classification, some employ class-balancing strategies such as re-weighting, data resampling, and head-to-tail transfer learning. In this paper, we propose a novel approach that decouples the feature extraction and classification phases of the scene graph generation process. For feature extraction, we leverage a transformer-based architecture and design an adaptive calibration function specifically for predicate classification, which dynamically adjusts the classification score for each predicate category. Additionally, we introduce a distribution alignment technique that balances the class distribution after the feature extraction phase reaches a stable state, facilitating retraining of the classification head. Importantly, our distribution alignment strategy is model-independent and requires no additional supervision, making it applicable to a wide range of SGG models. Using the scene graph diagnostic toolkit on Visual Genome with several popular models, we achieve significant improvements over previous state-of-the-art methods. Compared to the TDE model, our model improves mR@100 by 70.5% for PredCls, 84.0% for SGCls, and 97.6% for SGDet.
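The calibration idea can be illustrated with generic logit adjustment for long-tailed classification: subtract a temperature-scaled log prior from each predicate's score so that tail predicates are no longer dominated by head ones. This is a standard long-tail calibration sketch, not the paper's actual adaptive calibration function.

```python
import math

def align_logits(logits, class_counts, tau=1.0):
    """Calibrate classification scores against a long-tailed distribution
    by subtracting tau * log(prior) per class."""
    total = sum(class_counts)
    return [z - tau * math.log(c / total) for z, c in zip(logits, class_counts)]

raw = [2.0, 2.0]                    # head and tail predicate scores tie
adjusted = align_logits(raw, [900, 100])
print(adjusted[1] > adjusted[0])    # True: the rarer predicate is boosted
```

Because the adjustment depends only on class frequencies, it can be applied after any feature extractor stabilizes, matching the model-independent spirit of the distribution alignment step described above.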