• Title/Summary/Keyword: attention 기법

Comparison of Pointer Network-based Dependency Parsers Depending on Attention Mechanisms (Attention Mechanism에 따른 포인터 네트워크 기반 의존 구문 분석 모델 비교)

  • Han, Mirae;Park, Seongsik;Kim, Harksoo
    • Annual Conference on Human and Language Technology / 2021.10a / pp.274-277 / 2021
  • Dependency parsing is a natural language processing task that analyzes sentence structure by predicting the relations between dependents and heads within a sentence. Recent deep learning-based dependency parsing research has mainly used pointer networks. The performance of a pointer network can vary depending on the attention mechanism it uses internally. In this paper, we therefore compare and analyze the attention mechanisms applied to pointer network models and identify the most effective attention mechanism for Korean dependency parsing. In experiments on the KLUE dataset, biaffine attention achieved the highest UAS of 95.14%, while multi-head attention achieved the highest LAS of 92.85%.
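
As a rough illustration of the arc-scoring mechanism the abstract compares, below is a minimal PyTorch sketch of biaffine attention; the class name, dimensions, and initialization are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class BiaffineAttention(nn.Module):
    """Scores every (head, dependent) pair with a biaffine form: s = h^T U d + W[h; d]."""
    def __init__(self, dim: int):
        super().__init__()
        self.U = nn.Parameter(torch.empty(dim, dim))
        self.W = nn.Linear(2 * dim, 1)
        nn.init.xavier_uniform_(self.U)

    def forward(self, heads: torch.Tensor, deps: torch.Tensor) -> torch.Tensor:
        # heads, deps: (batch, seq_len, dim) token representations
        bilinear = heads @ self.U @ deps.transpose(1, 2)           # (B, N, N)
        n = heads.size(1)
        hi = heads.unsqueeze(2).expand(-1, -1, n, -1)              # (B, N, N, dim)
        dj = deps.unsqueeze(1).expand(-1, n, -1, -1)               # (B, N, N, dim)
        linear = self.W(torch.cat([hi, dj], dim=-1)).squeeze(-1)   # (B, N, N)
        return bilinear + linear                                    # arc scores

scores = BiaffineAttention(128)(torch.randn(2, 10, 128), torch.randn(2, 10, 128))
print(scores.shape)  # torch.Size([2, 10, 10])
```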

Improving Adversarial Robustness via Attention (Attention 기법에 기반한 적대적 공격의 강건성 향상 연구)

  • Jaeuk Kim;Myung Gyo Oh;Leo Hyun Park;Taekyoung Kwon
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.4 / pp.621-631 / 2023
  • Adversarial training improves the robustness of deep neural networks against adversarial examples. However, previous adversarial training methods focus only on the adversarial loss function, ignoring that even a small perturbation of the input layer causes a significant change in the hidden-layer features. Consequently, the accuracy of a defended model is reduced in various untrained situations, such as clean samples or other attack techniques. Therefore, an architectural perspective is necessary to improve feature representation power and solve this problem. In this paper, we apply an attention module that generates an attention map of the input image to a general model and perform PGD adversarial training on the augmented model. In our experiments on the CIFAR-10 dataset, the attention-augmented model showed higher accuracy than the general model regardless of the network structure. In particular, the robust accuracy of our approach was consistently higher for various attacks such as PGD, FGSM, and BIM, as well as for more powerful adversaries. By visualizing the attention map, we further confirmed that the attention module extracts features of the correct class even for adversarial examples.
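
The sketch below shows the two generic ingredients the abstract combines, a spatial attention module that produces an attention map, and a standard L-infinity PGD attack used during adversarial training; it is a hedged approximation, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Produces a per-location attention map and reweights the feature map with it."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x):
        attn = torch.sigmoid(self.conv(x))   # (B, 1, H, W) attention map
        return x * attn, attn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-inf PGD: iteratively ascend the loss, projecting back onto the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back around the clean input
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

# Adversarial training step on the attention-augmented model (sketch):
# loss = F.cross_entropy(model(pgd_attack(model, images, labels)), labels)
```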

Attentional mechanisms for video retargeting and 3D compressive processing (비디오 재설정 및 3D 압축처리를 위한 어텐션 메커니즘)

  • Hwang, Jae-Jeong
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.4 / pp.943-950 / 2011
  • In this paper, we present an attention measurement method for 2D and 3D images/video to be applied to image and video retargeting and compressive processing. 2D attention is derived from three main components, intensity, color, and orientation, while depth information is added for 3D attention. A rarity-based attention method is presented to identify regions or objects of greater interest. Displaced depth information is matched to the attention probability in distorted stereo images, and finally a stereo distortion predictor is designed by integrating low-level human visual system (HVS) responses. As a result, an attention scheme more efficient than conventional methods is developed, and its performance is demonstrated by applying it to video retargeting.
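
A toy example of the rarity-based attention idea: pixels whose intensity is globally rare receive high saliency, measured as the self-information of their histogram bin. The NumPy function below is an illustrative sketch only, not the paper's actual measurement pipeline.

```python
import numpy as np

def rarity_saliency(gray: np.ndarray, bins: int = 64) -> np.ndarray:
    """Rarity-based attention for a grayscale image with values in [0, 1]."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    prob = hist / max(hist.sum(), 1)
    idx = np.clip((gray * bins).astype(int), 0, bins - 1)
    sal = -np.log(prob[idx] + 1e-12)                     # rare bins -> large saliency
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# A crude 3D extension could blend the 2D map with a normalized depth map, e.g.
# attention_3d = 0.5 * rarity_saliency(gray) + 0.5 * depth_norm
```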

Performance Analysis of Anomaly Area Segmentation in Industrial Products Based on Self-Attention Deep Learning Model (Self-Attention 딥러닝 모델 기반 산업 제품의 이상 영역 분할 성능 분석)

  • Changjoon Park;Namjung Kim;Junhwi Park;Jaehyun Lee;Jeonghwan Gwak
    • Proceedings of the Korean Society of Computer Information Conference / 2024.01a / pp.45-46 / 2024
  • In this paper, we apply the Dense Prediction Transformer (DPT), a Self-Attention-based deep learning model, to the MVTec Anomaly Detection (MVTec AD) dataset to segment anomalous regions in real industrial product images. Applying the DPT model alleviates the limitations of conventional Convolutional Neural Network (CNN)-based anomaly detection, namely local feature extraction and a fixed receptive field. For anomaly segmentation on real industrial product data, it improved performance by 1.14% over the best-performing model built on the U-Net architecture, the previous mainstream approach, demonstrating that Self-Attention-based deep learning is effective for anomaly segmentation of industrial products.
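
To illustrate why self-attention sidesteps the fixed receptive field of CNNs, here is a minimal patch-level self-attention block in PyTorch; it is a generic sketch, not the DPT architecture itself, and all names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class PatchSelfAttention(nn.Module):
    """Self-attention over image patches: every patch attends to every other patch,
    giving a global receptive field in a single layer, unlike a CNN whose receptive
    field is fixed by kernel size and depth."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, dim), e.g. flattened 16x16 patch embeddings
        out, _ = self.attn(patches, patches, patches)
        return self.norm(patches + out)

print(PatchSelfAttention()(torch.randn(1, 196, 256)).shape)  # torch.Size([1, 196, 256])
```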

Object Detection Model Using Attention Mechanism (주의 집중 기법을 활용한 객체 검출 모델)

  • Kim, Geun-Sik;Bae, Jung-Soo;Cha, Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.12 / pp.1581-1587 / 2020
  • With the emergence of convolutional neural networks in machine learning, models for solving image processing problems have developed rapidly. However, the computing resources required are also rising, making it difficult to train them in a typical environment. The attention mechanism was originally proposed to prevent the vanishing gradient problem of recurrent neural networks, but it can also be used to benefit the training of convolutional neural networks. In this paper, an attention mechanism is applied to a convolutional neural network, and the merit of the proposed method is demonstrated by comparing training time and performance. In YOLO-based object detection, the proposed model was superior in both training time and performance to models without an attention mechanism, and experiments showed that training time could be significantly reduced. In addition, this is expected to make machine learning more accessible to end users.
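
A common way to add attention to a convolutional detection backbone is a squeeze-and-excitation style channel attention block, sketched below; the abstract does not specify the module used, so this is only an illustrative assumption.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global-average-pool the feature
    map, pass it through a small bottleneck MLP, and rescale the channels."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3)))      # (B, C) channel importances
        return x * weights.view(b, c, 1, 1)        # reweighted feature map

# Such a block can be dropped in after convolutional stages of a detection backbone.
print(ChannelAttention(64)(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```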

Double-attention mechanism of sequence-to-sequence deep neural networks for automatic speech recognition (음성 인식을 위한 sequence-to-sequence 심층 신경망의 이중 attention 기법)

  • Yook, Dongsuk;Lim, Dan;Yoo, In-Chul
    • The Journal of the Acoustical Society of Korea / v.39 no.5 / pp.476-482 / 2020
  • Sequence-to-sequence deep neural networks with attention mechanisms have shown superior performance across various domains, where the sizes of the input and the output sequences may differ. However, if the input sequences are much longer than the output sequences, and the characteristic of the input sequence changes within a single output token, the conventional attention mechanisms are inappropriate, because only a single context vector is used for each output token. In this paper, we propose a double-attention mechanism to handle this problem by using two context vectors that cover the left and the right parts of the input focus separately. The effectiveness of the proposed method is evaluated using speech recognition experiments on the TIMIT corpus.
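
One possible realization of the double-attention idea, splitting the attention distribution at its peak and forming separate left and right context vectors, is sketched below; the paper's exact formulation may differ, and all names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def double_attention(query, keys, values):
    """Split the attention distribution at its peak and build separate left and right
    context vectors. query: (dim,); keys, values: (T, dim)."""
    weights = F.softmax(keys @ query, dim=0)        # (T,) attention over input steps
    focus = int(weights.argmax())                   # current focus of the attention
    left_w, right_w = weights.clone(), weights.clone()
    left_w[focus + 1:] = 0                          # keep the focus and everything left of it
    right_w[:focus + 1] = 0                         # keep everything right of the focus
    c_left = (left_w / (left_w.sum() + 1e-8)) @ values
    c_right = (right_w / (right_w.sum() + 1e-8)) @ values
    return torch.cat([c_left, c_right])             # doubled context vector

print(double_attention(torch.randn(8), torch.randn(20, 8), torch.randn(20, 8)).shape)
# torch.Size([16])
```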

Image-Based Skin Cancer Classification System Using Attention Layer (Attention layer를 활용한 이미지 기반 피부암 분류 시스템)

  • GyuWon Lee;SungHee Woo
    • Journal of Practical Engineering Education / v.16 no.1_spc / pp.59-64 / 2024
  • As the aging population grows, the incidence of cancer is increasing. Skin cancer appears externally, but people often fail to notice it or simply overlook it. As a result, if the window for early detection is missed, the survival rate for late-stage cancer is only 7.5-11%. However, diagnosing serious skin cancer requires considerable time and money, such as a detailed examination and cell tests, rather than a simple visual diagnosis. To overcome these challenges, we propose an Attention-based CNN skin cancer classification system. If skin cancer can be detected early, it can be treated quickly, and the proposed system can greatly assist the work of a specialist. To mitigate the imbalance of image data across skin cancer types, the classification model applies an oversampling technique to balance the skewed class distribution and compares a pre-trained model augmented with an Attention layer against the same model without the Attention layer. We also plan to address the data imbalance problem by strengthening data augmentation techniques for specific classes.
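
For the class-imbalance handling the abstract mentions, a typical oversampling setup in PyTorch uses WeightedRandomSampler so minority classes are drawn more often; the class counts and image sizes below are made up for illustration.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy labels with a heavily imbalanced class distribution (counts are illustrative).
labels = torch.tensor([0] * 200 + [1] * 30 + [2] * 10)
class_counts = torch.bincount(labels).float()
sample_weights = (1.0 / class_counts)[labels]    # rare classes are drawn more often

sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
dataset = TensorDataset(torch.randn(len(labels), 3, 32, 32), labels)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)  # roughly class-balanced batches
```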

The Latest Trends in Attention Mechanisms and Their Application in Medical Imaging (어텐션 기법 및 의료 영상에의 적용에 관한 최신 동향)

  • Hyungseob Shin;Jeongryong Lee;Taejoon Eo;Yohan Jun;Sewon Kim;Dosik Hwang
    • Journal of the Korean Society of Radiology / v.81 no.6 / pp.1305-1333 / 2020
  • Deep learning has recently achieved remarkable results in the field of medical imaging. However, as a deep learning network becomes deeper to improve its performance, it becomes more difficult to interpret the processes within. This can especially be a critical problem in medical fields where diagnostic decisions are directly related to a patient's survival. In order to solve this, explainable artificial intelligence techniques are being widely studied, and an attention mechanism was developed as part of this approach. In this paper, attention techniques are divided into two types: post hoc attention, which aims to analyze a network that has already been trained, and trainable attention, which further improves network performance. Detailed comparisons of each method, examples of applications in medical imaging, and future perspectives will be covered.
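
As a concrete example of the post-hoc category, the sketch below computes a simple gradient-saliency map for an already-trained classifier; trainable attention would instead insert attention modules optimized jointly with the task loss. This is a generic illustration, not a method drawn from the review.

```python
import torch
import torch.nn as nn

def gradient_saliency(model: nn.Module, image: torch.Tensor, target: int) -> torch.Tensor:
    """Post-hoc attention for a trained classifier: magnitude of the class-score
    gradient with respect to each input pixel."""
    model.eval()
    image = image.clone().requires_grad_(True)     # image: (C, H, W)
    score = model(image.unsqueeze(0))[0, target]   # logit of the class of interest
    score.backward()
    return image.grad.abs().max(dim=0).values      # (H, W) saliency / attention map
```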

Improving dam inflow prediction in LSTM-s2s model with luong attention (Attention 기법을 통한 LSTM-s2s 모델의 댐유입량 예측 개선)

  • Jonghyeok Lee;Yeonjoo Kim
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.226-226 / 2023
  • Various Long Short-Term Memory (LSTM) methods are being actively applied and developed to predict river discharge, dam inflow, and related variables. Recent studies have shown that the performance of LSTM can be improved with sequence-to-sequence (s2s) structures and attention mechanisms. Accordingly, this study builds an LSTM-s2s model and an LSTM-s2s model with attention added, performs inflow prediction using hourly data, and examines the applicability of the models to actual dam operation. For the Soyang River Dam basin, hourly inflow data and synoptic weather station temperature and precipitation data from 2013 to 2020 were divided into training, validation, and evaluation sets, and the models were then evaluated. R2, RRMSE, CC, NSE, and PBIAS were used to determine the optimal sequence length. The results show that the model with attention generally outperformed the LSTM-s2s model and also showed higher accuracy in predicting peak values. Both models reflected the flow pattern well during peak events but had limitations in simulating fine-grained hourly variation patterns. Despite these limitations in hourly prediction, the LSTM-s2s model with attention is judged to be applicable to future dam inflow prediction.
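
A minimal sketch of Luong-style (dot-product) attention between a decoder state and encoder outputs, the kind of mechanism added on top of an LSTM-s2s model; shapes and names here are illustrative assumptions, not the study's configuration.

```python
import torch
import torch.nn.functional as F

def luong_attention(decoder_state, encoder_outputs):
    """Dot-product (Luong-style) attention: score each encoder step against the current
    decoder state and return the attention-weighted context vector.
    decoder_state: (batch, hidden); encoder_outputs: (batch, T, hidden)."""
    scores = torch.bmm(encoder_outputs, decoder_state.unsqueeze(2)).squeeze(2)   # (B, T)
    weights = F.softmax(scores, dim=1)
    context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)        # (B, hidden)
    return context, weights

ctx, w = luong_attention(torch.randn(4, 64), torch.randn(4, 72, 64))
print(ctx.shape, w.shape)  # torch.Size([4, 64]) torch.Size([4, 72])
```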

Speaker verification system combining attention-long short term memory based speaker embedding and I-vector in far-field and noisy environments (Attention-long short term memory 기반의 화자 임베딩과 I-vector를 결합한 원거리 및 잡음 환경에서의 화자 검증 알고리즘)

  • Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.39 no.2 / pp.137-142 / 2020
  • Many studies based on the I-vector have been conducted in a variety of environments, from text-dependent short utterances to text-independent long utterances. In this paper, we propose a speaker verification system for far-field and noisy environments that combines an I-vector with Probabilistic Linear Discriminant Analysis (PLDA) and a speaker embedding from a Long Short Term Memory (LSTM) network with an attention mechanism. The Equal Error Rate (EER) of the LSTM model is 15.52 %, and that of the Attention-LSTM model is 8.46 %, an improvement of 7.06 percentage points. We show that the proposed method addresses the problem of the existing extraction process, which defines the embedding heuristically. The EER of the I-vector/PLDA system without the combination is 6.18 %, the best performance among the individual systems. Combined with the attention-LSTM based embedding, the EER is 2.57 %, which is 3.61 percentage points lower than the baseline system, a relative improvement of 58.41 %.
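
A rough sketch of an attention-based LSTM speaker embedder, attentive pooling over frame-level LSTM outputs followed by a projection; dimensions and layer choices are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveLSTMEmbedder(nn.Module):
    """LSTM over acoustic frames followed by attentive pooling, so informative frames
    receive larger weights than a plain average would give them."""
    def __init__(self, feat_dim: int = 40, hidden: int = 256, emb_dim: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)
        self.proj = nn.Linear(hidden, emb_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, T, feat_dim), e.g. log-mel filterbank features
        h, _ = self.lstm(frames)                        # (B, T, hidden)
        w = F.softmax(self.score(h), dim=1)             # (B, T, 1) frame weights
        pooled = (w * h).sum(dim=1)                     # (B, hidden) attentive pooling
        return F.normalize(self.proj(pooled), dim=1)    # length-normalized speaker embedding

print(AttentiveLSTMEmbedder()(torch.randn(2, 300, 40)).shape)  # torch.Size([2, 128])
```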