• Title/Summary/Keyword: Attention Network


Hybrid CTC-Attention Based End-to-End Speech Recognition Using Korean Grapheme Unit (한국어 자소 기반 Hybrid CTC-Attention End-to-End 음성 인식)

  • Park, Hosung;Lee, Donghyun;Lim, Minkyu;Kang, Yoseb;Oh, Junseok;Seo, Soonshin;Rim, Daniel;Kim, Ji-Hwan
    • Annual Conference on Human and Language Technology / 2018.10a / pp.453-458 / 2018
  • This paper proposes end-to-end speech recognition based on a hybrid CTC-Attention model that uses Korean graphemes as the recognition unit. End-to-end speech recognition replaces the conventional pipeline of separate modules, a DNN-HMM acoustic model, an N-gram language model, and a WFST-based decoding network, with a single DNN. To estimate the output of the end-to-end model, this paper uses a grapheme-level output layer. When the network is built on graphemes, the number of output parameters to be estimated drops from 11,172 to 49, which enables more efficient training. To implement this, the end-to-end model is constructed by combining CTC and an attention network, the DNN architectures most commonly used for end-to-end training. Experiments show a syllable error rate of 10.05%.

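As a rough illustration of the hybrid objective described in this abstract, the sketch below combines a CTC loss and an attention decoder's cross-entropy over a shared encoder in PyTorch; the layer sizes, the 49-grapheme vocabulary layout, and the `ctc_weight` value are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a hybrid CTC-Attention objective over a shared encoder.
# Sizes, the 49-grapheme vocabulary, and ctc_weight are illustrative assumptions.
import torch
import torch.nn as nn

NUM_GRAPHEMES = 49 + 1          # 49 Korean graphemes + CTC blank (assumed layout)
ENC_DIM, T_ENC, T_DEC, BATCH = 256, 100, 20, 4

encoder_out = torch.randn(T_ENC, BATCH, ENC_DIM)          # shared encoder features
ctc_head = nn.Linear(ENC_DIM, NUM_GRAPHEMES)               # CTC branch
att_logits = torch.randn(BATCH, T_DEC, NUM_GRAPHEMES)      # attention-decoder logits (stub)

targets = torch.randint(1, NUM_GRAPHEMES, (BATCH, T_DEC))  # grapheme label sequences
input_lens = torch.full((BATCH,), T_ENC, dtype=torch.long)
target_lens = torch.full((BATCH,), T_DEC, dtype=torch.long)

# CTC branch: log-probabilities over encoder frames, blank index 0.
log_probs = ctc_head(encoder_out).log_softmax(dim=-1)
ctc_loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lens, target_lens)

# Attention branch: per-step cross-entropy against the same grapheme targets.
att_loss = nn.CrossEntropyLoss()(att_logits.reshape(-1, NUM_GRAPHEMES), targets.reshape(-1))

# Hybrid objective: weighted sum of the two losses (weight is an assumption).
ctc_weight = 0.3
loss = ctc_weight * ctc_loss + (1.0 - ctc_weight) * att_loss
print(float(loss))
```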

Attention Capsule Network for Aspect-Level Sentiment Classification

  • Deng, Yu;Lei, Hang;Li, Xiaoyu;Lin, Yiou;Cheng, Wangchi;Yang, Shan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1275-1292 / 2021
  • As a fine-grained classification problem, aspect-level sentiment classification predicts the sentiment polarity of different aspects in context. To address this issue, researchers have widely used attention mechanisms to abstract the relationship between context and aspects. Still, it is difficult to obtain a deeper semantic representation effectively, and the strong correlation between local context features and aspect-based sentiment is rarely considered. In this paper, a hybrid attention capsule network for aspect-level sentiment classification (ABASCap) is proposed. In this model, multi-head self-attention is improved and a context mask mechanism based on an adjustable context window is introduced, so as to capture the internal association between aspects and their context effectively. Moreover, the dynamic routing algorithm and activation function of the capsule network are optimized to meet the task requirements. Finally, extensive experiments were conducted on three benchmark datasets from different domains. Compared with the baseline models, ABASCap achieved better classification results and, after incorporating pre-trained BERT, outperformed the state-of-the-art methods on this task.
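
The context-mask idea sketched in this abstract can be illustrated roughly as below: self-attention scores are masked so that only tokens within an adjustable window around the aspect span are attended to. The window size, masking value, and tensor layout are assumptions, not the exact ABASCap formulation.

```python
# Sketch of a local context-window mask applied to self-attention scores.
# Window size, sentence layout, and masking value are illustrative assumptions.
import torch
import torch.nn.functional as F

def context_window_mask(seq_len, aspect_start, aspect_end, window):
    """Allow attention only to tokens within `window` of the aspect span."""
    idx = torch.arange(seq_len)
    dist = torch.clamp(aspect_start - idx, min=0) + torch.clamp(idx - aspect_end, min=0)
    return dist <= window                      # True = keep, False = mask out

seq_len, d = 10, 16
q, k, v = torch.randn(seq_len, d), torch.randn(seq_len, d), torch.randn(seq_len, d)

keep = context_window_mask(seq_len, aspect_start=4, aspect_end=5, window=2)
scores = q @ k.t() / d ** 0.5
scores = scores.masked_fill(~keep.unsqueeze(0), float("-inf"))  # hide distant tokens
out = F.softmax(scores, dim=-1) @ v
print(out.shape)  # torch.Size([10, 16])
```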

Malicious URL Recognition and Detection Using Attention-Based CNN-LSTM

  • Peng, Yongfang;Tian, Shengwei;Yu, Long;Lv, Yalong;Wang, Ruijin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5580-5593 / 2019
  • A malicious Uniform Resource Locator (URL) recognition and detection method based on the combination of an attention mechanism with a Convolutional Neural Network and a Long Short-Term Memory network (Attention-Based CNN-LSTM) is proposed. Firstly, a WHOIS check is used to extract and filter features, including URL texture information, statistical attributes of the URL string, and WHOIS information; the features are then encoded, pre-processed, and fed into the convolution layers of a Convolutional Neural Network (CNN) to extract local features. Secondly, weighted by the attention mechanism, the local features are passed to a Long Short-Term Memory (LSTM) model and subsequently pooled to compute the global features of the URLs. Finally, the URLs are detected and classified from the global features by a softmax function. The results demonstrate that, compared with existing methods, the attention-based CNN-LSTM achieves higher accuracy for malicious URL detection.
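
A minimal sketch of the described pipeline, character embedding, a CNN for local features, attention-weighted LSTM states pooled into a global feature, then a softmax classifier, is given below; the character vocabulary, layer sizes, and pooling details are assumptions rather than the paper's settings.

```python
# Minimal sketch of an attention-weighted CNN-LSTM classifier for URL strings.
# Vocabulary, layer sizes, and pooling details are illustrative assumptions.
import torch
import torch.nn as nn

class AttnCNNLSTM(nn.Module):
    def __init__(self, vocab=128, emb=32, channels=64, hidden=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, channels, kernel_size=3, padding=1)  # local features
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)                                # attention scores
        self.fc = nn.Linear(hidden, classes)

    def forward(self, url_chars):                    # (batch, length) character codes
        x = self.emb(url_chars).transpose(1, 2)      # (batch, emb, length)
        x = torch.relu(self.conv(x)).transpose(1, 2) # (batch, length, channels)
        h, _ = self.lstm(x)                          # (batch, length, hidden)
        w = torch.softmax(self.attn(h), dim=1)       # attention weights over positions
        pooled = (w * h).sum(dim=1)                  # weighted global feature
        return self.fc(pooled)                       # class logits (softmax at inference)

urls = torch.randint(0, 128, (4, 60))                # 4 dummy URLs, 60 chars each
print(AttnCNNLSTM()(urls).shape)                     # torch.Size([4, 2])
```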

Aspect-Based Sentiment Analysis with Position Embedding Interactive Attention Network

  • Xiang, Yan;Zhang, Jiqun;Zhang, Zhoubin;Yu, Zhengtao;Xian, Yantuan
    • Journal of Information Processing Systems / v.18 no.5 / pp.614-627 / 2022
  • Aspect-based sentiment analysis aims to discover the sentiment polarity towards an aspect in user-generated natural language. So far, most methods only use the implicit position information of the aspect in the context rather than directly exploiting the positional relationship between the aspect and the sentiment terms; in fact, words neighboring the aspect terms should receive more attention than other words in the context. This paper studies the influence of different position embedding methods on the sentiment polarity of a given aspect and proposes a position embedding interactive attention network based on a long short-term memory network. Firstly, it uses the position information of the context in both the input layer and the attention layer. Secondly, it mines the importance of different context words for the aspect with an interactive attention mechanism. Finally, it generates a valid representation of the aspect and the context for sentiment classification. The proposed model was evaluated on the SemEval 2014 datasets. Compared with other baseline models, the accuracy of our model increases by about 2% on the restaurant dataset and 1% on the laptop dataset.
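
The position-weighting and interactive-attention steps can be sketched as follows; the distance-based weighting formula, the aspect span, and the layer sizes are illustrative assumptions, not the paper's exact definitions.

```python
# Sketch of position weighting plus a simple interactive attention step between
# an aspect representation and its context. Formula and sizes are assumptions.
import torch
import torch.nn.functional as F

seq_len, d = 12, 32
context = torch.randn(seq_len, d)                 # LSTM outputs for context words
aspect = torch.randn(3, d)                        # LSTM outputs for aspect words

# Position weights: words closer to the aspect span (assumed positions 5..7) count more.
pos = torch.arange(seq_len, dtype=torch.float)
dist = torch.minimum((pos - 5).abs(), (pos - 7).abs())
context = context * (1.0 - dist / seq_len).unsqueeze(1)

# Interactive attention: each side attends to the mean summary of the other side.
ctx_query = aspect.mean(dim=0)                    # summarize the aspect
asp_query = context.mean(dim=0)                   # summarize the context
ctx_rep = F.softmax(context @ ctx_query, dim=0) @ context   # context attended by aspect
asp_rep = F.softmax(aspect @ asp_query, dim=0) @ aspect     # aspect attended by context
final = torch.cat([ctx_rep, asp_rep])             # joint representation for classification
print(final.shape)                                # torch.Size([64])
```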

Analysis of the Online Review Based on the Theme Using the Hierarchical Attention Network (Hierarchical Attention Network를 활용한 주제에 따른 온라인 고객 리뷰 분석 모델)

  • Jang, In Ho;Park, Ki Yeon;Lee, Zoon Ky
    • Journal of Information Technology Services / v.17 no.2 / pp.165-177 / 2018
  • Online commerce has become increasingly common thanks to the development of mobile technology and the spread of smart devices, and online reviews have a strong influence on potential buyers' purchase decisions. This study presents a set of analytical methods for understanding the meaning of customer product reviews in online transactions. It implements a Hierarchical Attention Network, a recently developed deep learning technique, to analyze the meaning of online reviews. This approach reduces the time-consuming manual pre-processing of the data and handles reviews that span multiple topics. To this end, the study analyzes customer reviews of laptops sold in domestic online shopping malls and achieves over 90% classification accuracy. The results show that unstructured review text can be classified through semantic analysis and confirm the practical applicability of the proposed review analysis process.
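
A compact sketch of the hierarchical attention idea, word-level attention building sentence vectors and sentence-level attention building the review vector, is shown below; the GRU sizes, scorers, and single-review batch are assumptions.

```python
# Compact sketch of hierarchical attention: word-level attention -> sentence vectors,
# sentence-level attention -> review vector. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

def attend(h, scorer):
    """Weighted sum of GRU outputs h (steps, dim) using a learned scorer."""
    w = torch.softmax(scorer(h), dim=0)           # (steps, 1) attention weights
    return (w * h).sum(dim=0)                     # (dim,)

emb_dim, hid, n_classes = 32, 64, 5
word_gru = nn.GRU(emb_dim, hid, batch_first=True)
sent_gru = nn.GRU(hid, hid, batch_first=True)
word_scorer, sent_scorer = nn.Linear(hid, 1), nn.Linear(hid, 1)
classifier = nn.Linear(hid, n_classes)

review = torch.randn(4, 10, emb_dim)              # 4 sentences x 10 word embeddings
sent_vecs = torch.stack([attend(word_gru(s.unsqueeze(0))[0][0], word_scorer)
                         for s in review])        # (4, hid): one vector per sentence
doc_vec = attend(sent_gru(sent_vecs.unsqueeze(0))[0][0], sent_scorer)
print(classifier(doc_vec).shape)                  # torch.Size([5]) topic/sentiment logits
```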

A Study on Various Attention for Improving Performance in Single Image Super Resolution (초고해상도 복원에서 성능 향상을 위한 다양한 Attention 연구)

  • Mun, Hwanbok;Yoon, Sang Min
    • Journal of Broadcast Engineering / v.25 no.6 / pp.898-910 / 2020
  • Single-image super-resolution has long been studied in computer vision because of its many applications. Various deep learning-based super-resolution algorithms have recently been introduced to improve performance by reducing artifacts such as blurring and staircase effects. Most deep learning-based approaches focus on the network architecture, the loss function, and the training strategy. Meanwhile, several approaches use attention modules, which emphasize the extracted features, to enhance performance without adding extra layers: an attention module emphasizes or rescales the feature map from various perspectives to suit the purpose of the network. In this paper, we apply various channel attention and spatial attention designs to single-image super-resolution and analyze the results and performance according to the architecture of the attention module. We also explore the design of a multi-attention module that emphasizes features efficiently from multiple perspectives.
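
For reference, a minimal sketch of the two attention styles discussed here, a squeeze-and-excitation style channel attention and a convolutional spatial attention, follows; the reduction ratio and kernel size are assumed values, not the paper's configurations.

```python
# Assumed forms of channel attention (SE-style) and spatial attention for SR features.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(), nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):                 # rescale each channel by a learned weight
        return x * self.fc(x)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                 # rescale each spatial position
        stats = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(stats))

feat = torch.randn(1, 64, 48, 48)         # a feature map inside an SR network
print(SpatialAttention()(ChannelAttention(64)(feat)).shape)  # torch.Size([1, 64, 48, 48])
```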

Change Attention based Dense Siamese Network for Remote Sensing Change Detection (원격 탐사 변화 탐지를 위한 변화 주목 기반의 덴스 샴 네트워크)

  • Hwang, Gisu;Lee, Woo-Ju;Oh, Seoung-Jun
    • Journal of Broadcast Engineering / v.26 no.1 / pp.14-25 / 2021
  • Change detection, which finds changes between remote sensing images of the same location captured at different times, is very important because it is used in many applications. However, registration errors, building displacement errors, and shadow errors cause false positives. To solve these problems, we propose a novel deep convolutional network called CADNet (Change Attention Dense Siamese Network). CADNet uses an FPN (Feature Pyramid Network) to detect multi-scale changes, applies a Change Attention Module that attends to the changed regions, and uses DenseNet as a feature extractor so that feature maps containing both low-level and high-level features are available for change detection. In terms of precision, recall, and F1, CADNet achieves 98.44%, 98.47%, and 98.46% on the WHU dataset and 90.72%, 91.89%, and 91.30% on the LEVIR-CD dataset. These results show that CADNet offers better performance than conventional change detection methods.
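
One plausible reading of the change-attention step is sketched below: the absolute difference of the siamese feature pair is turned into a spatial map that emphasizes changed regions in both branches. The stand-in backbone, channel count, and 1x1 scorer are assumptions, not CADNet's actual design.

```python
# Sketch of a change-attention step on a siamese feature pair (assumed form).
import torch
import torch.nn as nn

class ChangeAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)    # collapse the diff to a map

    def forward(self, feat_t1, feat_t2):
        diff = (feat_t1 - feat_t2).abs()                      # where the two times differ
        attn = torch.sigmoid(self.score(diff))                # (N, 1, H, W) change map
        # Emphasize changed regions in both branches before change classification.
        return feat_t1 * attn, feat_t2 * attn, attn

backbone = nn.Conv2d(3, 32, 3, padding=1)                     # stand-in shared extractor
img_t1, img_t2 = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
f1, f2 = backbone(img_t1), backbone(img_t2)                   # shared (siamese) weights
a1, a2, change_map = ChangeAttention(32)(f1, f2)
print(a1.shape, change_map.shape)   # torch.Size([1, 32, 64, 64]) torch.Size([1, 1, 64, 64])
```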

SAR Recognition of Target Variants Using Channel Attention Network without Dimensionality Reduction (차원축소 없는 채널집중 네트워크를 이용한 SAR 변형표적 식별)

  • Park, Ji-Hoon;Choi, Yeo-Reum;Chae, Dae-Young;Lim, Ho
    • Journal of the Korea Institute of Military Science and Technology / v.25 no.3 / pp.219-230 / 2022
  • In implementing a robust automatic target recognition (ATR) system with synthetic aperture radar (SAR) imagery, one of the most important issues is the accurate classification of target variants, i.e., the same targets with different serial numbers, configurations, versions, etc. In this paper, a deep learning network with channel attention modules is proposed to cope with the recognition of target variants, based on previous findings that the channel attention mechanism selectively emphasizes features useful for target recognition. Unlike other existing attention methods, this paper employs channel attention modules without dimensionality reduction along the channel direction, so that the direct correspondence between feature map channels is preserved and features valuable for recognizing SAR target variants can be effectively derived. Experiments with the public benchmark dataset demonstrate that the proposed scheme is superior to networks with other existing channel attention modules.
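
The idea of channel attention without dimensionality reduction can be sketched as below, where per-channel weights come from a small 1-D convolution over the pooled channel vector instead of bottleneck FC layers; the kernel size and feature sizes are assumed, not the paper's settings.

```python
# Sketch of channel attention without dimensionality reduction: each channel keeps a
# direct correspondence to its own weight. Kernel size is an assumed value.
import torch
import torch.nn as nn

class NoReductionChannelAttention(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                                   # x: (N, C, H, W)
        y = self.pool(x).squeeze(-1).transpose(1, 2)        # (N, 1, C) pooled channels
        w = torch.sigmoid(self.conv(y)).transpose(1, 2)     # (N, C, 1) per-channel weight
        return x * w.unsqueeze(-1)                          # rescale without reduction

sar_feat = torch.randn(2, 64, 16, 16)                        # SAR CNN feature maps
print(NoReductionChannelAttention()(sar_feat).shape)         # torch.Size([2, 64, 16, 16])
```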

Two-dimensional attention-based multi-input LSTM for time series prediction

  • Kim, Eun Been;Park, Jung Hoon;Lee, Yung-Seop;Lim, Changwon
    • Communications for Statistical Applications and Methods / v.28 no.1 / pp.39-57 / 2021
  • Time series prediction is an area of great interest to many people. Algorithms for time series prediction are widely used in fields such as stock price, temperature, energy, and weather forecasting, and both classical models and recurrent neural networks (RNNs) have been actively developed. Since the attention mechanism was introduced to neural network models, many new models with improved performance have been developed, and models that use attention twice have recently been proposed, yielding further performance improvements. In this paper, we consider time series prediction by introducing attention twice into an RNN model. The proposed model introduces H-attention over the output values and T-attention over the time-step information to select useful information. We conduct experiments on stock price, temperature, and energy data and confirm that the proposed model outperforms existing models.
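
A rough sketch of applying attention twice around an LSTM for multi-input series, once over the input series at each step and once over the time steps of the hidden states, is given below as an analogue of the H-/T-attention idea; the scorers and sizes are assumptions, not the paper's model.

```python
# Sketch of two attention passes around an LSTM for multi-input time-series prediction.
# Scorers, sizes, and the single-output head are illustrative assumptions.
import torch
import torch.nn as nn

class TwoAttentionLSTM(nn.Module):
    def __init__(self, n_series=5, hidden=32):
        super().__init__()
        self.series_scorer = nn.Linear(n_series, n_series)    # attention over input series
        self.lstm = nn.LSTM(n_series, hidden, batch_first=True)
        self.time_scorer = nn.Linear(hidden, 1)                # attention over time steps
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                                      # x: (batch, T, n_series)
        a = torch.softmax(self.series_scorer(x), dim=-1)       # weight each input series
        h, _ = self.lstm(x * a)                                # (batch, T, hidden)
        t = torch.softmax(self.time_scorer(h), dim=1)          # weight each time step
        return self.out((t * h).sum(dim=1))                    # next-value prediction

series = torch.randn(8, 30, 5)           # 8 samples, 30 steps, 5 input series
print(TwoAttentionLSTM()(series).shape)  # torch.Size([8, 1])
```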

Deep Learning-based Super Resolution Method Using Combination of Channel Attention and Spatial Attention (채널 강조와 공간 강조의 결합을 이용한 딥 러닝 기반의 초해상도 방법)

  • Lee, Dong-Woo;Lee, Sang-Hun;Han, Hyun Ho
    • Journal of the Korea Convergence Society / v.11 no.12 / pp.15-22 / 2020
  • In this paper, we propose a deep learning-based super-resolution method that combines Channel Attention and Spatial Attention feature enhancement. During super-resolution it is important to restore high-frequency components, such as texture and edges, whose values change sharply between neighboring pixels. Existing CNN (Convolutional Neural Network)-based super-resolution methods are difficult to train at depth and place little emphasis on high-frequency components, resulting in blurred contours and distortion. To solve this problem, we use residual blocks together with an emphasis block that combines Channel Attention and Spatial Attention with a skip connection. The emphasized feature map is then expanded through sub-pixel convolution to obtain the super-resolved image. As a result, PSNR improved by about 5% and SSIM by about 3% compared with the conventional SRCNN, and PSNR improved by about 2% and SSIM by about 1% compared with VDSR.
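
A sketch of the overall flow, an emphasis block combining channel and spatial attention with a skip connection, followed by sub-pixel convolution (PixelShuffle) for upscaling, appears below; the layer sizes and the x2 scale factor are assumptions, not the paper's exact architecture.

```python
# Sketch of an emphasis block (channel + spatial attention + skip connection)
# followed by sub-pixel convolution upscaling. Sizes and scale are assumptions.
import torch
import torch.nn as nn

class EmphasisBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.channel = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(c, c, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(c, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        y = x * self.channel(x)           # channel emphasis
        y = y * self.spatial(y)           # spatial emphasis
        return x + y                      # skip connection keeps the original signal

channels, scale = 64, 2
head = nn.Conv2d(3, channels, 3, padding=1)
upsample = nn.Sequential(nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
                         nn.PixelShuffle(scale))              # sub-pixel convolution

lr_image = torch.randn(1, 3, 48, 48)
sr_image = upsample(EmphasisBlock(channels)(head(lr_image)))
print(sr_image.shape)                     # torch.Size([1, 3, 96, 96])
```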