• Title/Summary/Keyword: multi-head attention

Search results: 30

Improving Transformer with Dynamic Convolution and Shortcut for Video-Text Retrieval

  • Liu, Zhi; Cai, Jincen; Zhang, Mengmeng
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.7, pp.2407-2424, 2022
  • Recently, the Transformer has made great progress in video retrieval tasks thanks to its high representation capability. In a Transformer, the cascaded self-attention modules capture long-distance feature dependencies, but local feature detail tends to deteriorate, and increasing the depth of the structure is likely to introduce learning bias into the learned features. In this paper, an improved Transformer structure named TransDCS (Transformer with Dynamic Convolution and Shortcut) is proposed. A Multi-head Conv-Self-Attention module is introduced to model local dependencies and improve the efficiency of local feature extraction. Meanwhile, an augmented-shortcuts module based on a dual identity matrix is applied to strengthen the propagation of input features and mitigate the learning bias. The proposed model is tested on the MSRVTT, LSMDC, and ActivityNet benchmarks, where it surpasses previous solutions for the video-text retrieval task; on the LSMDC benchmark, for example, it gains about 2.3% MdR and 6.1% MnR over recently proposed multimodal methods.
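
The paper's exact Multi-head Conv-Self-Attention module is not reproduced here; the following is a minimal sketch of one plausible reading, assuming a depthwise 1-D convolution branch alongside standard multi-head self-attention, with a shortcut carrying the input forward. All layer names and sizes are illustrative.

```python
# Hypothetical sketch of a conv-augmented multi-head self-attention block.
# The placement of the depthwise convolution is an assumption, not the
# TransDCS reference implementation.
import torch
import torch.nn as nn

class ConvSelfAttention(nn.Module):
    def __init__(self, dim=512, heads=8, kernel_size=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Depthwise convolution over the sequence axis models local detail.
        self.local = nn.Conv1d(dim, dim, kernel_size,
                               padding=kernel_size // 2, groups=dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                      # x: (batch, seq, dim)
        global_feat, _ = self.attn(x, x, x)    # long-distance dependencies
        local_feat = self.local(x.transpose(1, 2)).transpose(1, 2)  # local detail
        return self.norm(x + global_feat + local_feat)  # shortcut keeps input signal

x = torch.randn(2, 16, 512)
print(ConvSelfAttention()(x).shape)            # torch.Size([2, 16, 512])
```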

Personality Consistent Dialogue Generation in No-Persona-Aware System (페르소나 대화모델에서 일관된 발화 생성을 위한 연구)

  • Moon, Hyeonseok; Lee, Chanhee; Lim, Heuiseok
    • Annual Conference on Human and Language Technology, 2020.10a, pp.572-577, 2020
  • Research on generating consistent utterances by introducing persona data is being actively conducted, but the absence of Korean datasets and the difficulty of building them are cited as problems. This study proposes a method for generating consistent utterances without persona data by introducing a pre-trained natural language inference (NLI) model into a multi-turn dialogue system. Relation analysis with the NLI model selects which utterances from the dialogue history to use for generation, and self-attention and multi-head attention modules generate responses that reflect that history. For consistent generation, we propose nMLM, a new training scheme that can be carried out with existing NLI datasets, and study how this method contributes to producing consistent utterances.
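
As a rough illustration of the NLI-based history selection the abstract describes, the sketch below keeps past utterances that a candidate response does not contradict. `nli_scores` is a placeholder for a real pre-trained NLI classifier, and the threshold is an assumption; the paper's actual selection criterion may differ.

```python
# Sketch of NLI-based dialogue-history selection. `nli_scores` is a stub
# standing in for an actual pre-trained NLI model's inference call.
from typing import List, Tuple

def nli_scores(premise: str, hypothesis: str) -> Tuple[float, float, float]:
    """Placeholder returning (entailment, neutral, contradiction) probabilities."""
    return (0.2, 0.7, 0.1)  # a real NLI model would run inference here

def select_history(history: List[str], candidate: str,
                   contradiction_limit: float = 0.5) -> List[str]:
    selected = []
    for utterance in history:
        _, _, contradiction = nli_scores(utterance, candidate)
        if contradiction < contradiction_limit:   # consistent with past dialogue
            selected.append(utterance)
    return selected

print(select_history(["I live in Seoul.", "I hate coffee."], "I drink coffee daily."))
```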

Sentiment Analysis of News Based on Generative AI and Real Estate Price Prediction: Application of LSTM and VAR Models (생성 AI기반 뉴스 감성 분석과 부동산 가격 예측: LSTM과 VAR모델의 적용)

  • Sua Kim; Mi Ju Kwon; Hyon Hee Kim
    • The Transactions of the Korea Information Processing Society, v.13 no.5, pp.209-216, 2024
  • Real estate market prices are determined by various factors, including macroeconomic variables, as well as by unstructured text data such as news articles and social media. News articles are a crucial factor in predicting real estate transaction prices because they reflect public economic sentiment. This study applies sentiment analysis to news articles to generate a News Sentiment Index (NSI) score, which is then integrated into a real estate price prediction model. To compute the index, each article is first summarized; generative AI then classifies the summaries as positive, negative, or neutral, and a total score is calculated and fed into the prediction model. The prediction models are a multi-head-attention LSTM and a Vector Autoregression (VAR) model. Without the NSI, the LSTM showed Root Mean Square Error (RMSE) values of 0.60, 0.872, and 1.117 for the 1-, 2-, and 3-month forecasts, respectively; with the NSI, the RMSE values fell to 0.40, 0.724, and 1.03. Similarly, the VAR model without the NSI showed RMSE values of 1.6484, 0.6254, and 0.9220 for the same horizons, while applying the NSI gave 1.1315, 0.3413, and 1.6227. These results demonstrate the effectiveness of the proposed model in predicting the apartment transaction price index and in forecasting real estate market fluctuations that reflect socio-economic trends.
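
A minimal sketch of how a multi-head-attention LSTM might consume the NSI as an extra input feature. The feature layout, sizes, and single-step forecast head are assumptions for illustration, not the paper's configuration.

```python
# Sketch: LSTM over monthly features, NSI appended as one extra channel,
# multi-head attention weighing informative months before the forecast head.
import torch
import torch.nn as nn

class NSIAttnLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64, heads=4):
        super().__init__()
        # +1 input channel for the monthly NSI score
        self.lstm = nn.LSTM(n_features + 1, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)       # next-month price index

    def forward(self, x, nsi):                 # x: (B, T, F), nsi: (B, T)
        z = torch.cat([x, nsi.unsqueeze(-1)], dim=-1)
        h, _ = self.lstm(z)
        a, _ = self.attn(h, h, h)              # attend over the 12-month window
        return self.head(a[:, -1])             # predict from the last step

x, nsi = torch.randn(8, 12, 4), torch.randn(8, 12)
print(NSIAttnLSTM()(x, nsi).shape)             # torch.Size([8, 1])
```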

Attention Capsule Network for Aspect-Level Sentiment Classification

  • Deng, Yu; Lei, Hang; Li, Xiaoyu; Lin, Yiou; Cheng, Wangchi; Yang, Shan
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.4, pp.1275-1292, 2021
  • As a fine-grained classification problem, aspect-level sentiment classification predicts the sentiment polarity of different aspects in context. Researchers have widely used attention mechanisms to model the relationship between context and aspects, but it remains difficult to obtain a deeper semantic representation, and the strong correlation between local context features and aspect-based sentiment is rarely considered. In this paper, a hybrid attention capsule network for aspect-level sentiment classification (ABASCap) is proposed. The model improves multi-head self-attention and introduces a context mask mechanism based on an adjustable context window, so as to capture the internal association between aspects and context effectively. Moreover, the dynamic routing algorithm and activation function of the capsule network are adapted to the task requirements. In experiments on three benchmark datasets from different domains, ABASCap achieved better classification results than the baseline models and, after incorporating pre-trained BERT, outperformed the state-of-the-art methods on this task.
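
The adjustable context window can be pictured as an attention mask. The sketch below builds one such mask; the window size and mask convention are illustrative, and ABASCap's exact mechanism may differ.

```python
# Sketch of a context mask with an adjustable window: tokens farther than
# `window` positions from the aspect term are excluded from attention.
import torch

def context_window_mask(seq_len: int, aspect_pos: int, window: int) -> torch.Tensor:
    """True marks positions that attention is NOT allowed to attend to."""
    idx = torch.arange(seq_len)
    allowed = (idx - aspect_pos).abs() <= window
    return ~allowed                            # shape: (seq_len,)

mask = context_window_mask(seq_len=10, aspect_pos=4, window=2)
print(mask.tolist())
# [True, True, False, False, False, False, False, True, True, True]
# Expanded to (batch, seq_len), this could confine each head to the local
# context when passed as key_padding_mask to multi-head attention.
```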

A Generalized Adaptive Deep Latent Factor Recommendation Model (일반화 적응 심층 잠재요인 추천모형)

  • Kim, Jeongha; Lee, Jipyeong; Jang, Seonghyun; Cho, Yoonho
    • Journal of Intelligence and Information Systems, v.29 no.1, pp.249-263, 2023
  • Collaborative filtering, a representative recommendation methodology, comprises two approaches: neighbor methods and latent factor models. In the latent factor model based on matrix factorization, the user-item interaction matrix is decomposed into two lower-dimensional rectangular matrices, and an item's rating is predicted from their product. Because the factor vectors inferred from rating patterns capture user and item characteristics, this method surpasses neighbor-based methods in scalability, accuracy, and flexibility. However, it has a fundamental drawback: it cannot reflect the diverse preferences of different individuals for items with no ratings, which leads to repetitive and inaccurate recommendations. The Adaptive Deep Latent Factor Model (ADLFM) was developed to address this issue: it adaptively learns the preferences for each item from the item description, a detailed summary and explanation of the item, computes latent vectors for users and items, and reflects personal diversity through an attention score. However, because ADLFM requires a dataset that includes item descriptions, its applicable domains are limited. This study proposes a Generalized Adaptive Deep Latent Factor Recommendation Model, G-ADLFRM, to overcome this limitation. First, the item ID, commonly available in recommender systems, is used as input instead of the item description. In addition, improved deep learning structures such as self-attention, multi-head attention, and Multi-Conv1D are applied. Experiments on various datasets with changed inputs and model structures showed that changing only the input slightly increased MAE relative to ADLFM because of the accompanying information loss, while the average training speed per epoch improved significantly as the amount of information to process decreased. When both the input and the model structure were changed, the best-performing Multi-Conv1D structure matched ADLFM's performance, sufficiently offsetting the information loss from the input change. We conclude that G-ADLFRM is a new, lightweight, and generalizable model that maintains the performance of the existing ADLFM while enabling fast training and inference.
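
A minimal sketch of a Multi-Conv1D block in the spirit described: parallel Conv1d branches with different kernel widths over item-ID embeddings, concatenated into a latent factor. Vocabulary size, kernel choices, and pooling are assumptions, not the G-ADLFRM settings.

```python
# Sketch of a Multi-Conv1D block over item-ID embeddings (the ID replaces
# the item description as input). Sizes are illustrative.
import torch
import torch.nn as nn

class MultiConv1D(nn.Module):
    def __init__(self, n_items=10000, dim=64, kernels=(1, 3, 5)):
        super().__init__()
        self.embed = nn.Embedding(n_items, dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, k, padding=k // 2) for k in kernels)
        self.out = nn.Linear(dim * len(kernels), dim)

    def forward(self, item_ids):                 # item_ids: (B, T)
        e = self.embed(item_ids).transpose(1, 2) # (B, dim, T)
        z = torch.cat([c(e) for c in self.convs], dim=1).mean(dim=2)
        return self.out(z)                       # (B, dim) latent factor

print(MultiConv1D()(torch.randint(0, 10000, (4, 8))).shape)  # torch.Size([4, 64])
```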

Transformer-Based MUM-T Situation Awareness: Agent Status Prediction (트랜스포머 기반 MUM-T 상황인식 기술: 에이전트 상태 예측)

  • Jaeuk Baek; Sungwoo Jun; Kwang-Yong Kim; Chang-Eun Lee
    • The Journal of Korea Robotics Society, v.18 no.4, pp.436-443, 2023
  • With the advancement of robot intelligence, the concept of manned-unmanned teaming (MUM-T) has garnered considerable attention in military research. In this paper, we present a transformer-based architecture for predicting the health status of agents, using a multi-head attention mechanism to capture the dynamic interaction between friendly and enemy forces. To this end, we first introduce a framework for generating a dataset of battlefield situations. These situations are simulated on a virtual simulator, allowing a wide range of scenarios without restrictions on the number of agents, their missions, or their actions. We then define the elements crucial for characterizing the battlefield, with specific emphasis on agent status. The battlefield data is fed into the transformer architecture, with classification heads on top of the transformer encoder layers to categorize each agent's health status. We conduct ablation tests to assess the significance of various factors in determining agents' health status in battlefield scenarios. Under 3-fold cross-validation, our model achieves a prediction accuracy of over 98%. In addition, the performance of our model is compared with that of other models such as a convolutional neural network (CNN) and a multilayer perceptron (MLP), and the results establish the superiority of our model.
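
A minimal sketch of the described setup, assuming per-agent feature vectors and three health-status classes: a transformer encoder whose attention mixes information across friendly and enemy agents, topped by a per-agent classification head. All dimensions are illustrative.

```python
# Sketch: transformer encoder + per-agent classification head for health status.
import torch
import torch.nn as nn

class AgentStatusClassifier(nn.Module):
    def __init__(self, feat_dim=32, d_model=128, heads=8, layers=4, n_status=3):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        enc = nn.TransformerEncoderLayer(d_model, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(d_model, n_status)  # per-agent status logits

    def forward(self, agents):                    # agents: (B, n_agents, feat_dim)
        h = self.encoder(self.proj(agents))       # attention mixes friend/enemy info
        return self.head(h)                       # (B, n_agents, n_status)

x = torch.randn(2, 10, 32)                        # 10 agents per situation
print(AgentStatusClassifier()(x).shape)           # torch.Size([2, 10, 3])
```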

A study on data augmentation methods for sound data classification (소리 데이터 분류에 대한 데이터 증대 방법 연구)

  • Chang, Il-Sik; Park, Goo-man
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2022.06a, pp.1308-1310, 2022
  • Research on sound data classification spans diverse tasks, from simple sound classification to emotion recognition. Data augmentation is important as a way to mitigate data scarcity and overfitting in deep neural networks. In this paper, three sound datasets (UrbanSound8K, RAVDESS, IRMAS) are used; the sound data are converted into mel spectrograms before being fed into the networks. The inputs are trained with several network architectures (bidirectional LSTM, bidirectional LSTM with attention, Multi-Head Attention, CNN), and the classification accuracy before and after data augmentation is compared for each. Comparing the results of the augmentation methods across these datasets and architectures should yield useful insights.
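
The abstract does not fix an augmentation recipe; as one common example for mel-spectrogram inputs, the sketch below applies SpecAugment-style time and frequency masking. Mask sizes are illustrative.

```python
# Sketch of SpecAugment-style masking on a mel spectrogram: zero out one
# random time band and one random frequency band.
import torch

def spec_augment(mel: torch.Tensor, max_time: int = 20, max_freq: int = 8) -> torch.Tensor:
    """mel: (n_mels, n_frames). Returns an augmented copy."""
    mel = mel.clone()
    n_mels, n_frames = mel.shape
    t = int(torch.randint(1, max_time + 1, ()))
    t0 = int(torch.randint(0, max(1, n_frames - t), ()))
    mel[:, t0:t0 + t] = 0.0                        # time mask
    f = int(torch.randint(1, max_freq + 1, ()))
    f0 = int(torch.randint(0, max(1, n_mels - f), ()))
    mel[f0:f0 + f, :] = 0.0                        # frequency mask
    return mel

mel = torch.randn(64, 200)
print(spec_augment(mel).shape)                     # torch.Size([64, 200])
```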

Neural Question Difficulty Estimator with Bi-directional Attention in VideoQA (비디오 질의 응답 환경에서 양방향 어텐션을 이용한 질의 난이도 분석 모델)

  • Yoon, Su-Hwan; Park, Seong-Bae
    • Annual Conference on Human and Language Technology, 2020.10a, pp.501-506, 2020
  • Question difficulty estimation measures how difficult a natural language question is to answer. The problem has been studied on various datasets, including reading comprehension, medical examinations, and video question answering. This paper approaches it by modeling the relationship between a question and the information needed to answer it, using BERT and Dual Multi-head Attention. To demonstrate the superiority of the proposed model, we compare it against recent pre-trained language models with strong natural language understanding performance and against a previous question difficulty estimation model. The proposed model achieves the best difficulty estimation performance on DramaQA, a representative video question answering dataset, with 99.76% accuracy on Memory Complexity and 89.47% on Logical Complexity.
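
A minimal sketch of dual (bi-directional) multi-head attention between encoded question and context sequences, as the abstract outlines; the inputs would come from a BERT encoder, and the pooling and scoring head are assumptions rather than the paper's exact model.

```python
# Sketch of dual multi-head attention: question attends to context and vice
# versa, both views pooled into a difficulty score.
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.q2c = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.c2q = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(2 * dim, 1)       # difficulty score

    def forward(self, q, c):                     # q: (B, Tq, D), c: (B, Tc, D)
        q_view, _ = self.q2c(q, c, c)            # question attends to context
        c_view, _ = self.c2q(c, q, q)            # context attends to question
        pooled = torch.cat([q_view.mean(1), c_view.mean(1)], dim=-1)
        return self.score(pooled)                # (B, 1)

q, c = torch.randn(4, 12, 256), torch.randn(4, 40, 256)
print(DualAttention()(q, c).shape)               # torch.Size([4, 1])
```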

A statistical journey to DNN, the third trip: Language model and transformer (심층신경망으로 가는 통계 여행, 세 번째 여행: 언어모형과 트랜스포머)

  • Yu Jin Kim; In Jun Hwang; Kisuk Jang; Yoon Dong Lee
    • The Korean Journal of Applied Statistics, v.37 no.5, pp.567-582, 2024
  • Over the past decade, the remarkable advancements in deep neural networks have paralleled the development and evolution of language models. Initially, language models were developed in the form of Encoder-Decoder models using early RNNs. However, with the introduction of Attention in 2015 and the emergence of the Transformer in 2017, the field saw revolutionary growth. This study briefly reviews the development process of language models and examines in detail the working mechanism and technical elements of the Transformer. Additionally, it explores statistical models and methodologies related to language models and the Transformer.
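
For reference, the core mechanism such a review examines is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, computed in parallel across several heads:

```python
# The canonical scaled dot-product attention from "Attention Is All You Need".
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.size(-1)
    weights = torch.softmax(Q @ K.transpose(-2, -1) / math.sqrt(d_k), dim=-1)
    return weights @ V

Q = K = V = torch.randn(2, 8, 10, 64)   # (batch, heads, tokens, d_k)
print(scaled_dot_product_attention(Q, K, V).shape)  # torch.Size([2, 8, 10, 64])
```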

Korean Dependency Parsing using Multi-head Attention and Pointer Network (멀티헤드 어텐션과 포인터 네트워크를 이용한 한국어 의존 구문 분석)

  • Park, Seongsik; Oh, Shinhyeok; Kim, Hongjin; Kim, Harksoo
    • Annual Conference on Human and Language Technology, 2018.10a, pp.682-684, 2018
  • Syntactic parsing analyzes the structure of a sentence by identifying the relationships among its words. Parsing divides into constituency parsing and dependency parsing; for a language with free word order such as Korean, dependency parsing is the better fit. Recent parsing research has centered on deep neural networks, with pointer-network models showing the best performance. The pointer network alone, however, has limits in learning syntactic information. In this paper, using multi-head attention together with the pointer network achieves higher performance (UAS 92.85%, LAS 90.65%) than the pointer network alone.
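
A rough sketch of pointer-style head selection on top of multi-head attention: each word's score distribution over the sentence points to its syntactic head word. The encoder and sizes are illustrative, not the paper's model.

```python
# Sketch: multi-head attention enriches word encodings with syntactic context;
# a pointer-style score matrix then selects each word's head.
import torch
import torch.nn as nn

class PointerHeadSelector(nn.Module):
    def __init__(self, dim=128, heads=8):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)

    def forward(self, h):                        # h: (B, T, D) encoded words
        ctx, _ = self.mha(h, h, h)               # syntax-aware representations
        scores = self.q(ctx) @ self.k(ctx).transpose(1, 2)  # (B, T, T)
        return scores.argmax(dim=-1)             # predicted head index per word

h = torch.randn(1, 7, 128)
print(PointerHeadSelector()(h))                  # head index for each of 7 words
```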
