• Title/Summary/Keyword: Transformer Encoder

Time-Series Forecasting Based on Multi-Layer Attention Architecture

  • Na Wang;Xianglian Zhao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.1 / pp.1-14 / 2024
  • Time-series forecasting is extensively used in the real world. Recent research has shown that Transformers, with a self-attention mechanism at their core, exhibit better performance on such problems. However, most existing Transformer models for time-series prediction use the traditional encoder-decoder architecture, which is complex and leads to low processing efficiency, limiting the ability to mine deep time dependencies by increasing model depth. In addition, the quadratic computational complexity of the self-attention mechanism increases computational overhead and reduces processing efficiency. To address these issues, this paper designs an efficient time-series forecasting model based on multi-layer attention. The model has the following characteristics: (i) it abandons the traditional encoder-decoder Transformer architecture and is built on a multi-layer attention mechanism, improving its ability to mine deep time dependencies; (ii) a cross-attention module is designed to enhance information exchange between the historical and predictive sequences; and (iii) a recently proposed sparse attention mechanism is applied to reduce computational overhead and improve processing efficiency. Experiments on multiple datasets show that the model significantly outperforms current advanced Transformer methods for time-series forecasting, including LogTrans, Reformer, and Informer.
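
The cross-attention module in (ii) can be illustrated with a minimal sketch in which queries come from the predictive sequence and keys/values come from the historical sequence. Everything below (names, dimensions, random projections) is an illustrative assumption, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pred_seq, hist_seq, d_k=32, seed=0):
    """Queries from the predictive sequence attend over the historical sequence."""
    rng = np.random.default_rng(seed)
    d_model = pred_seq.shape[-1]
    W_q = rng.standard_normal((d_model, d_k)) / np.sqrt(d_model)
    W_k = rng.standard_normal((d_model, d_k)) / np.sqrt(d_model)
    W_v = rng.standard_normal((d_model, d_k)) / np.sqrt(d_model)
    Q, K, V = pred_seq @ W_q, hist_seq @ W_k, hist_seq @ W_v
    scores = Q @ K.T / np.sqrt(d_k)   # (L_pred, L_hist): attention over history
    return softmax(scores) @ V        # (L_pred, d_k): context per forecast step

hist = np.random.randn(96, 64)  # 96 historical steps, 64-dim features
pred = np.random.randn(24, 64)  # 24 forecast-step placeholders
print(cross_attention(pred, hist).shape)  # (24, 32)
```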

Lip and Voice Synchronization Using Visual Attention (시각적 어텐션을 활용한 입술과 목소리의 동기화 연구)

  • Dongryun Yoon;Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.166-173 / 2024
  • This study explores lip-sync detection, focusing on the synchronization between lip movements and voices in videos. Typical lip-sync detection techniques crop the facial area of a given video and use the lower half of the cropped box as input to the visual encoder to extract visual features. To place stronger emphasis on the articulatory region of the lips for more accurate lip-sync detection, we propose using a pre-trained visual attention-based encoder. The Visual Transformer Pooling (VTP) module, originally designed for the lip-reading task (predicting the transcript from visual information alone, without audio), is employed as the visual encoder. Our experimental results demonstrate that, despite having fewer learnable parameters, the proposed method outperforms the latest model, VocaLiST, on the LRS2 dataset, achieving a lip-sync detection accuracy of 94.5% with five context frames. Moreover, our approach surpasses VocaLiST in lip-sync detection accuracy by approximately 8% even on an unseen dataset, Acappella.
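
Sync detection of this kind typically reduces to comparing visual and audio embeddings over a short window of context frames. The sketch below is a generic, assumed scoring scheme with stubbed-out encoders, not the paper's VTP pipeline.

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def sync_score(visual_feats, audio_feats):
    """Mean cosine similarity between per-window visual and audio embeddings.
    visual_feats, audio_feats: (num_windows, dim) arrays from the two encoders."""
    return np.mean([cosine_sim(v, a) for v, a in zip(visual_feats, audio_feats)])

# Stubbed embeddings for 5-frame context windows (dim 512 assumed).
v = np.random.randn(10, 512)
a = v + 0.1 * np.random.randn(10, 512)   # nearly in sync
print(sync_score(v, a) > sync_score(v, np.random.randn(10, 512)))  # True (typically)
```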

Syllable-Level Lightweight Korean POS Tagger using Transformer Encoder (트랜스포머 인코더를 활용한 음절 단위 경량화 형태소 분석기)

  • Suyoung Min;Youngjoong Ko
    • The Transactions of the Korea Information Processing Society / v.13 no.10 / pp.553-558 / 2024
  • Morphological analysis involves segmenting morphemes, the smallest units of meaning or grammatical function in a language, and assigning a part-of-speech tag to each morpheme. It plays a critical role in various natural language processing tasks, such as named entity recognition and dependency parsing. Much of modern natural language processing relies on deep learning-based language models, and Korean morphological analysis approaches can be broadly categorized into sequence-to-sequence methods and sequence labeling methods. This study proposes a morphological analysis approach that uses a Transformer encoder for sequence labeling to perform syllable-level part-of-speech tagging, followed by morpheme restoration and tagging through a pre-analyzed dictionary. Additionally, the CBOW method is used to obtain low-dimensional syllable-level embeddings, yielding a lightweight morphological analyzer with fewer parameters. The proposed model achieves fast inference speed and low parameter usage, making it efficient for resource-constrained environments.
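
A minimal sketch of the described architecture, syllable embeddings feeding a small Transformer encoder that emits one POS tag per syllable, might look like the following; the vocabulary size, tag set, and dimensions are assumptions, and the embedding table could be initialized from the CBOW vectors.

```python
import torch
import torch.nn as nn

class SyllableTagger(nn.Module):
    """Syllable-level sequence labeler: embed -> Transformer encoder -> per-syllable tag."""
    def __init__(self, vocab_size=2000, num_tags=45, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # could load CBOW vectors here
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.classifier = nn.Linear(d_model, num_tags)

    def forward(self, syllable_ids):                     # (batch, seq_len)
        h = self.encoder(self.embed(syllable_ids))       # (batch, seq_len, d_model)
        return self.classifier(h)                        # (batch, seq_len, num_tags)

model = SyllableTagger()
logits = model(torch.randint(0, 2000, (1, 16)))
print(logits.shape)  # torch.Size([1, 16, 45])
```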

A Study On Still Image Coding With the TMS320C80 (TMS320C80을 이용한 정지 영상 부호화에 관한 연구)

  • Kim, Sang-Gi;Jeong, Jin-Hyeon
    • The Transactions of the Korea Information Processing Society / v.6 no.4 / pp.1106-1111 / 1999
  • The Discrete Cosine Transform (DCT) is the most popular block transform for lossy coding and is close to the statistically optimal transform, the Karhunen-Loeve transform. In this paper, a still-image encoder module is implemented on the TMS320C80 based on JPEG, the international standard for still-image compression. The still-image encoder consists of three parts: a transformer, a vector quantizer, and an entropy encoder.
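
For reference, the transformer stage, the 8x8 two-dimensional DCT that JPEG applies to each block, can be sketched with SciPy; this is a generic illustration, not the paper's TMS320C80 implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

x = np.arange(8, dtype=float)
block = x[:, None] * 8 + x[None, :] * 4      # smooth 8x8 block (e.g., a gradient patch)
coeffs = dctn(block, norm='ortho')           # forward 2-D DCT: the "transformer" stage

# Crude lossy step: keep only the 4x4 low-frequency corner, as quantization would.
mask = np.zeros_like(coeffs)
mask[:4, :4] = 1
recon = idctn(coeffs * mask, norm='ortho')   # inverse DCT after dropping high frequencies

# Error stays small because smooth blocks concentrate energy in low frequencies.
print(np.abs(block - recon).mean())
```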

Multi-stage Transformer for Video Anomaly Detection

  • Viet-Tuan Le;Khuong G. T. Diep;Tae-Seok Kim;Yong-Guk Kim
    • Annual Conference of KIPS / 2023.11a / pp.648-651 / 2023
  • Video anomaly detection aims to detect abnormal events. Motivated by the power transformers have recently shown in vision tasks, we propose a novel transformer-based network for video anomaly detection. To capture long-range information in video, we employ a multi-scale transformer as the encoder. A convolutional decoder is used to predict the future frame from the extracted multi-scale feature maps. The proposed method is evaluated on three benchmark datasets: UCSD Ped2, CUHK Avenue, and ShanghaiTech. The results show that the proposed method achieves better performance than recent methods.
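
In prediction-based detectors like this, frames the model fails to predict well are flagged as anomalous, most commonly by scoring the PSNR between the predicted and actual frame. The mapping below is an assumed illustration, not the paper's exact scoring function.

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between predicted and actual future frames."""
    mse = np.mean((pred - target) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val**2 / mse)

def anomaly_score(pred, target):
    """Lower PSNR (worse prediction) -> higher score; this mapping is an assumption."""
    return 1.0 / (1.0 + max(psnr(pred, target), 0.0))

frame = np.random.rand(64, 64)
print(anomaly_score(frame + 0.01, frame) < anomaly_score(frame + 0.3, frame))  # True
```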

Transformer-based reranking for improving Korean morphological analysis systems

  • Jihee Ryu;Soojong Lim;Oh-Woog Kwon;Seung-Hoon Na
    • ETRI Journal / v.46 no.1 / pp.137-153 / 2024
  • This study introduces a new approach in Korean morphological analysis combining dictionary-based techniques with Transformer-based deep learning models. The key innovation is the use of a BERT-based reranking system, significantly enhancing the accuracy of traditional morphological analysis. The method generates multiple suboptimal paths, then employs BERT models for reranking, leveraging their advanced language comprehension. Results show remarkable performance improvements, with the first-stage reranking achieving over 20% improvement in error reduction rate compared with existing models. The second stage, using another BERT variant, further increases this improvement to over 30%. This indicates a significant leap in accuracy, validating the effectiveness of merging dictionary-based analysis with contemporary deep learning. The study suggests future exploration in refined integrations of dictionary and deep learning methods as well as using probabilistic models for enhanced morphological analysis. This hybrid approach sets a new benchmark in the field and offers insights for similar challenges in language processing applications.
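
The reranking scheme can be summarized as: a dictionary-based analyzer proposes an n-best list of analysis paths, and a BERT-based scorer picks the most plausible one. In the sketch below the scorer is a toy stand-in for the paper's fine-tuned BERT models, and the n-best list is an invented example.

```python
def rerank(candidates, scorer):
    """Return the candidate analysis path with the highest scorer output.
    candidates: list of (morpheme, POS-tag) sequences from the dictionary-based analyzer."""
    return max(candidates, key=scorer)

# Illustrative n-best list for one Korean eojeol.
nbest = [
    [("감", "NNG"), ("기", "NNG")],        # suboptimal segmentation
    [("감기", "NNG")],                      # correct segmentation
]
fake_bert_score = lambda path: -len(path)   # toy heuristic standing in for BERT
print(rerank(nbest, fake_bert_score))       # [('감기', 'NNG')]
```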

Network Intrusion Detection Using Transformer and BiGRU-DNN in Edge Computing

  • Huijuan Sun
    • Journal of Information Processing Systems / v.20 no.4 / pp.458-476 / 2024
  • To address class imbalance in network traffic data, which degrades network intrusion detection performance, a combined framework using transformers is proposed. First, Tomek Links, SMOTE, and WGAN are used to preprocess the data and resolve the class-imbalance problem. Second, a transformer is used to encode the traffic data and extract correlations within network traffic. Finally, a hybrid deep learning model combining a bidirectional gated recurrent unit (BiGRU) and a deep neural network (DNN) is proposed: the BiGRU extracts long-range dependency features, the DNN extracts deep-level features, and softmax completes the classification. Experiments were conducted on the NSL-KDD, UNSW-NB15, and CICIDS2017 datasets, where the proposed model achieved detection accuracies of 99.72%, 84.86%, and 99.89%, respectively. Compared with other recent deep learning models, it effectively improves intrusion detection performance and thereby the communication security of network data.
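
The resampling stage can be illustrated with imbalanced-learn, which combines SMOTE oversampling with Tomek-link cleaning in one step; this is an assumed stand-in for the paper's preprocessing and omits its WGAN stage. Synthetic data replaces the real traffic features.

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.combine import SMOTETomek

# Synthetic imbalanced data standing in for NSL-KDD-style traffic features.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
print("before:", Counter(y))

# SMOTE oversamples the minority class; Tomek links then clean boundary pairs.
X_res, y_res = SMOTETomek(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))
```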

Artificial intelligence application UX/UI study for language learning of children with articulation disorder (조음장애 아동의 언어학습을 위한 인공지능 애플리케이션 UX/UI 연구)

  • Yang, Eun-mi;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.174-176 / 2022
  • In this paper, we present a mobile application for 'personalized, customized learning' for children with articulation disorders using an artificial intelligence (AI) algorithm, together with a dataset for analyzing, judging, and predicting the learner's articulation status and its severity. In particular, we designed a prototype model by examining how AI can improve and advance on existing applications from the UX/UI (GUI) perspective. So far the focus has been on the visual experience, but it is now important to process data and deliver the UX/UI (GUI) experience to users accordingly. The UX/UI (GUI) of the proposed mobile application is adapted to the learner's articulation level and situation using a deep-learning CRNN (Convolutional Recurrent Neural Network), an autoencoder, and GPT-3 (Generative Pre-trained Transformer). The use of AI algorithms will provide children with articulation disorders a highly refined learning environment, thereby enhancing the learning effect. We hope that, by improving the completeness of their articulation through 'personalized, customized learning', children will feel no fear or discomfort in conversation.

Prediction of multipurpose dam inflow utilizing catchment attributes with LSTM and transformer models (유역정보 기반 Transformer및 LSTM을 활용한 다목적댐 일 단위 유입량 예측)

  • Kim, Hyung Ju;Song, Young Hoon;Chung, Eun Sung
    • Journal of Korea Water Resources Association / v.57 no.7 / pp.437-449 / 2024
  • Rainfall-runoff prediction studies that use deep learning while considering catchment attributes have been gaining attention. In this study, we selected two models: the Transformer model, which is suitable for large-scale data training through the self-attention mechanism, and the LSTM-based multi-state-vector sequence-to-sequence (LSTM-MSV-S2S) model with an encoder-decoder structure. These models were constructed to incorporate catchment attributes and predict the inflow of 10 multi-purpose dam watersheds in South Korea. The experimental design compared three training methods: single-basin training (ST), pretraining (PT), and pretraining-finetuning (PT-FT). The input data comprised 10 selected watershed attributes along with meteorological data, and inflow prediction performance was compared across the training methods. The results showed that the Transformer model outperformed the LSTM-MSV-S2S model under the PT and PT-FT methods, with PT-FT yielding the highest performance; the LSTM-MSV-S2S model performed better under ST but worse under PT and PT-FT. Additionally, the embedding-layer activation vectors and raw catchment attributes were used to cluster watersheds and analyze whether the models learned the similarities between them. The Transformer model showed improved performance among watersheds with similar activation vectors, indicating that information from other watersheds seen during pretraining enhances prediction performance. This study identified suitable models and training methods for each multi-purpose dam and highlighted the value of constructing deep learning models with the PT and PT-FT methods for domestic watersheds.
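
The three regimes differ only in where the weights come from: ST trains from scratch on the target basin, PT trains once on data pooled across basins, and PT-FT continues training the pretrained weights on the target basin. A schematic sketch with synthetic stand-in data (all model sizes, epochs, and learning rates assumed) follows.

```python
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def make_loader(n=64, d=12):
    # Synthetic stand-in: d features (meteorology + catchment attributes) -> daily inflow.
    X = torch.randn(n, d)
    y = X.sum(dim=1, keepdim=True) + 0.1 * torch.randn(n, 1)
    return DataLoader(TensorDataset(X, y), batch_size=16)

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def make_model():
    return nn.Sequential(nn.Linear(12, 32), nn.ReLU(), nn.Linear(32, 1))

pt = train(make_model(), make_loader(640), epochs=5, lr=1e-3)        # PT: basins pooled
ptft = train(copy.deepcopy(pt), make_loader(64), epochs=5, lr=1e-4)  # PT-FT: fine-tune on target
st = train(make_model(), make_loader(64), epochs=5, lr=1e-3)         # ST: target from scratch
```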

Korean CSAT Problem Solving with KoBigBird (KoBigBird를 활용한 수능 국어 문제풀이 모델)

  • Park, Nam-Jun;Kim, Jaekwang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.11a / pp.207-210 / 2022
  • Research on machine reading comprehension has recently been active in the field of natural language processing. However, it is hard to find cases in which Korean machine reading comprehension has been applied to problem solving. Prior studies have applied artificial intelligence (AI) models to CSAT English and CSAT Mathematics problems, but none have applied them to CSAT Korean. Moreover, AI problem solving on CSAT English and CSAT Mathematics scored only 12 and 16 points, respectively, which falls short of expectations given that the CSAT consists of multiple-choice questions. Accordingly, this paper trains a Transformer-based model on a Korean machine reading comprehension dataset and applies it to solving CSAT Korean problems. To this end, the dataset was transformed so that each answer choice of the multiple-choice CSAT items is recast in question form, allowing the model to derive an answer. In addition, to overcome the input-length limit of BERT (Bidirectional Encoder Representations from Transformers), performance was improved by adopting KoBigBird, a Transformer-based model that can process longer inputs and is well suited to Korean machine reading comprehension, as the pretrained model.
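
The choice-scoring step can be sketched with the Hugging Face transformers library. The checkpoint name, the label semantics, and the classification head (untrained here; it would need fine-tuning on the recast choice/passage pairs) are all assumptions, not the paper's released code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed KoBigBird checkpoint; the sequence-classification head starts untrained.
name = "monologg/kobigbird-bert-base"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

passage = "지문 전체 텍스트 ..."              # long CSAT passage (BigBird allows up to 4096 tokens)
choices = ["선택지 1을 질문 형태로 바꾼 문장",
           "선택지 2를 질문 형태로 바꾼 문장"]

scores = []
for c in choices:
    enc = tok(c, passage, truncation=True, max_length=4096, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits          # label semantics assumed: [no, yes]
    scores.append(logits[0, 1].item())
print("predicted choice:", 1 + max(range(len(scores)), key=scores.__getitem__))
```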
