• Title/Summary/Keyword: Transformer-Encoder

Search Results: 45

A Study on Still Image Coding with the TMS320C80 (TMS320C80을 이용한 정지 영상 부호화에 관한 연구)

  • Kim, Sang-Gi;Jeong, Jin-Hyeon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.4
    • /
    • pp.1106-1111
    • /
    • 1999
  • The discrete cosine transform (DCT) is the most popular block transform for lossy coding. The DCT is close to the statistically optimal transform, the Karhunen-Loeve transform. In this paper, a still image encoder module is implemented on the TMS320C80 based on JPEG, the international standard for still image compression. The still image encoder consists of three parts: a transformer, a vector quantizer, and an entropy encoder.

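For readers who want a concrete reference for the block-transform stage described above, here is a minimal Python sketch of the 8×8 2-D DCT used in JPEG-style coders, assuming NumPy and SciPy are available; the vector quantizer and entropy encoder are omitted, and this is not the paper's TMS320C80 implementation.

```python
# Minimal sketch of the 8x8 block DCT stage used in JPEG-style
# transform coding, as described in the abstract above.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling (JPEG convention)."""
    return dct(dct(block, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)

def idct2(coeffs):
    """Inverse 2-D DCT."""
    return idct(idct(coeffs, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)

# Apply the transform to one 8x8 image block.
block = np.random.rand(8, 8) * 255
coeffs = dct2(block)
assert np.allclose(idct2(coeffs), block)  # lossless until quantization
```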

Multi-stage Transformer for Video Anomaly Detection

  • Viet-Tuan Le;Khuong G. T. Diep;Tae-Seok Kim;Yong-Guk Kim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.648-651
    • /
    • 2023
  • Video anomaly detection aims to detect abnormal events. Motivated by the power transformers have recently shown in vision tasks, we propose a novel transformer-based network for video anomaly detection. To capture long-range information in video, we employ a multi-scale transformer as the encoder. A convolutional decoder is used to predict the future frame from the extracted multi-scale feature maps. The proposed method is evaluated on three benchmark datasets: UCSD Ped2, CUHK Avenue, and ShanghaiTech. The results show that the proposed method achieves better performance than recent methods.
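
The paper's exact multi-stage design is not reproduced here; the following PyTorch sketch only illustrates the general pattern the abstract describes, a transformer encoder over patch tokens feeding a convolutional decoder that predicts the future frame. All layer names and sizes are illustrative assumptions.

```python
# Hypothetical sketch: transformer encoder + convolutional decoder
# for future-frame prediction; NOT the paper's exact architecture.
import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    def __init__(self, patch=16, dim=256, frame=64):
        super().__init__()
        self.frame, self.patch = frame, patch
        n_tokens = (frame // patch) ** 2
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.decoder = nn.Sequential(            # upsample tokens back to a frame
            nn.ConvTranspose2d(dim, 64, 4, stride=4),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=4),
        )

    def forward(self, x):                        # x: (B, 3, H, W) input frame(s)
        t = self.embed(x).flatten(2).transpose(1, 2)   # (B, N, dim) patch tokens
        t = self.encoder(t + self.pos)
        side = self.frame // self.patch
        t = t.transpose(1, 2).reshape(x.size(0), -1, side, side)
        return self.decoder(t)                   # predicted future frame

model = FramePredictor()
pred = model(torch.randn(2, 3, 64, 64))
# At test time, the per-frame prediction error (e.g., MSE against the real
# next frame) serves as the anomaly score.
```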

Transformer-based reranking for improving Korean morphological analysis systems

  • Jihee Ryu;Soojong Lim;Oh-Woog Kwon;Seung-Hoon Na
    • ETRI Journal
    • /
    • v.46 no.1
    • /
    • pp.137-153
    • /
    • 2024
  • This study introduces a new approach in Korean morphological analysis combining dictionary-based techniques with Transformer-based deep learning models. The key innovation is the use of a BERT-based reranking system, significantly enhancing the accuracy of traditional morphological analysis. The method generates multiple suboptimal paths, then employs BERT models for reranking, leveraging their advanced language comprehension. Results show remarkable performance improvements, with the first-stage reranking achieving over 20% improvement in error reduction rate compared with existing models. The second stage, using another BERT variant, further increases this improvement to over 30%. This indicates a significant leap in accuracy, validating the effectiveness of merging dictionary-based analysis with contemporary deep learning. The study suggests future exploration in refined integrations of dictionary and deep learning methods as well as using probabilistic models for enhanced morphological analysis. This hybrid approach sets a new benchmark in the field and offers insights for similar challenges in language processing applications.
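
As a hedged illustration of the reranking stage (not the authors' model, data, or checkpoint), the sketch below scores candidate morphological-analysis paths with a BERT-style sequence classifier and keeps the best one; the checkpoint and the sentence/path pairing format are assumptions, and the classifier would first need fine-tuning on labeled (sentence, path) pairs.

```python
# Hypothetical reranking sketch: score candidate analysis paths with a
# BERT-style classifier and keep the highest-scoring one.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-multilingual-cased"  # stand-in for the paper's Korean BERT
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def rerank(sentence: str, candidate_paths: list[str]) -> str:
    """Return the candidate analysis the (fine-tuned) classifier prefers."""
    enc = tokenizer([sentence] * len(candidate_paths), candidate_paths,
                    padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits        # (num_candidates, 2)
    scores = logits.softmax(-1)[:, 1]       # P("correct analysis")
    return candidate_paths[int(scores.argmax())]
```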

Network Intrusion Detection Using Transformer and BiGRU-DNN in Edge Computing

  • Huijuan Sun
    • Journal of Information Processing Systems
    • /
    • v.20 no.4
    • /
    • pp.458-476
    • /
    • 2024
  • To address the issue of class imbalance in network traffic data, which degrades network intrusion detection performance, a combined framework using transformers is proposed. First, Tomek Links, SMOTE, and WGAN are used to preprocess the data and resolve the class-imbalance problem. Second, a transformer is used to encode the traffic data and extract correlations within network traffic. Finally, a hybrid deep learning model combining a bidirectional gated recurrent unit (BiGRU) and a deep neural network (DNN) is proposed to extract long-range dependency features. The DNN extracts deep-level features, and softmax completes the classification. Experiments were conducted on the NSL-KDD, UNSW-NB15, and CICIDS2017 datasets, and the detection accuracy rates of the proposed model were 99.72%, 84.86%, and 99.89% on the three datasets, respectively. Compared with other relatively new deep-learning models, it effectively improved intrusion detection performance, thereby improving the communication security of network data.
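
A minimal PyTorch sketch of the described pipeline shape, a Transformer encoder over feature tokens followed by a BiGRU and a DNN classifier, is given below; all dimensions are illustrative assumptions, and the class-imbalance preprocessing (Tomek Links, SMOTE, WGAN) is omitted.

```python
# Hypothetical sketch of a Transformer + BiGRU + DNN intrusion classifier;
# layer sizes and feature dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class TransformerBiGRUDNN(nn.Module):
    def __init__(self, n_features=41, d_model=64, n_classes=5):
        super().__init__()
        self.proj = nn.Linear(1, d_model)          # each feature -> a token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.bigru = nn.GRU(d_model, 64, batch_first=True, bidirectional=True)
        self.dnn = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, n_classes),              # softmax applied by the loss
        )

    def forward(self, x):                 # x: (B, n_features) traffic record
        t = self.proj(x.unsqueeze(-1))    # (B, n_features, d_model)
        t = self.encoder(t)               # feature-to-feature correlations
        _, h = self.bigru(t)              # h: (2, B, 64) final hidden states
        h = torch.cat([h[0], h[1]], dim=1)
        return self.dnn(h)                # class logits

model = TransformerBiGRUDNN()
logits = model(torch.randn(8, 41))        # 41 features, as in NSL-KDD
```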

Artificial intelligence application UX/UI study for language learning of children with articulation disorder (조음장애 아동의 언어학습을 위한 인공지능 애플리케이션 UX/UI 연구)

  • Yang, Eun-mi;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.174-176
    • /
    • 2022
  • In this paper, we present a mobile application for 'personalized, customized learning' for children with articulation disorders using artificial intelligence (AI) algorithms. A dataset is used to analyze, judge, and predict the learner's articulation situation and degree. In particular, we designed a prototype model by examining, from the UX/UI (GUI) perspective, how AI can improve on and advance existing applications. So far the focus has been on the visual experience, but it is now important to process data and deliver that UX/UI (GUI) experience to users. The UX/UI (GUI) of the proposed mobile application is provided according to the learner's articulation level and situation using a deep-learning CRNN (Convolutional Recurrent Neural Network) and an autoencoder with GPT-3 (Generative Pre-trained Transformer). Using AI algorithms will provide children with articulation disorders a highly polished learning environment, thereby enhancing the learning effect. We hope that, with the improved articulation that 'personalized, customized learning' brings, children will feel no fear or discomfort in conversation.


Prediction of multipurpose dam inflow utilizing catchment attributes with LSTM and transformer models (유역정보 기반 Transformer및 LSTM을 활용한 다목적댐 일 단위 유입량 예측)

  • Kim, Hyung Ju;Song, Young Hoon;Chung, Eun Sung
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.7
    • /
    • pp.437-449
    • /
    • 2024
  • Rainfall-runoff prediction studies using deep learning while considering catchment attributes have been gaining attention. In this study, we selected two models: the Transformer model, which is suitable for large-scale data training through the self-attention mechanism, and the LSTM-based multi-state-vector sequence-to-sequence (LSTM-MSV-S2S) model with an encoder-decoder structure. These models were constructed to incorporate catchment attributes and predict the inflow of 10 multi-purpose dam watersheds in South Korea. The experimental design consisted of three training methods: Single-basin Training (ST), Pretraining (PT), and Pretraining-Finetuning (PT-FT). The input data for the models included 10 selected watershed attributes along with meteorological data. The inflow prediction performance was compared based on the training methods. The results showed that the Transformer model outperformed the LSTM-MSV-S2S model when using the PT and PT-FT methods, with the PT-FT method yielding the highest performance. The LSTM-MSV-S2S model showed better performance than the Transformer when using the ST method; however, it showed lower performance when using the PT and PT-FT methods. Additionally, the embedding layer activation vectors and raw catchment attributes were used to cluster watersheds and analyze whether the models learned the similarities between them. The Transformer model demonstrated improved performance among watersheds with similar activation vectors, proving that utilizing information from other pre-trained watersheds enhances the prediction performance. This study compared the suitable models and training methods for each multi-purpose dam and highlighted the necessity of constructing deep learning models using PT and PT-FT methods for domestic watersheds. Furthermore, the results confirmed that the Transformer model outperforms the LSTM-MSV-S2S model when applying PT and PT-FT methods.
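
To make the model setup concrete, here is a hypothetical PyTorch sketch of one way to fuse static catchment attributes with a daily meteorological sequence in a Transformer-based inflow predictor; it is not the paper's implementation, and all dimensions are assumptions.

```python
# Hypothetical sketch: fuse static catchment attributes with a daily
# meteorological sequence for inflow prediction. Dimensions are assumptions.
import torch
import torch.nn as nn

class InflowTransformer(nn.Module):
    def __init__(self, n_met=5, n_attr=10, d_model=64):
        super().__init__()
        self.met_proj = nn.Linear(n_met, d_model)
        self.attr_embed = nn.Linear(n_attr, d_model)   # catchment-attribute embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.head = nn.Linear(d_model, 1)              # next-day inflow

    def forward(self, met, attr):
        # met: (B, T, n_met) daily forcing; attr: (B, n_attr) static attributes
        tokens = self.met_proj(met) + self.attr_embed(attr).unsqueeze(1)
        h = self.encoder(tokens)
        return self.head(h[:, -1])                     # predict from last time step

model = InflowTransformer()
y = model(torch.randn(4, 365, 5), torch.randn(4, 10))
# Pretraining (PT): train on all basins pooled; fine-tuning (PT-FT):
# continue training on the target basin, typically with a smaller learning rate.
```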

Korean CSAT Problem Solving with KoBigBird (KoBigBird를 활용한 수능 국어 문제풀이 모델)

  • Park, Nam-Jun;Kim, Jaekwang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.11a
    • /
    • pp.207-210
    • /
    • 2022
  • Recently, research on machine reading comprehension has been active in the natural language processing field. However, cases where Korean machine reading comprehension training has been applied to actual problem solving are hard to find. Previous studies applied artificial intelligence (AI) models to solving CSAT English and CSAT mathematics problems, but none applied them to CSAT Korean. Moreover, the AI solvers for CSAT English and CSAT mathematics scored only 12 and 16 points, respectively, results that fall short of expectations given that the CSAT consists of multiple-choice questions. This paper therefore trains a Transformer-based model on a Korean machine reading comprehension dataset and applies it to solving CSAT Korean problems. To this end, the dataset was transformed so that each answer choice of the multiple-choice items is rephrased as a question from which the model can derive an answer. Furthermore, to overcome the input-length limit of BERT (Bidirectional Encoder Representations from Transformers), KoBigBird, a Transformer-based model that can handle longer inputs and is well suited to Korean machine reading comprehension, was adopted as the pre-trained model to improve performance.

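As a hedged illustration of the described setup, the sketch below poses each answer choice as a question over a long passage using a BigBird-style Korean encoder via Hugging Face transformers; the checkpoint name is an assumption about the public KoBigBird release, and the classification head would need fine-tuning on the reformulated CSAT data.

```python
# Hypothetical sketch: pose each multiple-choice option as a question over a
# long passage with a BigBird-style Korean encoder.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "monologg/kobigbird-bert-base"   # assumed public KoBigBird checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def best_choice(passage: str, choices: list[str]) -> int:
    """Score each (passage, choice-as-question) pair; return the best index."""
    enc = tokenizer([passage] * len(choices), choices, padding=True,
                    truncation=True, max_length=4096, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return int(logits.softmax(-1)[:, 1].argmax())
```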

Semantic Similarity Calculation based on Siamese TRAT (트랜스포머 인코더와 시암넷 결합한 시맨틱 유사도 알고리즘)

  • Lu, Xing-Cen;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.05a
    • /
    • pp.397-400
    • /
    • 2021
  • To solve the problem that existing computing methods cannot adequately represent the semantic features of sentences, Siamese TRAT, a semantic feature extraction model based on the Transformer encoder, is proposed. The Transformer model is used to fully extract the semantic information within sentences and to perform deep semantic encoding of them. In addition, an interactive attention mechanism is introduced to extract features of the association between the two sentences, which makes the model better at capturing the important semantic information inside each sentence. As a result, the semantic understanding and generalization ability of the model improve. The experimental results show that the proposed model significantly improves accuracy on semantic similarity calculation tasks in English and Chinese and is more effective than existing methods.
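
A minimal PyTorch sketch of the Siamese arrangement, two weight-shared Transformer-encoder towers whose pooled outputs are compared by cosine similarity, follows; the paper's interactive attention module is omitted, and all sizes are assumptions.

```python
# Hypothetical sketch of a Siamese Transformer-encoder similarity model;
# the paper's interactive attention module is omitted for brevity.
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    def __init__(self, vocab=30000, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def encode(self, ids):                   # ids: (B, T) token ids
        h = self.encoder(self.embed(ids))
        return h.mean(dim=1)                 # mean-pooled sentence vector

    def forward(self, a, b):
        # The same weight-shared tower encodes both sentences.
        return nn.functional.cosine_similarity(self.encode(a), self.encode(b))

model = SiameseEncoder()
sim = model(torch.randint(0, 30000, (2, 16)), torch.randint(0, 30000, (2, 16)))
```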

The implementation of the color component 2-D DWT Processor for the JPEG 2000 hard-wired encoder (JPEG 2000 Hard-wired Encoder를 위한 칼라 2-D DWT Processor의 구현)

  • Lee, Sung-Mok;Cho, Sung-Dae;Kang, Bong-Soon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.9 no.4
    • /
    • pp.321-328
    • /
    • 2008
  • In this paper, we propose a hardware architecture for the two-dimensional discrete wavelet transform (2-D DWT) and quantization for use in JPEG 2000. A color 2-D DWT processor is proposed for the JPEG 2000 hard-wired encoder. The JPEG 2000 DWT processor uses the Daubechies (9,7) biorthogonal filter, and the design keeps the DWT transformer's error within ±1 LSB during compression and decompression. We designed the DWT filters using a shift-and-add structure instead of multipliers, which raise hardware complexity; this improves the filters' operating speed and reduces hardware complexity. The proposed system is designed in the hardware description language Verilog-HDL and verified with the Synopsys Design Analyzer using a TSMC 0.25 µm ASIC library.

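As a software reference for the hardware above, the sketch below runs a one-level 2-D DWT with the 9/7 biorthogonal filter, which PyWavelets exposes as 'bior4.4' (assuming that package is available); the hardware replaces these floating-point multiplies with shift-and-add arithmetic.

```python
# Software reference for the 2-D DWT stage: one decomposition level with the
# 9/7 biorthogonal filter ('bior4.4' in PyWavelets).
import numpy as np
import pywt

image = np.random.rand(256, 256) * 255
cA, (cH, cV, cD) = pywt.dwt2(image, 'bior4.4')    # LL, (LH, HL, HH) subbands
restored = pywt.idwt2((cA, (cH, cV, cD)), 'bior4.4')
assert np.allclose(restored, image)               # near-perfect reconstruction
```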

A Study on Fine-Tuning and Transfer Learning to Construct Binary Sentiment Classification Model in Korean Text (한글 텍스트 감정 이진 분류 모델 생성을 위한 미세 조정과 전이학습에 관한 연구)

  • JongSoo Kim
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.28 no.5
    • /
    • pp.15-30
    • /
    • 2023
  • Recently, generative models based on the Transformer architecture, such as ChatGPT, have been gaining significant attention. The Transformer architecture has been applied to various neural network models, including Google's BERT (Bidirectional Encoder Representations from Transformers) sentence-generation model. In this paper, a method is proposed to create a binary text classification model that determines whether a comment on a Korean movie review is positive or negative. To accomplish this, a pre-trained multilingual BERT sentence-generation model is fine-tuned and transfer-learned on a new Korean training dataset. A pre-trained multilingual BERT-Base model covering 104 languages, with 12 layers, 768 hidden units, 12 attention heads, and 110M parameters, is used. To turn the pre-trained BERT-Base model into a text classification model, the input and output layers were fine-tuned, resulting in a new model with 178 million parameters. Using the fine-tuned model, with a maximum word count of 128, a batch size of 16, and 5 epochs, transfer learning was conducted with 10,000 training samples and 5,000 test samples. The resulting binary sentiment classification model for Korean movie reviews achieved an accuracy of 0.9582, a loss of 0.1177, and an F1 score of 0.81. Transfer learning with a dataset five times larger produced a model with an accuracy of 0.9562, a loss of 0.1202, and an F1 score of 0.86.
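
A hedged sketch of the described transfer-learning setup follows: multilingual BERT-Base with a 2-label classification head, trained at a maximum length of 128 tokens. The learning rate and the example reviews are assumptions; dataset loading is left abstract.

```python
# Hedged sketch of the described setup: turn multilingual BERT-Base into a
# binary sentiment classifier for Korean movie reviews.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-multilingual-cased"   # 104 languages, 12 layers, 110M params
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # lr is an assumption

def train_step(texts, labels):
    """One transfer-learning step on a batch of reviews, max length 128."""
    enc = tokenizer(texts, padding="max_length", truncation=True,
                    max_length=128, return_tensors="pt")
    out = model(**enc, labels=torch.tensor(labels))  # cross-entropy computed inside
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

# Example batch (hypothetical reviews): 1 = positive, 0 = negative.
loss = train_step(["정말 재미있는 영화였다", "지루하고 실망스러웠다"], [1, 0])
```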