• Title/Summary/Keyword: AttentionBiLSTM


DG-based SPO tuple recognition using self-attention M-Bi-LSTM

  • Jung, Joon-young
    • ETRI Journal / v.44 no.3 / pp.438-449 / 2022
  • This study proposes a dependency grammar-based self-attention multilayered bidirectional long short-term memory (DG-M-Bi-LSTM) model for subject-predicate-object (SPO) tuple recognition from natural language (NL) sentences. To add recent knowledge to the knowledge base autonomously, it is essential to extract knowledge from numerous NL data. Therefore, this study proposes a high-accuracy SPO tuple recognition model that requires a small amount of learning data to extract knowledge from NL sentences. The accuracy of SPO tuple recognition using DG-M-Bi-LSTM is compared with that using NL-based self-attention multilayered bidirectional LSTM, DG-based bidirectional encoder representations from transformers (BERT), and NL-based BERT to evaluate its effectiveness. The DG-M-Bi-LSTM model achieves the best results in terms of recognition accuracy for extracting SPO tuples from NL sentences even though it has fewer deep neural network (DNN) parameters than BERT. In particular, its accuracy is better than that of BERT when the learning data are limited. Additionally, its pretrained DNN parameters can be applied to other domains because it learns the structural relations in NL sentences.
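
The paper itself includes no code, but the core architecture it names (a multilayered Bi-LSTM whose token representations are re-weighted by self-attention before token-level classification) can be sketched roughly as below. The vocabulary size, tag inventory, and all hyperparameters are illustrative assumptions, and the dependency-grammar input encoding described in the abstract is not reproduced here.

```python
import torch
import torch.nn as nn

class SelfAttentionMBiLSTMTagger(nn.Module):
    """Rough sketch: multilayered Bi-LSTM + self-attention for SPO tagging."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256,
                 num_layers=3, num_tags=7, num_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                              batch_first=True, bidirectional=True)
        # Self-attention over the Bi-LSTM outputs (2*hidden_dim per token).
        self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads,
                                          batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids, pad_mask=None):
        x = self.embed(token_ids)                    # (B, T, E)
        h, _ = self.bilstm(x)                        # (B, T, 2H)
        a, _ = self.attn(h, h, h,                    # re-weight each token by
                         key_padding_mask=pad_mask)  # attending to the sentence
        return self.classifier(a)                    # (B, T, num_tags) tag scores

# Toy usage: a batch of two padded sentences of length 6.
tokens = torch.randint(1, 10000, (2, 6))
logits = SelfAttentionMBiLSTMTagger()(tokens)
print(logits.shape)  # torch.Size([2, 6, 7])
```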

Encoding Dictionary Feature for Deep Learning-based Named Entity Recognition

  • Ronran, Chirawan;Unankard, Sayan;Lee, Seungwoo
    • International Journal of Contents / v.17 no.4 / pp.1-15 / 2021
  • Named entity recognition (NER) is a crucial NLP task that aims to extract information from texts. To build NER systems, deep learning (DL) models are trained with dictionary features by mapping each word in the dataset to dictionary features and generating a unique index. However, this technique might generate noisy labels, which pose significant challenges for the NER task. In this paper, we propose DL-dictionary features and evaluate them on two datasets: the OntoNotes 5.0 dataset and our new infectious disease outbreak dataset named GFID. We used (1) a character-level Bidirectional Long Short-Term Memory (BiLSTM) embedding and (2) a pre-trained word embedding, concatenated with (3) our proposed features, named the Convolutional Neural Network (CNN), BiLSTM, and self-attention dictionaries, respectively. The combined features (1-3) were fed through a BiLSTM - Conditional Random Field (CRF) layer to predict named entity classes as outputs. We compared these outputs with predictions from the BiLSTM character embedding, pre-trained embedding, and dictionary features of previous research, which used exact-matching and partial-matching dictionary techniques. The findings showed that the model employing our dictionary features outperformed the models that used existing dictionary features. We also computed the F1 score on the GFID dataset to apply this technique to extracting medical and healthcare information.
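
As a rough illustration of the feature-concatenation step described above, the sketch below (not the authors' code) joins per-token character, pretrained-word, and dictionary vectors before a Bi-LSTM. The feature dimensions are assumptions, the paper's CNN/BiLSTM/self-attention dictionary encoders are not reproduced, and a plain linear emission layer stands in for the CRF decoder.

```python
import torch
import torch.nn as nn

class FeatureConcatBiLSTM(nn.Module):
    """Sketch: concatenate char, word, and dictionary features per token,
    then run a Bi-LSTM; a linear layer stands in for the CRF decoder."""
    def __init__(self, char_dim=50, word_dim=300, dict_dim=32,
                 hidden_dim=200, num_labels=19):
        super().__init__()
        in_dim = char_dim + word_dim + dict_dim
        self.bilstm = nn.LSTM(in_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.emissions = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, char_feats, word_feats, dict_feats):
        # Per-token concatenation of the three feature streams: (B, T, in_dim).
        x = torch.cat([char_feats, word_feats, dict_feats], dim=-1)
        h, _ = self.bilstm(x)
        return self.emissions(h)  # per-token label scores for a CRF layer

# Toy usage with pre-computed feature tensors for a batch of 4 sentences.
B, T = 4, 20
scores = FeatureConcatBiLSTM()(torch.randn(B, T, 50),
                               torch.randn(B, T, 300),
                               torch.randn(B, T, 32))
print(scores.shape)  # torch.Size([4, 20, 19])
```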

Comparative Analysis of RNN Architectures and Activation Functions with Attention Mechanisms for Mars Weather Prediction

  • Jaehyeok Jo;Yunho Sin;Bo-Young Kim;Jihoon Moon
    • Journal of the Korea Society of Computer and Information / v.29 no.10 / pp.1-9 / 2024
  • In this paper, we present a comparative analysis to evaluate the impact of activation functions and attention mechanisms on the performance of time-series models for Mars meteorological data. Mars meteorological data are nonlinear and irregular due to low atmospheric density, rapid temperature variations, and complex terrain. We use long short-term memory (LSTM), bidirectional LSTM (BiLSTM), gated recurrent unit (GRU), and bidirectional GRU (BiGRU) architectures to evaluate the effectiveness of different activation functions and attention mechanisms. The activation functions tested include rectified linear unit (ReLU), leaky ReLU, exponential linear unit (ELU), Gaussian error linear unit (GELU), Swish, and scaled ELU (SELU), and model performance is measured using mean absolute error (MAE) and root mean square error (RMSE) metrics. Our results show that integrating attention mechanisms improves both MAE and RMSE, with Swish and ReLU achieving the best performance for minimum temperature prediction. Conversely, GELU and ELU were less effective for pressure prediction. These results highlight the critical role of selecting appropriate activation functions and attention mechanisms in improving model accuracy for complex time-series forecasting.
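
One way to read this comparison is as a single model skeleton in which the recurrent cell, the activation function, and the attention pooling are swappable. The PyTorch sketch below illustrates that setup together with MAE/RMSE evaluation; the feature count, window length, and additive-style attention are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class AttentiveRNNRegressor(nn.Module):
    """Sketch: Bi-GRU/Bi-LSTM over a weather window, attention pooling,
    and a configurable activation before the regression head."""
    def __init__(self, n_features=5, hidden=64, rnn="gru", act="swish"):
        super().__init__()
        rnn_cls = nn.GRU if rnn == "gru" else nn.LSTM
        self.rnn = rnn_cls(n_features, hidden, batch_first=True,
                           bidirectional=True)
        acts = {"relu": nn.ReLU(), "leaky_relu": nn.LeakyReLU(),
                "elu": nn.ELU(), "gelu": nn.GELU(),
                "selu": nn.SELU(), "swish": nn.SiLU()}
        self.act = acts[act]
        self.attn_score = nn.Linear(2 * hidden, 1)   # additive-style scoring
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                            # x: (B, T, n_features)
        h, _ = self.rnn(x)                           # (B, T, 2H)
        w = torch.softmax(self.attn_score(h), dim=1) # attention weights over time
        ctx = (w * h).sum(dim=1)                     # weighted temporal pooling
        return self.head(self.act(ctx)).squeeze(-1)

# Toy evaluation with MAE and RMSE on random data.
x, y = torch.randn(8, 30, 5), torch.randn(8)
pred = AttentiveRNNRegressor()(x)
mae = (pred - y).abs().mean()
rmse = ((pred - y) ** 2).mean().sqrt()
print(mae.item(), rmse.item())
```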

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kim, Kwangjin;Lee, Chilwoo
    • Smart Media Journal / v.11 no.10 / pp.65-75 / 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of results such as text, images, and music. In this paper, we propose a method for preprocessing audio data, using Niko's MIDI Pack sound-source files as the dataset, and for generating music with a Bi-LSTM. Based on the generated root note, the hidden layers are stacked into multiple layers to create new notes suitable for the musical composition, and an attention mechanism is applied to the output gate of the decoder to weight the factors that affect the data input from the encoder. Settings such as the loss function and optimization method are applied as parameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses note pitches obtained by separating the treble and bass clefs, note lengths, rests, rest lengths, and chords to improve the efficiency and prediction of the MIDI deep learning process. The trained model generates sound that follows the development of the musical scale and is distinct from noise, and we aim to contribute to generating harmonically stable music.
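
The paper's exact encoder-decoder with attention applied at the decoder's output gate cannot be reconstructed from the abstract alone; the sketch below only illustrates the multi-channel idea, embedding assumed pitch/duration/rest channels, running a Bi-LSTM with attention pooling, and predicting the next pitch. All vocabulary sizes and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class MultiChannelMusicBiLSTM(nn.Module):
    """Sketch: pitch/duration/rest channels embedded, concatenated, passed
    through a Bi-LSTM with attention, predicting the next pitch class."""
    def __init__(self, n_pitch=128, n_dur=32, n_rest=32, embed=32, hidden=128):
        super().__init__()
        self.pitch_emb = nn.Embedding(n_pitch, embed)
        self.dur_emb = nn.Embedding(n_dur, embed)
        self.rest_emb = nn.Embedding(n_rest, embed)
        self.bilstm = nn.LSTM(3 * embed, hidden, batch_first=True,
                              bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, n_pitch)

    def forward(self, pitch, dur, rest):          # each: (B, T) integer codes
        x = torch.cat([self.pitch_emb(pitch), self.dur_emb(dur),
                       self.rest_emb(rest)], dim=-1)
        h, _ = self.bilstm(x)
        w = torch.softmax(self.attn(h), dim=1)    # attention over time steps
        ctx = (w * h).sum(dim=1)
        return self.out(ctx)                      # logits for the next pitch

# Toy usage: a batch of 2 sequences of 16 notes.
B, T = 2, 16
logits = MultiChannelMusicBiLSTM()(torch.randint(0, 128, (B, T)),
                                   torch.randint(0, 32, (B, T)),
                                   torch.randint(0, 32, (B, T)))
print(logits.shape)  # torch.Size([2, 128])
```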

Applying a Novel Neuroscience Mining (NSM) Method to fNIRS Dataset for Predicting the Business Problem Solving Creativity: Emphasis on Combining CNN, BiLSTM, and Attention Network

  • Kim, Kyu Sung;Kim, Min Gyeong;Lee, Kun Chang
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.1-7 / 2022
  • With the development of artificial intelligence, efforts to incorporate neuroscience mining with AI have increased. Neuroscience mining, also known as NSM, expands on this concept by combining computational neuroscience and business analytics. Using an fNIRS (functional near-infrared spectroscopy)-based experimental dataset, we investigated the potential of NSM in the context of BPSC (business problem-solving creativity) prediction. Although BPSC is regarded as an essential business differentiator and a cognitive resource that is difficult to imitate, measuring it is a challenging task. In the context of NSM, appropriate methods for assessing and predicting BPSC are still in their infancy. In this sense, we propose a novel NSM method that systematically combines CNN, BiLSTM, and attention networks to significantly enhance BPSC prediction performance. We utilized a dataset containing over 150 thousand fNIRS-measured data points to evaluate the validity of the proposed NSM method. Empirical evidence demonstrates that the proposed NSM method shows the most robust performance when compared with benchmarking methods.
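
A minimal sketch of the CNN + BiLSTM + attention combination on windowed fNIRS signals might look as follows; the channel count, window length, and binary high/low BPSC target are assumptions made purely for illustration and do not reflect the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAttention(nn.Module):
    """Sketch: 1-D CNN over fNIRS channels, Bi-LSTM over time, attention
    pooling, and a binary head for high/low BPSC prediction."""
    def __init__(self, n_channels=48, conv=64, hidden=64, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, conv, kernel_size=5, padding=2),
            nn.ReLU())
        self.bilstm = nn.LSTM(conv, hidden, batch_first=True,
                              bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                                 # x: (B, T, n_channels)
        c = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (B, T, conv)
        h, _ = self.bilstm(c)
        w = torch.softmax(self.attn(h), dim=1)            # attention over time
        return self.fc((w * h).sum(dim=1))                # class logits

# Toy usage: 4 windows of 200 samples from 48 fNIRS channels.
print(CNNBiLSTMAttention()(torch.randn(4, 200, 48)).shape)  # torch.Size([4, 2])
```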

Predicting Stock Prices Based on Online News Content and Technical Indicators by Combinatorial Analysis Using CNN and LSTM with Self-attention

  • Sang Hyung Jung;Gyo Jung Gu;Dongsung Kim;Jong Woo Kim
    • Asia Pacific Journal of Information Systems / v.30 no.4 / pp.719-740 / 2020
  • The stock market changes continuously as new information emerges, affecting the judgments of investors. Online news articles are valued as a traditional window through which investors learn about the various information that affects the stock market. This paper proposes new ways to utilize online news articles together with technical indicators. The suggested hybrid model consists of three models. First, a self-attention-based convolutional neural network (CNN) model, considered to be better at interpreting the semantics of long texts, uses news content as input. Second, a self-attention-based bidirectional long short-term memory (bi-LSTM) neural network model for short texts uses news titles as input. Third, a bi-LSTM model, considered to be better at analyzing contextual information and time-series data, uses 19 technical indicators as input. We used news articles from the previous day and technical indicators from the past seven days to predict the share price of the next day. An experiment was performed with Korean stock market data and news articles from 33 top companies over three years. Through this experiment, the proposed model showed better performance than previous approaches, which have mainly focused on news titles. This paper demonstrates that news titles and content should be treated in different ways for superior stock price prediction.
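
The three-branch structure described above can be sketched as follows; the vocabulary, embedding sizes, attention heads, and the simple mean/last-step pooling used to fuse the branches are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class HybridStockModel(nn.Module):
    """Sketch of the three-branch idea: CNN + self-attention for news content,
    Bi-LSTM + self-attention for titles, Bi-LSTM for 19 technical indicators,
    fused into one next-day prediction."""
    def __init__(self, vocab=20000, embed=128, hidden=64, n_indicators=19):
        super().__init__()
        self.emb = nn.Embedding(vocab, embed, padding_idx=0)
        # Branch 1: news content (long text).
        self.content_conv = nn.Conv1d(embed, hidden, kernel_size=3, padding=1)
        self.content_attn = nn.MultiheadAttention(hidden, 4, batch_first=True)
        # Branch 2: news titles (short text).
        self.title_lstm = nn.LSTM(embed, hidden, batch_first=True,
                                  bidirectional=True)
        self.title_attn = nn.MultiheadAttention(2 * hidden, 4, batch_first=True)
        # Branch 3: 7-day window of technical indicators.
        self.ind_lstm = nn.LSTM(n_indicators, hidden, batch_first=True,
                                bidirectional=True)
        self.fc = nn.Linear(hidden + 2 * hidden + 2 * hidden, 1)

    def forward(self, content_ids, title_ids, indicators):
        c = self.content_conv(self.emb(content_ids).transpose(1, 2)).transpose(1, 2)
        c, _ = self.content_attn(c, c, c)
        t, _ = self.title_lstm(self.emb(title_ids))
        t, _ = self.title_attn(t, t, t)
        i, _ = self.ind_lstm(indicators)
        fused = torch.cat([c.mean(1), t.mean(1), i[:, -1]], dim=-1)
        return self.fc(fused).squeeze(-1)     # next-day share price estimate

# Toy usage: 2 samples with 300 content tokens, 15 title tokens, 7x19 indicators.
out = HybridStockModel()(torch.randint(1, 20000, (2, 300)),
                         torch.randint(1, 20000, (2, 15)),
                         torch.randn(2, 7, 19))
print(out.shape)  # torch.Size([2])
```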

MalEXLNet: A semantic analysis and detection method of malware API sequence based on EXLNet model

  • Xuedong Mao;Yuntao Zhao;Yongxin Feng;Yutao Hu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.10 / pp.3060-3083 / 2024
  • With the continuous advancement of malicious code polymorphism and obfuscation techniques, the performance of traditional machine learning-based detection methods for malware variants has gradually declined. Additionally, conventional pre-trained models cannot adequately capture the contextual semantic information of malicious code or appropriately represent polysemous words. To enhance the efficiency of malware variant detection, this paper proposes the MalEXLNet intelligent semantic analysis and detection architecture for malware. This architecture leverages malware API call sequences and employs an improved pre-training model for semantic vector representation, effectively utilizing the semantic information of API call sequences. It constructs a hybrid deep learning model, CBAM+AttentionBiLSTM, for training and classification prediction. Furthermore, incorporating the KMeansSMOTE algorithm achieves balanced processing of small-sample data, ensuring the model maintains robust performance in detecting malicious variants from rare malware families. In comparative experiments on the generalized datasets Ember and Catak, the results show that the proposed MalEXLNet architecture achieves excellent performance in malware classification and detection tasks, with accuracies of 98.85% and 94.46% on the two datasets, and macro-averaged and micro-averaged metrics exceeding 98% and 92%, respectively.
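
The class-balancing step can be approximated with the KMeansSMOTE implementation in the imbalanced-learn package (an assumed substitute for whatever implementation the authors used). The sketch below resamples synthetic stand-in feature vectors for a rare malware family and does not reproduce the CBAM+AttentionBiLSTM classifier itself.

```python
# pip install imbalanced-learn scikit-learn
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import KMeansSMOTE

# Synthetic stand-in for API-call-sequence feature vectors with a rare family.
X, y = make_classification(n_samples=2000, n_features=64, n_informative=16,
                           weights=[0.95, 0.05], random_state=0)
print("before:", Counter(y))

# Oversample the minority class inside k-means clusters of the feature space;
# the threshold controls which clusters are considered for oversampling.
X_res, y_res = KMeansSMOTE(random_state=0,
                           cluster_balance_threshold=0.05).fit_resample(X, y)
print("after: ", Counter(y_res))
```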

A Study on Image Generation from Sentence Embedding Applying Self-Attention (Self-Attention을 적용한 문장 임베딩으로부터 이미지 생성 연구)

  • Yu, Kyungho;No, Juhyeon;Hong, Taekeun;Kim, Hyeong-Ju;Kim, Pankoo
    • Smart Media Journal / v.10 no.1 / pp.63-69 / 2021
  • When a person reads a sentence and understands it, they do so by recalling the main words in the sentence as images. Text-to-image generation is what allows computers to perform this associative process. Previous deep learning-based text-to-image models extract text features using a Convolutional Neural Network (CNN)-Long Short Term Memory (LSTM) and a bidirectional LSTM, and generate an image by feeding the features into a GAN. These previous models use basic embeddings for text feature extraction and take a long time to train because images are generated using several modules. Therefore, in this research, we propose a method of extracting features for sentence embedding using the attention mechanism, which has improved performance in the natural language processing field, and generating an image by feeding the extracted features into a GAN. In our experiments, the inception score was higher than that of the model used in previous studies, and when judged visually, the generated images expressed the features of the input sentence well. In addition, even when a long sentence was input, an image that expressed the sentence well was generated.
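
A rough sketch of the idea, with a self-attention sentence encoder producing a conditioning vector for a toy generator, is shown below; the vocabulary, dimensions, and the fully connected generator are placeholders and not the GAN architecture actually used in the study.

```python
import torch
import torch.nn as nn

class SentenceSelfAttentionEncoder(nn.Module):
    """Sketch: embed tokens, apply self-attention, mean-pool into a sentence
    vector that conditions an image generator."""
    def __init__(self, vocab=10000, embed=256, heads=4, cond_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, embed, padding_idx=0)
        self.attn = nn.MultiheadAttention(embed, heads, batch_first=True)
        self.proj = nn.Linear(embed, cond_dim)

    def forward(self, ids):
        x = self.emb(ids)
        a, _ = self.attn(x, x, x)
        return self.proj(a.mean(dim=1))       # (B, cond_dim) sentence embedding

class TinyConditionalGenerator(nn.Module):
    """Toy stand-in for the GAN generator: noise + sentence condition -> image."""
    def __init__(self, noise_dim=100, cond_dim=128, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim + cond_dim, 512), nn.ReLU(),
                                 nn.Linear(512, img_pixels), nn.Tanh())

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=-1)).view(-1, 3, 64, 64)

# Toy usage: one sentence conditions one generated 64x64 image.
cond = SentenceSelfAttentionEncoder()(torch.randint(1, 10000, (1, 12)))
img = TinyConditionalGenerator()(torch.randn(1, 100), cond)
print(img.shape)  # torch.Size([1, 3, 64, 64])
```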

Video Saliency Detection Using Bi-directional LSTM

  • Chi, Yang;Li, Jinjiang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.6 / pp.2444-2463 / 2020
  • Saliency detection in video makes it possible to allocate computing resources more rationally and to reduce the amount of computation while improving accuracy. Deep learning can extract the edge features of images, providing technical support for video saliency. This paper proposes a new detection method. We combine a Convolutional Neural Network (CNN) and a Deep Bidirectional LSTM Network (DB-LSTM) to learn spatio-temporal features by exploring object motion information, generating a continuous sequence of saliency maps for the video frames. We also analyzed the sample database and found that human attention and saliency transitions are time-dependent, so we additionally considered cross-frame saliency detection in video. Finally, experiments show that our method is superior to other advanced methods.
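
A minimal sketch of the CNN + bidirectional-LSTM combination for per-frame saliency might look as follows; the small frame encoder, saliency-map resolution, and clip length are illustrative assumptions rather than the DB-LSTM configuration used in the paper.

```python
import torch
import torch.nn as nn

class CNNBiLSTMSaliency(nn.Module):
    """Sketch: a small CNN encodes each frame, a bidirectional LSTM links
    frames over time, and a decoder predicts a per-frame saliency map."""
    def __init__(self, feat=128, hidden=128, map_size=28):
        super().__init__()
        self.map_size = map_size
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(64 * 16, feat))
        self.bilstm = nn.LSTM(feat, hidden, batch_first=True,
                              bidirectional=True)
        self.decoder = nn.Linear(2 * hidden, map_size * map_size)

    def forward(self, clip):                    # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        f = self.frame_cnn(clip.flatten(0, 1)).view(B, T, -1)  # per-frame feats
        h, _ = self.bilstm(f)                   # temporal context, both directions
        maps = torch.sigmoid(self.decoder(h))   # (B, T, map_size*map_size)
        return maps.view(B, T, self.map_size, self.map_size)

# Toy usage: 1 clip of 8 frames at 112x112.
print(CNNBiLSTMSaliency()(torch.randn(1, 8, 3, 112, 112)).shape)
# torch.Size([1, 8, 28, 28])
```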

Feature Selection with Ensemble Learning for Prostate Cancer Prediction from Gene Expression

  • Abass, Yusuf Aleshinloye;Adeshina, Steve A.
    • International Journal of Computer Science & Network Security / v.21 no.12spc / pp.526-538 / 2021
  • Machine and deep learning-based models are emerging techniques that are being used to address prediction problems in biomedical data analysis. DNA sequence prediction is a critical problem that has attracted a great deal of attention in the biomedical domain. Machine and deep learning-based models have been shown to provide more accurate results than conventional regression-based models. Predicting the gene sequences that lead to cancerous diseases, such as prostate cancer, is crucial. Identifying the most important features in a gene sequence is a challenging task. Extracting the components of the gene sequence that can provide insight into the types of mutation in the gene is of great importance, as it will lead to effective drug design and promote the new concept of personalised medicine. In this work, we extracted the exons in the prostate gene sequences that were used in the experiment. We built a Deep Neural Network (DNN) and a Bi-directional Long Short-Term Memory (Bi-LSTM) model using k-mer encoding for the DNA sequence and one-hot encoding for the class label. The models were evaluated using different classification metrics. Our experimental results show that the DNN model achieves a training accuracy of 99 percent and a validation accuracy of 96 percent, while the Bi-LSTM model achieves a training accuracy of 95 percent and a validation accuracy of 91 percent.
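
A minimal sketch of the k-mer encoding plus Bi-LSTM classifier might look as follows; the choice of k = 3, the embedding size, and the toy sequence are assumptions chosen only to illustrate the encoding and the overall model shape, not the authors' settings.

```python
import torch
import torch.nn as nn
from itertools import product

def kmer_encode(seq, k=3):
    """Map a DNA string to integer indices of its overlapping k-mers."""
    vocab = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    return torch.tensor([vocab[seq[i:i + k]] for i in range(len(seq) - k + 1)])

class KmerBiLSTMClassifier(nn.Module):
    """Sketch: embed k-mer indices, run a Bi-LSTM, classify the sequence."""
    def __init__(self, k=3, embed=32, hidden=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(4 ** k, embed)
        self.bilstm = nn.LSTM(embed, hidden, batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, kmer_ids):                 # (B, T) k-mer indices
        h, _ = self.bilstm(self.emb(kmer_ids))
        return self.fc(h[:, -1])                 # logits from the last time step

# Toy usage: one short exon-like sequence.
ids = kmer_encode("ATGCGTACGTTAGC").unsqueeze(0)
print(KmerBiLSTMClassifier()(ids).shape)  # torch.Size([1, 2])
```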