• Title/Summary/Keyword: LSTM network

DR-LSTM: Dimension reduction based deep learning approach to predict stock price

  • Ah-ram Lee;Jae Youn Ahn;Ji Eun Choi;Kyongwon Kim
    • Communications for Statistical Applications and Methods
    • /
    • v.31 no.2
    • /
    • pp.213-234
    • /
    • 2024
  • In recent decades, increasing research attention has been directed toward predicting stock prices in financial markets using deep learning methods. For instance, the recurrent neural network (RNN) is known to be competitive for time-series data. Long short-term memory (LSTM) further improves the RNN by providing an alternative approach to the vanishing gradient problem, and gains predictive accuracy by retaining memory over longer periods. In this paper, we combine both supervised and unsupervised dimension reduction methods with LSTM to enhance forecasting performance and refer to this as the dimension reduction based LSTM (DR-LSTM) approach. As supervised dimension reduction methods, we use sliced inverse regression (SIR), sparse SIR, and kernel SIR. Furthermore, principal component analysis (PCA), sparse PCA, and kernel PCA are used as unsupervised dimension reduction methods. Using real stock market index data (S&P 500, STOXX Europe 600, and KOSPI), we present a comparative study of predictive accuracy between the six DR-LSTM methods and time series modeling.
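
As a rough illustration of the DR-LSTM pipeline described in the abstract, the sketch below chains an unsupervised reduction step (PCA) with a one-step-ahead LSTM forecaster using scikit-learn and Keras. The window length, number of components, layer sizes, and the synthetic data are illustrative assumptions, not the authors' settings; SIR or kernel variants would replace the PCA step.

```python
# Minimal PCA + LSTM sketch of the DR-LSTM idea (unsupervised variant).
# Window length, component count, and layer sizes are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from tensorflow import keras

def make_windows(features, target, lookback):
    """Slice into (samples, lookback, d) inputs and the next-step target."""
    X, y = [], []
    for t in range(len(features) - lookback):
        X.append(features[t:t + lookback])
        y.append(target[t + lookback])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
raw = rng.normal(size=(1000, 20)).cumsum(axis=0)   # toy stand-in for market features
price = raw[:, 0]                                  # stand-in for the index to predict

# 1) Dimension reduction step (PCA here; SIR / kernel variants would replace it).
reduced = PCA(n_components=5).fit_transform(raw)

# 2) LSTM forecasting step on the reduced features.
X, y = make_windows(reduced, price, lookback=30)
model = keras.Sequential([
    keras.layers.Input(shape=(30, 5)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```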

Deep Learning Model for Electric Power Demand Prediction Using Special Day Separation and Prediction Elements Extension (특수일 분리와 예측요소 확장을 이용한 전력수요 예측 딥 러닝 모델)

  • Park, Jun-Ho;Shin, Dong-Ha;Kim, Chang-Bok
    • Journal of Advanced Navigation Technology
    • /
    • v.21 no.4
    • /
    • pp.365-370
    • /
    • 2017
  • This study analyzes the correlation between weekday data and special-day data, which show different power demand patterns, builds separate data sets, and suggests ways to reduce power demand prediction error by using a deep learning network suited to each data set. In addition, we propose a method to improve prediction accuracy by adding environmental elements and a separation element to the meteorological elements that are the basic inputs for power demand prediction. Power demand for the entire data set was predicted with an LSTM, which is suited to learning time-series data, and for the special-day data set with a DNN. The experimental results show that the prediction accuracy is improved by adding prediction elements other than meteorological ones. The average RMSE on the entire data set was 0.2597 for the LSTM and 0.5474 for the DNN, indicating that the LSTM gave the better prediction there. The average RMSE on the special-day data set was 0.2201 for the DNN, indicating that the DNN predicted better than the LSTM on special days. The MAPE of the LSTM on the whole data set was 2.74%, and on the special days 3.07%.
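
A minimal sketch of the split-model idea (an LSTM for ordinary days, a DNN for special days) is given below, assuming hourly windows and a small feature vector; the feature layout, window length, and layer sizes are hypothetical, not taken from the paper.

```python
# Sketch of the split-model idea: an LSTM for ordinary days and a DNN for special days.
# Feature layout, window length, and layer sizes are assumptions for illustration.
import numpy as np
from tensorflow import keras

lookback, n_features = 24, 8           # e.g., hourly weather + calendar features

lstm_model = keras.Sequential([        # ordinary-day branch: sequence in, demand out
    keras.layers.Input(shape=(lookback, n_features)),
    keras.layers.LSTM(64),
    keras.layers.Dense(1),
])
dnn_model = keras.Sequential([         # special-day branch: flat feature vector in
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])
for m in (lstm_model, dnn_model):
    m.compile(optimizer="adam", loss="mse")

def predict_demand(is_special_day, seq_window, day_features):
    """Route a sample to the branch for its data set (training omitted in this sketch)."""
    if is_special_day:
        return dnn_model.predict(day_features[None, :], verbose=0)[0, 0]
    return lstm_model.predict(seq_window[None, :, :], verbose=0)[0, 0]
```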

Classification of Behavior of UTD Data using LSTM Technique (LSTM 기법을 적용한 UTD 데이터 행동 분류)

  • Jeung, Gyeo-wun;Ahn, Ji-min;Shin, Dong-in;Won, Geon;Park, Jong-bum
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.477-479
    • /
    • 2018
  • This study applies the LSTM (Long Short-Term Memory) technique, a type of artificial neural network. Among the 27 types of motion data released by UTD (University of Texas at Dallas), 3-axis acceleration and angular velocity data were fed to the basic LSTM and Deep Residual Bidir-LSTM techniques to classify the actions.
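
The sketch below shows one plausible LSTM classifier for such 6-channel inertial sequences (3-axis acceleration plus 3-axis angular velocity, 27 action classes). The stacked bidirectional layers only approximate the Deep Residual Bidir-LSTM variant mentioned above, and the sequence length and layer sizes are assumptions.

```python
# Minimal sketch of an LSTM classifier for 6-channel inertial sequences.
# Sequence length and layer sizes are illustrative assumptions.
import numpy as np
from tensorflow import keras

timesteps, channels, n_classes = 180, 6, 27
model = keras.Sequential([
    keras.layers.Input(shape=(timesteps, channels)),
    keras.layers.Bidirectional(keras.layers.LSTM(64, return_sequences=True)),
    keras.layers.Bidirectional(keras.layers.LSTM(64)),
    keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy batch standing in for windowed sensor recordings and their action labels.
X = np.random.randn(32, timesteps, channels).astype("float32")
y = np.random.randint(0, n_classes, size=32)
model.fit(X, y, epochs=1, verbose=0)
```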


Application of Informer for time-series NO2 prediction

  • Hye Yeon Sin;Minchul Kang;Joonsung Kang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.7
    • /
    • pp.11-18
    • /
    • 2023
  • In this paper, we evaluate deep learning time-series forecasting models. Recent studies show that such models perform better than traditional prediction models such as ARIMA. Among them are recurrent neural networks, which store previous information in the hidden layer. To address the vanishing gradient problem in these networks, the LSTM adds a small memory cell inside the recurrent network, and the Bi-LSTM adds a hidden layer that processes the data in the reverse direction. We compare the performance of Informer with that of other models (LSTM, Bi-LSTM, and Transformer) on real nitrogen dioxide (NO2) data. To evaluate the accuracy of each method, the root mean square error and the mean absolute error between the real and predicted values were computed. Consequently, Informer improves prediction accuracy compared with the other methods.
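
For reference, the two error measures used in the comparison are commonly defined as below; this is a small numpy sketch, not the authors' evaluation code, and the readings are toy values.

```python
# Root mean square error and mean absolute error, as conventionally defined.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

observed  = [21.0, 18.5, 25.2, 30.1]   # toy NO2 readings
predicted = [20.4, 19.0, 24.0, 31.5]
print(rmse(observed, predicted), mae(observed, predicted))
```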

Malware Detection Using Deep Recurrent Neural Networks with no Random Initialization

  • Amir Namavar Jahromi;Sattar Hashemi
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.8
    • /
    • pp.177-189
    • /
    • 2023
  • Malware detection is an increasingly important operational focus in cyber security, particularly given the fast pace of such threats (e.g., new malware variants introduced every day). There has been great interest in exploring the use of machine learning techniques in automating and enhancing the effectiveness of malware detection and analysis. In this paper, we present a deep recurrent neural network solution as a stacked Long Short-Term Memory (LSTM) with pre-training as a regularization method to avoid random network initialization. In our proposal, we use both global and short-term dependencies of the inputs. With pre-training, we avoid random initialization and are able to improve the accuracy and robustness of malware threat hunting. The proposed method speeds up convergence (in comparison to a stacked LSTM) by reducing the length of malware OpCode or bytecode sequences, so the complexity of the final method is reduced. This leads to better accuracy, a higher Matthews Correlation Coefficient (MCC), and a higher Area Under the Curve (AUC) in comparison to a standard LSTM with similar detection time. Our proposed method can be applied in real-time malware threat hunting, particularly for safety-critical systems such as eHealth or the Internet of Military Things, where poor convergence of the model could lead to catastrophic consequences. We evaluate the effectiveness of our proposed method on Windows, ransomware, Internet of Things (IoT), and Android malware datasets using both static and dynamic analysis. For IoT malware detection, we also present a comparative summary of the performance of our proposed method and the standard stacked LSTM method on an IoT-specific dataset. More specifically, our proposed method achieves an accuracy of 99.1% in detecting IoT malware samples, with an AUC of 0.985 and an MCC of 0.95, thus outperforming standard LSTM-based methods on these key metrics.
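
A hedged sketch of the "pre-train, then fine-tune" idea follows: an LSTM sequence autoencoder is fitted on the opcode sequences first, and its embedding and encoder weights then initialize the classifier instead of random values. A single encoder layer is shown for brevity (the paper stacks several), and the vocabulary size, sequence length, and layer sizes are assumptions rather than the paper's configuration.

```python
# Generic pre-training sketch for an LSTM opcode classifier (not the paper's exact scheme).
import numpy as np
from tensorflow import keras

vocab, seq_len, dim = 256, 200, 64
X = np.random.randint(0, vocab, size=(128, seq_len))   # toy opcode/byte sequences
y = np.random.randint(0, 2, size=128)                  # toy malware/benign labels

# 1) Unsupervised pre-training: sequence autoencoder over the same inputs.
embed = keras.layers.Embedding(vocab, dim)
enc_lstm = keras.layers.LSTM(dim)
autoencoder = keras.Sequential([
    keras.layers.Input(shape=(seq_len,)),
    embed,
    enc_lstm,
    keras.layers.RepeatVector(seq_len),
    keras.layers.LSTM(dim, return_sequences=True),
    keras.layers.Dense(vocab, activation="softmax"),
])
autoencoder.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
autoencoder.fit(X, X, epochs=1, verbose=0)

# 2) Supervised fine-tuning: reuse the pre-trained embedding and encoder LSTM,
#    so the classifier does not start from random initialization.
classifier = keras.Sequential([
    keras.layers.Input(shape=(seq_len,)),
    embed,
    enc_lstm,
    keras.layers.Dense(1, activation="sigmoid"),
])
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
classifier.fit(X, y, epochs=1, verbose=0)
```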

Using machine learning to forecast and assess the uncertainty in the response of a typical PWR undergoing a steam generator tube rupture accident

  • Tran Canh Hai Nguyen ;Aya Diab
    • Nuclear Engineering and Technology
    • /
    • v.55 no.9
    • /
    • pp.3423-3440
    • /
    • 2023
  • In this work, a multivariate time-series machine learning meta-model is developed to predict the transient response of a typical nuclear power plant (NPP) undergoing a steam generator tube rupture (SGTR). The model employs Recurrent Neural Networks (RNNs), including the Long Short-Term Memory (LSTM), the Gated Recurrent Unit (GRU), and a hybrid CNN-LSTM model. To address the uncertainty inherent in such predictions, a Bayesian Neural Network (BNN) was implemented. The models were trained using a database generated by the Best Estimate Plus Uncertainty (BEPU) methodology, coupling the thermal-hydraulics code RELAP5/SCDAP/MOD3.4 to the statistical tool DAKOTA to predict the variation in system response under various operational and phenomenological uncertainties. The RNN models successfully capture the underlying characteristics of the data with reasonable accuracy, and the BNN-LSTM approach offers an additional layer of insight into the level of uncertainty associated with the predictions. The results demonstrate that the LSTM outperforms the GRU, while the hybrid CNN-LSTM model is computationally the most efficient. This study aims to gain a better understanding of the capabilities and limitations of machine learning models in the context of nuclear safety. By expanding the application of ML models to more severe accident scenarios, where operators are under extreme stress and prone to errors, ML models can provide valuable support and act as expert systems to assist in decision-making while minimizing the chances of human error.
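
The hybrid CNN-LSTM shape mentioned above can be sketched as follows: a 1-D convolution extracts local temporal features before the LSTM summarizes the sequence. The number of plant signals, window length, and layer sizes are assumptions, not the study's configuration.

```python
# Minimal hybrid CNN-LSTM sketch for multivariate time-series regression.
from tensorflow import keras

lookback, n_signals, n_outputs = 60, 12, 4   # assumed plant-signal layout
model = keras.Sequential([
    keras.layers.Input(shape=(lookback, n_signals)),
    keras.layers.Conv1D(32, kernel_size=5, padding="same", activation="relu"),
    keras.layers.MaxPooling1D(pool_size=2),
    keras.layers.LSTM(64),
    keras.layers.Dense(n_outputs),           # predicted transient response variables
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```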

Multivariate Congestion Prediction using Stacked LSTM Autoencoder based Bidirectional LSTM Model

  • Vijayalakshmi, B;Thanga, Ramya S;Ramar, K
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.1
    • /
    • pp.216-238
    • /
    • 2023
  • In intelligent transportation systems, traffic management is an important task. Accurate forecasting of traffic characteristics such as flow, congestion, and density is still an active research area because of the non-linear nature and uncertainty of spatiotemporal data. Inclement weather, such as rain and snow, and special events such as holidays, accidents, and road closures have a significant impact on driving and on the average speed of vehicles, which lowers traffic capacity and causes widespread congestion. This work designs a model for multivariate short-term traffic congestion prediction using SLSTM_AE-BiLSTM. The proposed design consists of a Bidirectional Long Short Term Memory (BiLSTM) network to predict the traffic flow value and a Convolutional Neural Network (CNN) model to detect the congestion status. The model uses spatially static, temporally dynamic data. The stacked Long Short Term Memory Autoencoder (SLSTM AE) is used to encode the weather features into a reduced and more informative feature space. The BiLSTM model captures features from past and present traffic data simultaneously and identifies long-term dependencies, using the traffic data and the encoded weather data to perform the traffic flow prediction. The CNN model predicts the recurring congestion status based on the predicted traffic flow value for a particular urban traffic network. In this work, the publicly available Caltrans PeMS dataset with traffic parameters is used. The proposed model generates congestion predictions with an accuracy of 92.74%, which is slightly better than other deep learning models for congestion prediction.
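
The two main pieces of the SLSTM_AE-BiLSTM design can be sketched as below: a stacked LSTM autoencoder compresses the weather features, and a BiLSTM consumes the traffic sequence concatenated with the encoded weather to predict flow (the CNN congestion classifier is omitted). Feature counts, window length, and layer sizes are illustrative assumptions.

```python
# Sketch of a stacked LSTM autoencoder feeding a BiLSTM flow predictor.
from tensorflow import keras

steps, n_weather, n_traffic = 12, 10, 4

# 1) Stacked LSTM autoencoder: the encoder compresses the weather features.
w_in = keras.layers.Input(shape=(steps, n_weather))
h = keras.layers.LSTM(32, return_sequences=True)(w_in)
code = keras.layers.LSTM(8, return_sequences=True, name="weather_code")(h)
h = keras.layers.LSTM(32, return_sequences=True)(code)
w_out = keras.layers.TimeDistributed(keras.layers.Dense(n_weather))(h)
autoencoder = keras.Model(w_in, w_out)
autoencoder.compile(optimizer="adam", loss="mse")
encoder = keras.Model(w_in, code)      # reusable weather-feature compressor

# 2) BiLSTM flow predictor on traffic data concatenated with the encoded weather.
t_in = keras.layers.Input(shape=(steps, n_traffic))
merged = keras.layers.Concatenate()([t_in, code])
flow = keras.layers.Bidirectional(keras.layers.LSTM(64))(merged)
flow = keras.layers.Dense(1, name="predicted_flow")(flow)
predictor = keras.Model([t_in, w_in], flow)
predictor.compile(optimizer="adam", loss="mse")
```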

Prediction of rebound in shotcrete using deep bi-directional LSTM

  • Suzen, Ahmet A.;Cakiroglu, Melda A.
    • Computers and Concrete
    • /
    • v.24 no.6
    • /
    • pp.555-560
    • /
    • 2019
  • During the application of shotcrete, part of the concrete bounces back after hitting the surface, the reinforcement, or previously sprayed concrete. This rebound material is not returned to the mixture and is considered waste. In this study, a deep neural network model was developed to predict the rebound material during shotcrete application. The factors affecting rebound and the datasets for these parameters were obtained from previous experiments. The proposed deep neural network model uses the Long Short-Term Memory (LSTM) architecture, configured for this data set. In developing the proposed four-layer prediction model, the dataset was divided into 90% training and 10% test. The deep neural network was modeled with 11 input variables and one output variable, with the most appropriate hyperparameter values determined for prediction. The accuracy and error of the LSTM model were evaluated using MSE and RMSE. A success rate of 93.2% was achieved at the end of training and 85.6% in the test, a difference of 7.6%. In the next stage, the aim is to increase the success rate of the model by enlarging the data set with synthetic and experimental data. In addition, predicting the amount of rebound during dry-mix shotcrete application is expected to provide economic gains as well as contribute to environmental protection.
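
A minimal sketch of a small multi-layer LSTM regressor for the 11-input / 1-output setup with a 90/10 split is shown below. Treating the 11 mix and application parameters as a length-11 sequence is one plausible reading for illustration, not necessarily the authors' exact encoding, and the synthetic data and layer sizes are placeholders.

```python
# Sketch of an LSTM regressor for 11 input variables and one target, 90/10 split.
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

X = np.random.rand(500, 11, 1).astype("float32")   # toy stand-in for the experiments
y = np.random.rand(500).astype("float32")          # toy rebound values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

model = keras.Sequential([
    keras.layers.Input(shape=(11, 1)),
    keras.layers.LSTM(32, return_sequences=True),
    keras.layers.LSTM(16),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse",
              metrics=[keras.metrics.RootMeanSquaredError()])
model.fit(X_tr, y_tr, validation_data=(X_te, y_te), epochs=5, verbose=0)
```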

River Water Level Prediction Method based on LSTM Neural Network

  • Le, Xuan Hien;Lee, Giha
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2018.05a
    • /
    • pp.147-147
    • /
    • 2018
  • In this article, we use TensorFlow, an open-source software library developed for complex machine learning and deep neural network applications, although it is general enough to be applicable in a wide variety of other domains as well. The proposed model is based on a deep neural network, LSTM (Long Short-Term Memory), and predicts the river water level at Okcheon Station on the Geum River without using rainfall forecast information. For LSTM modeling, the input data are hourly water levels for 15 years, from 2002 to 2016, at four stations: three upstream stations (Sutong, Hotan, and Songcheon) and the forecasting-target station (Okcheon). The data are divided into a training set, a testing set, and a validation set. The model was formulated to predict the Okcheon Station water level for lead times from 3 to 12 hours. Although the model does not require the many inputs needed for rainfall-runoff simulation, such as climate, geography, and land use, the prediction is stable and reliable up to 9 hours of lead time, with a Nash-Sutcliffe efficiency (NSE) higher than 0.90 and a root mean square error (RMSE) lower than 12 cm. The results indicate that the method can reproduce the river water level time series and is applicable to practical flood forecasting in place of hydrologic modeling approaches.
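
The multi-station setup described above can be sketched as follows: hourly levels from the four stations form the input window, and the target is the Okcheon level a fixed lead time ahead. The window length, lead time, layer sizes, and synthetic data are illustrative assumptions, not the authors' configuration.

```python
# Sketch of an LSTM forecaster using four upstream/target gauges as inputs.
import numpy as np
from tensorflow import keras

def make_samples(levels, lookback=72, lead=9, target_col=3):
    """levels: (hours, 4) array of station water levels; predict target_col at t+lead."""
    X, y = [], []
    for t in range(len(levels) - lookback - lead):
        X.append(levels[t:t + lookback])
        y.append(levels[t + lookback + lead - 1, target_col])
    return np.array(X), np.array(y)

levels = np.random.rand(2000, 4).astype("float32")   # toy stand-in for 4 gauges
X, y = make_samples(levels)

model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1], 4)),
    keras.layers.LSTM(64),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, verbose=0)
```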


Real-Time Streaming Traffic Prediction Using Deep Learning Models Based on Recurrent Neural Network (순환 신경망 기반 딥러닝 모델들을 활용한 실시간 스트리밍 트래픽 예측)

  • Kim, Jinho;An, Donghyeok
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.2
    • /
    • pp.53-60
    • /
    • 2023
  • Recently, the demand for and traffic volume of various multimedia contents have been increasing rapidly on real-time streaming platforms. In this paper, we predict real-time streaming traffic to improve the quality of service (QoS). Statistical models have traditionally been used to predict network traffic; however, since real-time streaming traffic changes dynamically, we use recurrent neural network-based deep learning models rather than a statistical model. After collecting and preprocessing real-time streaming data, we apply vanilla RNN, LSTM, GRU, Bi-LSTM, and Bi-GRU models to predict real-time streaming traffic. In the evaluation, the training time and accuracy of each model are measured and compared.
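
A sketch of how the five recurrent variants could be built for a like-for-like comparison, with a simple timing loop, is given below; the layer width, window length, and synthetic data are assumptions, not the authors' code.

```python
# Build RNN, LSTM, GRU, Bi-LSTM, and Bi-GRU variants and compare training time / loss.
import time
import numpy as np
from tensorflow import keras

lookback, n_features = 30, 1
X = np.random.rand(512, lookback, n_features).astype("float32")  # toy traffic windows
y = np.random.rand(512).astype("float32")

def build(cell, bidirectional=False):
    layer = cell(32)
    if bidirectional:
        layer = keras.layers.Bidirectional(layer)
    model = keras.Sequential([keras.layers.Input(shape=(lookback, n_features)),
                              layer, keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    return model

variants = {
    "RNN":     build(keras.layers.SimpleRNN),
    "LSTM":    build(keras.layers.LSTM),
    "GRU":     build(keras.layers.GRU),
    "Bi-LSTM": build(keras.layers.LSTM, bidirectional=True),
    "Bi-GRU":  build(keras.layers.GRU, bidirectional=True),
}
for name, model in variants.items():
    start = time.time()
    hist = model.fit(X, y, epochs=2, verbose=0)
    print(name, round(time.time() - start, 2), "s", round(hist.history["loss"][-1], 4))
```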