• Title/Summary/Keyword: LSTM (Long Short-Term Memory models)


Reproduction of Long-term Memory in hydroclimatological variables using Deep Learning Model

  • Lee, Taesam;Tran, Trang Thi Kieu
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.101-101
    • /
    • 2020
  • Traditional stochastic simulation of hydroclimatological variables often underestimates the variability and correlation structure at larger timescales because long-term memory is difficult to preserve. The Long Short-Term Memory (LSTM) model, however, exhibits remarkable long-term memory through its recursive hidden and cell states. The current study therefore employed the LSTM model in the stochastic generation of hydrologic and climate variables to examine how well it can preserve long-term memory and overcome the drawbacks of conventional time series models such as the autoregressive (AR) model. A trigonometric function and the Rössler system, as well as real case studies of hydrological and climatological variables, were tested. The results showed that the LSTM model reproduced the variability and correlation structure at the larger timescale, as well as the key statistics of the original time domain, better than the AR and other traditional models. The hidden and cell states of the LSTM, which retain the long-memory and oscillation structure of the observations, allow better performance than the other tested conventional models. This good representation of long-term variability is important for water managers, since future water resources planning and management depend strongly on it.
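
The general idea described in the abstract can be illustrated with a minimal Keras sketch (not the authors' code): an LSTM is fitted to a synthetic trigonometric series and then run recursively so that its hidden and cell states, rather than an AR(p) filter, drive the simulated sequence. The window length, network size, and series are assumptions chosen for brevity.

```python
# Minimal sketch: LSTM trained on a synthetic oscillating series,
# then used recursively to generate a new sequence.
import numpy as np
import tensorflow as tf

# Synthetic "observed" series with a long oscillation plus noise
t = np.arange(2000)
series = np.sin(2 * np.pi * t / 200) + 0.1 * np.random.randn(len(t))

window = 50
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]                       # shape (samples, window, 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Recursive generation: feed predictions back in, so the LSTM's learned
# dynamics carry the long-term memory of the simulated series.
generated = list(series[:window])
for _ in range(500):
    x = np.array(generated[-window:])[None, :, None]
    generated.append(float(model.predict(x, verbose=0)[0, 0]))
```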


6-Parametric factor model with long short-term memory

  • Choi, Janghoon
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.5
    • /
    • pp.521-536
    • /
    • 2021
  • As life expectancy increases continuously around the world, accurate mortality forecasting becomes ever more important for maintaining social systems in the aging era. The most popular model currently in use is the Lee-Carter model, and various studies have sought to improve it; one such improvement is the 6-parametric factor model (6-PFM) introduced in this paper. To this new model, long short-term memory (LSTM) and regularized LSTM are applied in addition to vector autoregression (VAR), a traditional time-series method. The forecasting accuracies of several models, including the LC model, 4-PFM, 5-PFM, and three 6-PFMs, are compared using U.S. and Korean life tables. The results show that the 6-PFM forecasts better than the other models (the LC model, 4-PFM, and 5-PFM). Among the three 6-PFMs studied, the regularized LSTM performs better than the other two methods for most of the tests.
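
As an illustration only, the sketch below shows one common way an LSTM can be "regularized" (L2 penalties on the kernels plus dropout) when forecasting a small multivariate factor series, as an alternative to a VAR model. The paper's exact regularization scheme, factor definitions, and data are not reproduced; all shapes and hyperparameters here are assumptions.

```python
import numpy as np
import tensorflow as tf

n_factors, window = 6, 10                                    # e.g. six 6-PFM factors
series = np.cumsum(np.random.randn(120, n_factors), axis=0)  # placeholder factor paths

X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_factors)),
    tf.keras.layers.LSTM(
        16,
        kernel_regularizer=tf.keras.regularizers.l2(1e-4),     # L2 on input weights
        recurrent_regularizer=tf.keras.regularizers.l2(1e-4),  # L2 on recurrent weights
        dropout=0.1,
    ),
    tf.keras.layers.Dense(n_factors),   # one-step-ahead forecast of all factors
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, verbose=0)
```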

Multi-layered attentional peephole convolutional LSTM for abstractive text summarization

  • Rahman, Md. Motiur;Siddiqui, Fazlul Hasan
    • ETRI Journal
    • /
    • v.43 no.2
    • /
    • pp.288-298
    • /
    • 2021
  • Abstractive text summarization is the process of summarizing a given text by paraphrasing its facts while keeping the meaning intact. Manual summary generation is laborious and time-consuming. We present a summary generation model based on a multi-layered attentional peephole convolutional long short-term memory (MAPCoL) network that extracts abstractive summaries of large texts in an automated manner. We added an attention mechanism to a peephole convolutional LSTM to improve the overall quality of a summary by weighting the important parts of the source text during training. We evaluated the semantic coherence of our MAPCoL model on the popular CNN/Daily Mail dataset and found that MAPCoL outperformed other traditional LSTM-based models. We also found performance improvements of MAPCoL under different internal settings when compared with state-of-the-art models for abstractive text summarization.
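
For orientation only, here is a rough Keras sketch of a plain attentional LSTM encoder-decoder for summarization. It substitutes standard LSTM layers for the paper's multi-layered peephole *convolutional* LSTM (which has no off-the-shelf Keras layer); the vocabulary size and sequence lengths are made up.

```python
import tensorflow as tf

vocab, src_len, tgt_len, dim = 8000, 400, 60, 128

enc_in = tf.keras.Input(shape=(src_len,))
dec_in = tf.keras.Input(shape=(tgt_len,))

enc_emb = tf.keras.layers.Embedding(vocab, dim)(enc_in)
enc_seq, h, c = tf.keras.layers.LSTM(dim, return_sequences=True,
                                     return_state=True)(enc_emb)

dec_emb = tf.keras.layers.Embedding(vocab, dim)(dec_in)
dec_seq = tf.keras.layers.LSTM(dim, return_sequences=True)(
    dec_emb, initial_state=[h, c])

# Attention: each decoder step attends over the encoder outputs, i.e. it
# weights the "important parts of the source text" during training.
context = tf.keras.layers.Attention()([dec_seq, enc_seq])
merged = tf.keras.layers.Concatenate()([dec_seq, context])
out = tf.keras.layers.Dense(vocab, activation="softmax")(merged)

# Trained with teacher forcing: targets are the decoder inputs shifted by one.
model = tf.keras.Model([enc_in, dec_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```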

Prediction of Highly Pathogenic Avian Influenza (HPAI) Diffusion Path Using LSTM (LSTM을 활용한 고위험성 조류인플루엔자(HPAI) 확산 경로 예측)

  • Choi, Dae-Woo;Lee, Won-Been;Song, Yu-Han;Kang, Tae-Hun;Han, Ye-Ji
    • The Journal of Bigdata
    • /
    • v.5 no.1
    • /
    • pp.1-9
    • /
    • 2020
  • This study was conducted in 2018 with government funding (Ministry of Agriculture, Food and Rural Affairs), with support from the Agricultural, Food, and Rural Affairs Agency (project 318069-03-HD040), and is based on artificial intelligence-based analysis and patterning of HPAI spread. The LSTM (Long Short-Term Memory) model, built on a deep learning architecture, has recently been actively used in time series analysis and text mining. The LSTM model emerged to resolve the long-term dependency problem that occurs during the backpropagation through time (BPTT) process of an RNN. LSTM models have solved forecasting problems on variable-length sequence data very well and are still widely used. In this study, we used Call Detail Record (CDR) data provided by KT to identify the movement paths of people expected to be closely related to the virus, and we present the results of predicting movement paths by training an LSTM model on the paths of the persons concerned. The results of this study could be used to predict HPAI propagation routes, to select routes or areas on which to focus quarantine efforts, and thereby to reduce the spread of HPAI.
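
The CDR data and preprocessing used in the paper are not public, but the general shape of the task (treating a movement path as a sequence of region IDs and predicting the next region with an LSTM) can be sketched as below. The numbers of regions, sequence length, and random data are hypothetical.

```python
import numpy as np
import tensorflow as tf

n_regions, seq_len = 200, 24                       # assumed region count / time steps
paths = np.random.randint(0, n_regions, size=(1000, seq_len + 1))
X, y = paths[:, :-1], paths[:, -1]                 # movement history -> next region

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),
    tf.keras.layers.Embedding(n_regions, 32),      # learn a vector per region ID
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(n_regions, activation="softmax"),  # next-region distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
```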

Forecasting the Wholesale Price of Farmed Olive Flounder Paralichthys olivaceus Using LSTM and GRU Models (LSTM (Long-short Term Memory)과 GRU (Gated Recurrent Units) 모델을 활용한 양식산 넙치 도매가격 예측 연구)

  • Ga-hyun Lee;Do-Hoon Kim
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.56 no.2
    • /
    • pp.243-252
    • /
    • 2023
  • Fluctuations in the prices of aquaculture products have recently intensified, and wholesale price fluctuations in particular are adversely affecting consumers. There is therefore an emerging need for research on forecasting the wholesale prices of aquaculture products. The present study forecasted the wholesale price of olive flounder Paralichthys olivaceus, a representative farmed fish species in Korea, by constructing multivariate long short-term memory (LSTM) and gated recurrent unit (GRU) models. These deep learning models have recently proven effective for forecasting in various fields. A total of 191 monthly observations of 17 variables were used to train and test the models. The results showed that the mean absolute percentage errors of the LSTM and GRU models were 2.19% and 2.68%, respectively.
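
Under assumed shapes (191 monthly rows, 17 variables, a 12-month input window), comparable LSTM and GRU models evaluated with MAPE can be set up as in the sketch below; the real dataset, window length, and split are not from the paper.

```python
import numpy as np
import tensorflow as tf

data = np.random.rand(191, 17).astype("float32")   # placeholder for the real dataset
target = data[:, 0]                                # assume column 0 is the wholesale price
window = 12

X = np.array([data[i:i + window] for i in range(len(data) - window)])
y = target[window:]

def build(cell):
    # Same architecture for both recurrent cells to keep the comparison fair.
    m = tf.keras.Sequential([
        tf.keras.Input(shape=(window, 17)),
        cell(32),
        tf.keras.layers.Dense(1),
    ])
    m.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])
    return m

lstm_model = build(tf.keras.layers.LSTM)
gru_model = build(tf.keras.layers.GRU)
for m in (lstm_model, gru_model):
    m.fit(X[:150], y[:150], validation_data=(X[150:], y[150:]),
          epochs=30, verbose=0)
```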

Cross-Domain Text Sentiment Classification Method Based on the CNN-BiLSTM-TE Model

  • Zeng, Yuyang;Zhang, Ruirui;Yang, Liang;Song, Sujuan
    • Journal of Information Processing Systems
    • /
    • v.17 no.4
    • /
    • pp.818-833
    • /
    • 2021
  • To address the low precision, insufficient feature extraction, and poor contextual ability of existing text sentiment analysis methods, a mixed CNN-BiLSTM-TE (convolutional neural network, bidirectional long short-term memory, and topic extraction) model is proposed. First, Chinese text data were converted into vectors through transfer learning with Word2Vec. Second, local features were extracted by the CNN. Then, contextual information was extracted by the BiLSTM network, and the emotional tendency was obtained using softmax. Finally, topics were extracted using term frequency-inverse document frequency and K-means clustering. Compared with the CNN, BiLSTM, and gated recurrent unit (GRU) models, the CNN-BiLSTM-TE model's F1-score was higher by 0.0147, 0.006, and 0.0052, respectively; compared with the CNN-LSTM, LSTM-CNN, and BiLSTM-CNN models, it was higher by 0.0071, 0.0038, and 0.0049, respectively. The experimental results show that the CNN-BiLSTM-TE model can effectively improve various indicators in application. Lastly, scalability was verified on a takeaway (food delivery) dataset, which has great value in practical applications.
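
A condensed sketch of the classification branch only (Embedding, CNN for local features, BiLSTM for context, softmax for the sentiment) is shown below; the paper's Word2Vec transfer learning and the TF-IDF/K-means topic-extraction step are omitted, and the vocabulary size, sequence length, and layer widths are assumptions.

```python
import tensorflow as tf

vocab, seq_len, n_classes = 20000, 100, 2

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),
    tf.keras.layers.Embedding(vocab, 128),                    # would be Word2Vec-initialized
    tf.keras.layers.Conv1D(64, 3, activation="relu"),         # local n-gram features
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # contextual features
    tf.keras.layers.Dense(n_classes, activation="softmax"),   # emotional tendency
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```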

Long-term rainfall-runoff simulation using an LSTM-MLP artificial neural network ensemble (LSTM - MLP 인공신경망 앙상블을 이용한 장기 강우유출모의)

  • An, Sungwook;Kang, Dongho;Sung, Janghyun;Kim, Byungsik
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.2
    • /
    • pp.127-137
    • /
    • 2024
  • Physical models, which are often used for water resource management, are difficult to build and operate because of their input data requirements, and they may involve the subjective judgment of users. In recent years, research using data-driven models such as machine learning has been actively conducted in the water resources field to compensate for these problems. In this study, artificial neural networks were used to simulate long-term rainfall runoff in the Osipcheon watershed in Samcheok-si, Gangwon-do. Three input data groups (meteorological observations; daily precipitation and potential evapotranspiration; and daily precipitation - potential evapotranspiration) were constructed from meteorological data, and the results of training an LSTM (Long Short-Term Memory) artificial neural network model were compared and analyzed. The performance of LSTM-Model 1, which used only meteorological observations, was the highest, and six LSTM-MLP ensemble models combining LSTM with MLP artificial neural networks were built to simulate long-term runoff in the Osipcheon watershed. A comparison between the LSTM and LSTM-MLP models showed generally similar results, but the MAE, MSE, and RMSE of LSTM-MLP were lower than those of LSTM, especially in the low-flow range. Because the LSTM-MLP results show an improvement in the low-flow range, it is expected that, in addition to the LSTM-MLP model, various ensemble models such as CNN-based ones can be used in place of physical models and to derive flow duration curves in large basins that take a long time to run and in ungauged basins that lack input data.
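
The exact coupling of the LSTM and MLP in the paper is not specified here, so the sketch below shows one plausible reading: a stacking-style ensemble in which an MLP takes the LSTM's runoff estimate together with the same-day meteorological inputs and refines it (e.g., in the low-flow range). Shapes, variable names, and the random placeholder data are assumptions.

```python
import numpy as np
import tensorflow as tf

n_feat, window = 5, 30                                # daily met. inputs, 30-day window
X_seq = np.random.rand(2000, window, n_feat).astype("float32")
X_now = X_seq[:, -1, :]                               # same-day features for the MLP
q_obs = np.random.rand(2000).astype("float32")        # observed runoff (placeholder)

# Stage 1: sequence model for runoff
lstm = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_feat)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X_seq, q_obs, epochs=10, verbose=0)
q_lstm = lstm.predict(X_seq, verbose=0)               # first-stage estimate

# Stage 2: MLP refines the LSTM estimate using current inputs
mlp = tf.keras.Sequential([
    tf.keras.Input(shape=(n_feat + 1,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
mlp.compile(optimizer="adam", loss="mse")
mlp.fit(np.hstack([X_now, q_lstm]), q_obs, epochs=10, verbose=0)
```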

Development of a Prediction Model of Solar Irradiances Using LSTM for Use in Building Predictive Control (건물 예측 제어용 LSTM 기반 일사 예측 모델)

  • Jeon, Byung-Ki;Lee, Kyung-Ho;Kim, Eui-Jong
    • Journal of the Korean Solar Energy Society
    • /
    • v.39 no.5
    • /
    • pp.41-52
    • /
    • 2019
  • The purpose of this work is to develop a simple solar irradiance prediction model using a deep learning method, the LSTM (long short-term memory) network. Unlike existing prediction models, the proposed one uses only the cloudiness among the information forecast by the national meteorological forecast center. Future cloudiness is generally announced in four categories at three-hour intervals. In this work, a daily irradiance pattern is used as an input vector to the LSTM together with that cloudiness information. The proposed model showed an error of 5% for learning and 30% for prediction. This level of error has only a limited influence on load prediction in typical building cases.
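
Schematically, and with assumed dimensions, the input pairing described above can look like the sketch below: the previous day's hourly irradiance pattern is combined with the forecast cloudiness category (repeated per hour), and the LSTM predicts the next day's hourly irradiance. The hourly resolution and network size are assumptions, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

hours = 24
prev_irr = np.random.rand(500, hours, 1).astype("float32")               # previous-day pattern
cloud = np.random.randint(1, 5, size=(500, hours, 1)).astype("float32")  # forecast category 1-4
X = np.concatenate([prev_irr, cloud], axis=-1)                           # (samples, 24, 2)
y = np.random.rand(500, hours).astype("float32")                         # next-day irradiance

model = tf.keras.Sequential([
    tf.keras.Input(shape=(hours, 2)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(hours),      # 24 hourly irradiance values
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)
```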

Development of Deep Learning Models for Multi-class Sentiment Analysis (딥러닝 기반의 다범주 감성분석 모델 개발)

  • Syaekhoni, M. Alex;Seo, Sang Hyun;Kwon, Young S.
    • Journal of Information Technology Services
    • /
    • v.16 no.4
    • /
    • pp.149-160
    • /
    • 2017
  • Sentiment analysis is the process of determining whether a piece of document, text, or conversation expresses a positive, negative, neutral, or other emotion. Sentiment analysis has been applied in several real-world applications, such as chatbots. In the last five years, the practical use of chatbots has become prevalent in many fields of industry. In chatbot applications, sentiment analysis must be performed in advance to recognize the user's emotion and understand the speaker's intent, and a specific emotion conveys more than whether a sentence is positive or negative. In this context, we propose deep learning models for multi-class sentiment analysis that identify a speaker's emotion, categorized as joy, fear, guilt, sadness, shame, disgust, or anger. We develop convolutional neural network (CNN), long short-term memory (LSTM), and multi-layer neural network models as deep neural network models for detecting emotion in a sentence. In addition, a word embedding process was applied in our research. In our experiments, we found that the long short-term memory (LSTM) model performs best compared with the convolutional neural network and multi-layer neural network models. Moreover, we show the practical applicability of the deep learning models to sentiment analysis for chatbots.
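
A bare-bones sketch of the LSTM variant (the best performer in the paper) for 7-class emotion classification is shown below; the vocabulary size, sequence length, and embedding dimension are assumptions, and the embedding layer would normally be initialized from pre-trained word vectors.

```python
import tensorflow as tf

vocab, seq_len = 10000, 50
emotions = ["joy", "fear", "guilt", "sadness", "shame", "disgust", "anger"]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),
    tf.keras.layers.Embedding(vocab, 100),   # word embedding step
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(len(emotions), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```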

Analysis of streamflow prediction performance by various deep learning schemes

  • Le, Xuan-Hien;Lee, Giha
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.131-131
    • /
    • 2021
  • Deep learning models, especially those based on long short-term memory (LSTM), have recently demonstrated their superiority in addressing time series problems. This study comprehensively evaluates the performance of supervised deep learning models for streamflow prediction. Six deep learning models were of interest: standard LSTM, standard gated recurrent unit (GRU), stacked LSTM, bidirectional LSTM (BiLSTM), feed-forward neural network (FFNN), and convolutional neural network (CNN). The Red River system, one of the largest river basins in Vietnam, was adopted as a case study. The models were designed to forecast the flowrate one and two days ahead at Son Tay hydrological station on the Red River, using series of observed flowrates at seven hydrological stations on the three major branches of the Red River system (the Thao, Da, and Lo Rivers) as input data for training, validation, and testing. The comparison indicated that the four LSTM-based models exhibit significantly better performance and stability than the FFNN and CNN models. Moreover, the LSTM-based models can produce impressive predictions even in the presence of upstream reservoirs and dams. For the stacked LSTM and BiLSTM models, the additional complexity is not accompanied by a performance improvement, since their performance is not higher than that of the two standard models (LSTM and GRU). We therefore conclude that, for hydrological forecasting problems, simple architectures such as LSTM and GRU with one hidden layer are sufficient to produce highly reliable forecasts while minimizing computation time, given the sequential nature of the data.
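
A sketch of how such a comparison can be set up is given below: the six architectures named in the abstract are built on a common input window of upstream-station flowrates and compiled identically, ready to be trained on the same splits. The window length, number of stations, and layer sizes are assumptions, not the study's settings.

```python
import tensorflow as tf

window, n_stations = 7, 7        # e.g. 7-day window of 7 upstream stations
inp_shape = (window, n_stations)
L = tf.keras.layers

def build(name):
    inp = tf.keras.Input(shape=inp_shape)
    if name == "LSTM":
        x = L.LSTM(64)(inp)
    elif name == "GRU":
        x = L.GRU(64)(inp)
    elif name == "StackedLSTM":
        x = L.LSTM(64, return_sequences=True)(inp)
        x = L.LSTM(64)(x)
    elif name == "BiLSTM":
        x = L.Bidirectional(L.LSTM(64))(inp)
    elif name == "FFNN":
        x = L.Dense(64, activation="relu")(L.Flatten()(inp))
    else:  # CNN
        x = L.Conv1D(64, 3, activation="relu")(inp)
        x = L.GlobalMaxPooling1D()(x)
    out = L.Dense(1)(x)           # one-day-ahead flowrate at the target station
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

models = {n: build(n) for n in
          ["LSTM", "GRU", "StackedLSTM", "BiLSTM", "FFNN", "CNN"]}
```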
