• Title/Summary/Keyword: LSTM algorithm


Fundamental Study on Algorithm Development for Prediction of Smoke Spread Distance Based on Deep Learning (딥러닝 기반의 연기 확산거리 예측을 위한 알고리즘 개발 기초연구)

  • Kim, Byeol;Hwang, Kwang-Il
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.27 no.1
    • /
    • pp.22-28
    • /
    • 2021
  • This is a basic study on the development of deep learning-based algorithms that detect smoke before the smoke detector operates in the event of a ship fire, analyze and utilize the detected data, and support fire suppression and evacuation activities by predicting the spread of smoke before it reaches remote areas. The proposed algorithms were reviewed in accordance with the following procedures. As a first step, smoke images obtained through fire simulation were applied to the YOLO (You Only Look Once) model, a deep learning-based object detection algorithm. The mean average precision (mAP) of the trained YOLO model was measured to be 98.71%, and smoke was detected at a processing speed of 9 frames per second (FPS). The second step was to estimate the spread of smoke using the coordinates of the bounding box output by YOLO, from which the smoke geometry was extracted. This smoke geometry was then applied to the time series prediction algorithm, long short-term memory (LSTM). Smoke spread data obtained from the bounding box coordinates between the estimated fire occurrence and 30 s were entered into the LSTM model to predict smoke spread from 31 s to 90 s in the smoke image of a fast fire obtained from fire simulation. The root mean square error between the estimated spread of smoke and its predicted value was 2.74.
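The second step described above, feeding a spread-distance series into an LSTM, depends on slicing the series into input/target windows and scoring predictions with an RMSE-style error. A minimal sketch of that windowing, with hypothetical per-second spread values rather than the paper's simulation data:

```python
import numpy as np

def make_windows(series, in_len, out_len):
    """Slice a 1-D series into (input window, target window) pairs,
    as done when feeding smoke-spread distances to an LSTM."""
    X, y = [], []
    for i in range(len(series) - in_len - out_len + 1):
        X.append(series[i:i + in_len])
        y.append(series[i + in_len:i + in_len + out_len])
    return np.array(X), np.array(y)

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

# toy smoke-spread distances sampled once per second
spread = np.linspace(0.0, 9.0, 10)
X, y = make_windows(spread, in_len=3, out_len=2)
print(X.shape, y.shape)  # (6, 3) (6, 2)
```

Each row of `X` is then one LSTM input sequence and the matching row of `y` its multi-step target.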

Prediction of groundwater level in the middle mountainous area of Pyoseon Watershed in Jeju Island using deep learning algorithm, LSTM (딥러닝 알고리즘 LSTM을 활용한 제주도 표선유역 중산간지역의 지하수위 예측)

  • Shin, Mun-Ju;Moon, Soo-Hyoung;Moon, Duk Chul
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.291-291
    • /
    • 2020
  • Owing to the geology of a volcanic island, where precipitation readily infiltrates the ground surface, Jeju Island has poor conditions for developing and using surface water and depends on groundwater for most of its water supply. Jeju has therefore devoted long-standing policy and research effort to groundwater conservation and management. However, increased precipitation variability due to recent climate change may also increase the variability of groundwater levels, so groundwater-level prediction and withdrawal management are needed to prepare for rapid declines. Given Jeju's absolute dependence on groundwater, real-time prediction of groundwater levels is required for managing groundwater withdrawal. Existing prediction methods for Jeju groundwater levels cover insufficiently long horizons, and their performance degrades as the horizon lengthens. To address these shortcomings, this study used the deep learning algorithm Long Short-Term Memory (LSTM) to predict and analyze the groundwater level at one observation well in the middle mountainous area of the Pyoseon watershed in southeastern Jeju Island. The LSTM algorithm in the R-based Keras package was used. Inputs were daily precipitation from the nearby Seongpanak and Gyorae rain gauges, groundwater withdrawal from nearby pumping wells, and the groundwater level of the study well, covering February 11, 2001 to October 31, 2019. Thirteen years of calibration data and three years of validation data from 2001 onward were used to calibrate the parameters and prevent overfitting, and a further three years of data were used to evaluate the predictive performance of the LSTM algorithm. Target lead times were 1, 10, 20, and 30 days, and the Nash-Sutcliffe Efficiency (NSE) was used to evaluate the simulations for the calibration, validation, and prediction periods. For 1-day prediction, the NSE values for the calibration, validation, and prediction periods were 0.997, 0.997, and 0.993, respectively; for 10-day prediction they were 0.993, 0.912, and 0.930. For 20-day prediction the NSE values were 0.809, 0.781, and 0.809, and for 30-day prediction they were 0.677, 0.622, and 0.633. This means the LSTM algorithm simulates the observed groundwater-level time series very well up to a 10-day horizon and adequately at a 20-day horizon. The LSTM algorithm can therefore provide a stable two- to three-week groundwater-level forecast for the study site, and real-time LSTM prediction could be used to manage groundwater withdrawal.
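Every horizon in this abstract is scored with the Nash-Sutcliffe Efficiency (NSE). A minimal sketch of the metric, with toy values rather than the study's groundwater data:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 minus the ratio of the squared simulation
    error to the variance of the observations around their mean.
    1.0 is a perfect fit; 0.0 means no better than predicting the mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

print(nse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0 (perfect simulation)
print(nse([1.0, 2.0, 3.0], [2.0, 2.0, 2.0]))  # 0.0 (mean-only simulation)
```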


Prediction of pollution loads in the Geum River upstream using the recurrent neural network algorithm

  • Lim, Heesung;An, Hyunuk;Kim, Haedo;Lee, Jeaju
    • Korean Journal of Agricultural Science
    • /
    • v.46 no.1
    • /
    • pp.67-78
    • /
    • 2019
  • The purpose of this study was to predict water quality using an RNN (recurrent neural network) and LSTM (long short-term memory). These are advanced machine learning algorithms better suited to time-series learning than plain artificial neural networks, but they had not previously been investigated for water quality prediction. Three water quality indexes, BOD (biochemical oxygen demand), COD (chemical oxygen demand), and SS (suspended solids), were predicted by the RNN and LSTM. TensorFlow, an open-source library developed by Google, was used to implement the machine learning algorithms. The Okcheon observation point in the Geum River basin in the Republic of Korea was selected as the target point for water quality prediction. Ten years of daily observed meteorological data (daily temperature and daily wind speed) and hydrological data (water level and flow discharge) were used as inputs, and irregularly observed water quality data (BOD, COD, and SS) were used as the learning targets. The irregularly observed water quality data were converted into daily data with the linear interpolation method. The water quality one day ahead was predicted by the machine learning algorithms, and it was found that high-accuracy prediction is possible, compared with existing physical modeling results, even for the highly nonlinear BOD, COD, and SS. The sequence length and iteration count were varied to compare the performances of the algorithms.
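The preprocessing step described above, converting irregular water quality observations to a daily series by linear interpolation, can be sketched with `numpy.interp`; the observation days and BOD values below are hypothetical, not the study's data:

```python
import numpy as np

# irregularly observed BOD values: (day index, concentration), hypothetical
obs_days = np.array([0, 7, 15, 30])
obs_bod  = np.array([2.0, 3.0, 2.5, 4.0])

# convert to a daily series by linear interpolation, as the abstract describes
daily = np.arange(0, 31)
bod_daily = np.interp(daily, obs_days, obs_bod)
print(bod_daily[7])  # 3.0 (values on observed days are preserved)
```

The resulting daily series can then be aligned with the daily meteorological and hydrological inputs.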

A Study on Energy Efficiency Plan based on Artificial Intelligence: Focusing on Mixed Research Methodology (인공지능 기반 에너지 효율화 방안 연구: 혼합적 연구방법론 중심으로)

  • Lee, Moonbum;Ma, Taeyoung
    • Journal of Information Technology Services
    • /
    • v.21 no.5
    • /
    • pp.81-94
    • /
    • 2022
  • This study sets the research goal of reducing the energy consumption that concerns the 'H' University Industry-University Cooperation Foundation and its resident companies, combining policy research with data analysis, and presents an AI-based solution to the problem. The algorithm showing the greatest reliability in model performance was selected for the analysis, and per-minute and per-hour power consumption trend curves were obtained through predictive analysis with that algorithm, identifying points of excessive energy consumption. Through additional sub-sensor analysis, these anomalies were located more precisely at the facility level rather than the building level. On this basis, the paper presents a system-building model for real-time monitoring of campus power usage and expands the data center and model for implementation. Furthermore, by demonstrating how the work can be extended through research on integrating mobile applications and IoT hardware, this study provides school authorities and resident companies with the concrete solutions needed to continuously solve data-driven field problems.

Developing an Artificial Intelligence Algorithm to Predict the Timing of Dialysis Vascular Surgery (투석혈관 수술시기 예측을 위한 인공지능 알고리즘 개발)

  • Kim Dohyoung;Kim Hyunsuk;Lee Sunpyo;Oh Injong;Park Seungbum
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.19 no.4
    • /
    • pp.97-115
    • /
    • 2023
  • In South Korea, chronic kidney disease (CKD) impacts around 4.6 million adults, leading to a high reliance on hemodialysis. For effective dialysis, vascular access is crucial, with decisions about vascular surgeries often made during dialysis sessions. Anticipating these needs could improve dialysis quality and patient comfort. This study investigates the use of Artificial Intelligence (AI) to predict the timing of surgeries for dialysis vessels, an area not extensively researched. We developed an AI algorithm using predictive maintenance methods, transitioning from machine learning to a more advanced deep learning approach with Long Short-Term Memory (LSTM) models. The algorithm processes variables such as venous pressure, blood flow, and patient age, demonstrating high effectiveness with metrics exceeding 0.91. By shortening the data collection intervals, a more refined model can be obtained. Implementing this AI in clinical practice could notably enhance patient experience and the quality of medical services in dialysis, marking a significant advancement in the treatment of CKD.
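Before an LSTM can consume session-level measurements like those above, they must be stacked into fixed-length sequences. A minimal sketch with hypothetical venous-pressure/blood-flow/age values (not data from the paper):

```python
import numpy as np

# hypothetical per-session measurements for one patient:
# columns are venous pressure, blood flow, and age at each dialysis session
sessions = np.array([
    [140.0, 250.0, 67.0],
    [150.0, 240.0, 67.0],
    [165.0, 220.0, 67.0],
    [180.0, 200.0, 67.0],
])

def to_sequences(data, timesteps):
    """Stack consecutive sessions into the (samples, timesteps, features)
    tensor shape an LSTM layer expects."""
    return np.stack([data[i:i + timesteps] for i in range(len(data) - timesteps + 1)])

X = to_sequences(sessions, timesteps=3)
print(X.shape)  # (2, 3, 3)
```

Shortening the collection interval, as the abstract suggests, simply yields more rows per patient and hence more training sequences.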

EEG Dimensional Reduction with Stack AutoEncoder for Emotional Recognition using LSTM/RNN (LSTM/RNN을 사용한 감정인식을 위한 스택 오토 인코더로 EEG 차원 감소)

  • Aliyu, Ibrahim;Lim, Chang-Gyoon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.15 no.4
    • /
    • pp.717-724
    • /
    • 2020
  • Due to the important role emotion plays in human interaction, affective computing seeks to understand and regulate emotion through human-aware artificial intelligence. A better understanding of emotion would improve the management of emotion-related conditions such as depression, autism, attention deficit hyperactivity disorder, and game addiction. Various studies on emotion recognition have been conducted to address these problems. In applying machine learning to emotion recognition, efforts to reduce the complexity of the algorithm and improve accuracy are required. In this paper, we investigate electroencephalogram (EEG) feature reduction using a Stack AutoEncoder (SAE) and classification using Long Short-Term Memory / Recurrent Neural Networks (LSTM/RNN). The proposed method reduced the complexity of the model and significantly enhanced the performance of the classifiers.
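The dimensionality-reduction half of this pipeline can be illustrated with a minimal linear autoencoder trained by plain gradient descent; the feature sizes, learning rate, and random data are hypothetical, and a real SAE would stack nonlinear layers:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "EEG features": 200 samples, 16 channels, compressed to 4 latent units
X = rng.standard_normal((200, 16))
W1 = rng.standard_normal((16, 4)) * 0.1   # encoder weights
W2 = rng.standard_normal((4, 16)) * 0.1   # decoder weights

def loss(X, W1, W2):
    """Mean squared reconstruction error of the autoencoder."""
    E = X @ W1 @ W2 - X
    return float((E ** 2).mean())

initial = loss(X, W1, W2)
lr = 0.01
for _ in range(200):            # gradient descent on the MSE
    H = X @ W1                  # latent codes
    E = H @ W2 - X              # reconstruction error
    g2 = H.T @ E * (2 / E.size)
    g1 = X.T @ (E @ W2.T) * (2 / E.size)
    W1 -= lr * g1
    W2 -= lr * g2

print(loss(X, W1, W2) < initial)  # True: reconstruction improves
codes = X @ W1  # reduced 4-D features, the kind fed to the LSTM/RNN classifier
```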

An Empirical Analysis of Sino-Russia Foreign Trade Turnover Time Series: Based on EMD-LSTM Model

  • GUO, Jian;WU, Kai Kun;YE, Lyu;CHENG, Shi Chao;LIU, Wen Jing;YANG, Jing Ying
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.9 no.10
    • /
    • pp.159-168
    • /
    • 2022
  • The time series of foreign trade turnover is complex and variable and contains both linear and nonlinear information. This paper proposes preprocessing the dataset with the EMD algorithm and combining the linear prediction advantage of the SARIMA model with the nonlinear prediction advantage of the EMD-LSTM model to construct a SARIMA-EMD-LSTM hybrid model by the weight-assignment method. The forecast performance of the single models is compared with that of the hybrid models using the MAPE and RMSE metrics, and it is examined whether the weight-assignment approach benefits the hybrid models. The results show that the SARIMA model can capture the fluctuation pattern of the time series, but it cannot effectively predict sudden drops in foreign trade turnover caused by special events and has the lowest accuracy in long-term forecasting. The EMD-LSTM model successfully resolves the hysteresis phenomenon and has the highest forecast accuracy of all models, with a MAPE of 7.4304%; it can therefore be used effectively to forecast the post-epidemic Sino-Russia foreign trade turnover time series. The hybrid models cannot fully combine the linear advantage of SARIMA with the nonlinear advantage of the LSTM, so weight assignment is not the best method for constructing hybrid models.
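The weight-assignment hybrid and the MAPE metric used above can be sketched as follows; the forecasts, observations, and weight are hypothetical, not the paper's values:

```python
import numpy as np

def mape(obs, pred):
    """Mean absolute percentage error, in percent."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean(np.abs((obs - pred) / obs)) * 100.0)

def hybrid(pred_sarima, pred_lstm, w):
    """Weight-assignment hybrid: weight w on the SARIMA forecast,
    1 - w on the EMD-LSTM forecast."""
    return w * np.asarray(pred_sarima) + (1.0 - w) * np.asarray(pred_lstm)

obs = np.array([100.0, 110.0, 120.0])
combined = hybrid([90.0, 115.0, 118.0], [105.0, 108.0, 123.0], w=0.5)
print(combined)  # [ 97.5 111.5 120.5]
```

Comparing `mape(obs, combined)` against the MAPE of each single model is exactly the evaluation the abstract reports.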

Analysis of Accuracy and Loss Performance According to Hyperparameter in RNN Model (RNN모델에서 하이퍼파라미터 변화에 따른 정확도와 손실 성능 분석)

  • Kim, Joon-Yong;Park, Koo-Rack
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.7
    • /
    • pp.31-38
    • /
    • 2021
  • In this paper, in order to optimize the RNN model used for sentiment analysis, the behavior of each model was studied by observing trends in loss and accuracy under hyperparameter tuning. After configuring an embedding layer and an LSTM hidden layer, which are best suited to processing sequential data, the loss and accuracy of each model were measured while tuning the LSTM units, batch size, and embedding size. As a result of the measurements, the loss was 41.9% and the accuracy was 11.4%, and the optimized model showed a consistently stable curve, confirming that hyperparameter tuning has a profound effect on the model. In addition, among the three hyperparameters, the choice of embedding size was confirmed to have the greatest influence on the model. Future work will continue this research, including work on algorithms that allow the model to find its optimal hyperparameters directly.
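The tuning procedure above, sweeping LSTM units, batch size, and embedding size and keeping the configuration with the lowest loss, amounts to a grid search. A sketch in which the grid values are hypothetical and `evaluate` is a placeholder for an actual training run:

```python
from itertools import product

# hypothetical hyperparameter grid mirroring the three factors the paper tunes
units = [32, 64, 128]
batch_sizes = [16, 32]
embedding_sizes = [64, 128, 256]

def evaluate(u, b, e):
    """Stand-in for training the LSTM model and returning (loss, accuracy);
    a real run would fit and validate the model here."""
    return 1.0 / (u + e), (u + e) / (u + e + b)  # placeholder scores

results = [((u, b, e), evaluate(u, b, e))
           for u, b, e in product(units, batch_sizes, embedding_sizes)]
best = min(results, key=lambda r: r[1][0])  # configuration with the lowest loss
print(len(results), best[0])
```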

Spectrogram analysis of active power of appliances and LSTM-based Energy Disaggregation (다수 가전기기 유효전력의 스팩토그램 분석 및 LSTM기반의 전력 분해 알고리즘)

  • Kim, Imgyu;Kim, Hyuncheol;Kim, Seung Yun;Shin, Sangyong
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.2
    • /
    • pp.21-28
    • /
    • 2021
  • In this study, we propose a deep learning-based NILM technique using measured power data for five kinds of home appliances and verify its effectiveness. For about three weeks, the active power of a central power-measuring device and of five home appliances (refrigerator, induction cooktop, TV, washing machine, air cleaner) was measured individually. The preprocessing of the measured data is introduced, and the characteristics of each appliance were analyzed through spectrogram analysis and organized into a training data set. All power data measured by the central device and the five appliances were mapped as time series, and training was performed using an LSTM neural network, which excels at time-series prediction. An algorithm is proposed that can disaggregate the five appliances' energy using only the power data of the central measuring device.
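A magnitude spectrogram of an active-power trace, the kind of per-appliance analysis described above, can be computed with a plain windowed FFT; the signal, sampling rate, and window sizes below are hypothetical:

```python
import numpy as np

def spectrogram(signal, win, hop):
    """Magnitude spectrogram via windowed FFT: a minimal stand-in for the
    spectrogram analysis of per-appliance active power."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * np.hanning(win)  # taper each frame
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.array(frames)  # shape: (n_frames, win // 2 + 1)

# toy "active power" trace: a DC level plus a 5 Hz ripple sampled at 100 Hz
t = np.arange(0, 2, 0.01)
power = 50.0 + 10.0 * np.sin(2 * np.pi * 5 * t)
spec = spectrogram(power, win=64, hop=32)
print(spec.shape)  # (5, 33)
```

Frequency signatures extracted this way are what distinguish, say, a refrigerator's compressor cycle from an induction cooktop's bursts.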

Machine Learning Assisted Information Search in Streaming Video (기계학습을 이용한 동영상 서비스의 검색 편의성 향상)

  • Lim, Yeon-sup
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.3
    • /
    • pp.361-367
    • /
    • 2021
  • Information search in video streaming services such as YouTube is replacing traditional information search services. To find specific information in such a video, users must repeatedly navigate to several points in the video, wasting time and network traffic. In this paper, we propose a method to assist users in searching for information in a video by using DBSCAN clustering and an LSTM. Our LSTM model is trained on a dataset consisting of user search sequences and their final target points, categorized by the DBSCAN clustering algorithm. The proposed method then uses the trained model to suggest an expected category for the user's desired target point, based on a partial search sequence collected at the beginning of the search. Our experiment results show that the proposed method finds user destination points with 98% accuracy and an average time difference of 7 s.
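The DBSCAN step above, grouping final target points into categories, can be sketched in one dimension (e.g., seek positions in seconds); the `eps`/`min_pts` values and timestamps are hypothetical:

```python
def dbscan_1d(points, eps, min_pts):
    """Minimal DBSCAN for 1-D values (e.g. final seek positions in a video).
    Returns one label per point; -1 marks noise."""
    labels = [None] * len(points)
    cluster = -1
    for i, p in enumerate(points):
        if labels[i] is not None:
            continue
        neighbors = [j for j, q in enumerate(points) if abs(q - p) <= eps]
        if len(neighbors) < min_pts:
            labels[i] = -1          # not a core point (may be relabeled later)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(neighbors)     # expand the cluster from the core point
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # absorb former noise as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = [k for k, q in enumerate(points) if abs(q - points[j]) <= eps]
            if len(nbrs) >= min_pts:
                seeds.extend(nbrs)   # j is also a core point; keep expanding
    return labels

# target timestamps (seconds) from past searches: two dense groups, one outlier
ts = [10, 11, 12, 50, 51, 52, 200]
print(dbscan_1d(ts, eps=2, min_pts=2))  # [0, 0, 0, 1, 1, 1, -1]
```

The resulting cluster labels are the categories the LSTM is trained to predict from a partial search sequence.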