• Title/Summary/Keyword: Long-term memory

794 search results

Prediction of Groundwater Level in Jeju Island Using Deep Learning Algorithm MLP and LSTM (딥러닝 알고리즘 MLP 및 LSTM을 활용한 제주도 지하수위 예측)

  • Kang, Dayoung;Byun, Kyuhyun
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.206-206 / 2022
  • Jeju Island is a volcanic island with highly permeable aquifers, and groundwater is its most important water resource. As groundwater levels on Jeju Island have been declining due to anthropogenic factors and climate change, accurate long-term prediction of groundwater levels is essential for proper groundwater management. Although various environmental factors are known to affect groundwater recharge and levels, little research has investigated how Jeju Island's characteristic meteorological factors influence its groundwater system. In groundwater level prediction, approaches using physical models are known to be limited in delivering fast, accurate predictions of levels that change under diverse conditions. In this study, we therefore developed Standardized Groundwater level Index (SGI) prediction models based on the artificial neural network algorithms multilayer perceptron (MLP) and Long Short-Term Memory (LSTM), using daily water level records from groundwater observation wells in Aewol-eup and Namwon-eup on Jeju Island together with various meteorological data, including precipitation, temperature, snowfall, wind speed, and VPD. The SGI predictions of the MLP and LSTM models were quite similar, and both models showed high coefficients of determination (R2): 0.98 for Aewol-eup and 0.96 for Namwon-eup. The groundwater level prediction models developed in this study should enable efficient operation and precise groundwater level prediction, and help establish sustainable groundwater resource management plans in response to climate change.

  • PDF
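The standardized groundwater index (SGI) used above is commonly computed as a nonparametric normal-scores transform of the level record. A minimal sketch under that rank-based definition (the paper may use a different variant):

```python
from statistics import NormalDist

def sgi(levels):
    """Standardized Groundwater level Index (SGI) via a normal-scores
    transform: each observation's rank-based percentile is mapped through
    the inverse standard-normal CDF, giving an index with roughly zero
    mean and unit variance."""
    n = len(levels)
    order = sorted(range(n), key=lambda i: levels[i])  # indices, lowest first
    ranks = [0] * n
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    nd = NormalDist()
    # Weibull plotting position r / (n + 1) keeps percentiles inside (0, 1)
    return [nd.inv_cdf(r / (n + 1)) for r in ranks]

levels = [12.1, 12.4, 11.8, 12.9, 12.0, 11.5, 12.6]  # groundwater levels, m
index = sgi(levels)
```

The median observation maps to an SGI near zero; the lowest and highest levels map to the most negative and most positive index values.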

Future inflow projection based on Bayesian optimization for hyper-parameters (하이퍼매개변수 베이지안 최적화 기법을 적용한 미래 유입량 예측)

  • Tran, Trung Duc;Kim, Jongho
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.347-347 / 2022
  • With the recent rapid development of data science, various deep learning algorithms have been developed and applied in the water resources field. This study proposes a deep learning model, BO-LSTM, that combines an LSTM (Long Short-Term Memory) network with Bayesian optimization (BO) to project daily ensemble future dam inflows. The BO-LSTM hyperparameters and loss function are trained and optimized via Bayesian optimization, and the BO approach was able to optimize them quickly and with high accuracy (R = 0.92 and NSE = 0.85). The LSTM structures for predicting future dam inflow were divided into a Forecasting model and a Projection model, and the strengths and weaknesses of the two were analyzed. The results show that the data-processing stage is effective in improving training efficiency and reducing noise, and they confirm the influence of the LSTM structure on future projections. The study was applied to the Soyang River basin for the period 2020-2100. Overall, the CMIP6 data indicate a 10% to 50% increase in future inflow, similar in magnitude to the projected increase in future precipitation. Reliable inflow projections will assist policymakers and operators in reservoir operation, planning, and management.

  • PDF
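The abstract pairs an LSTM with Bayesian optimization of its hyperparameters. As a stand-in for full BO (which requires a surrogate model such as a Gaussian process), a minimal random search over a hypothetical LSTM hyperparameter space illustrates the role the optimizer plays; the parameter names, values, and synthetic objective below are illustrative, not taken from the paper:

```python
import random

# Hypothetical LSTM hyperparameter space; names and values are illustrative.
SPACE = {
    "hidden_units": [16, 32, 64, 128],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "lookback": [7, 14, 30],
}

def validation_loss(cfg):
    """Stand-in objective: in practice this would train the LSTM on the
    inflow series and return a validation loss. Here, a synthetic
    bowl-shaped surface with its optimum at (64, 1e-3, 14)."""
    return ((cfg["hidden_units"] - 64) ** 2 / 1e4
            + abs(cfg["learning_rate"] - 1e-3)
            + abs(cfg["lookback"] - 14) / 100)

def search(n_trials, seed=0):
    """Random search: sample configurations and keep the best. Bayesian
    optimization replaces the blind sampling with a surrogate model that
    proposes promising configurations, cutting the trials needed."""
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        loss = validation_loss(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

best, loss = search(50)
```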

LSTM model predictions of inflow considering climate change and climate variability (기후변화 및 기후변동성을 고려한 LSTM 모형 기반 유입량 예측)

  • Kwon, Jihwan;Kim, Jongho
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.348-348 / 2022
  • Because the future climate will have greater variability and greater uncertainty than the past, predicting future climate change requires considering not only the absolute magnitude of the change but also its degree of uncertainty. In this study, we analyzed the daily outputs of 18 GCMs (General Circulation Models) provided in the CMIP6 (Coupled Model Intercomparison Project Phase 6) database and generated 100 ensembles for each of three SSP (Shared Socioeconomic Pathway) scenarios and three future periods. The causes of uncertainty were classified into three categories, and the degree of uncertainty from each cause was reflected in the ensemble scenarios. For both the present and future periods, 100 ensembles of 20-year weather-variable time series were generated and used as inputs to an LSTM (Long Short-Term Memory) network to estimate dam inflow, reservoir level, and release. To improve the prediction of dam inflow and release, we present methods for selecting the types of input predictors, determining their lag times, reconstructing the input data, efficiently optimizing the hyperparameters, and setting the objective function. The future dam inflow and release information predicted in this study should be of real help in establishing strategies for various water resources problems such as floods and droughts.

  • PDF
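Feeding weather-variable time series to an LSTM, as both dam-inflow abstracts describe, requires reshaping each series into fixed-length input windows with a chosen lookback and lead time. A minimal sketch of that windowing step (variable names and sample values are illustrative):

```python
def make_windows(series, lookback, horizon=1):
    """Reshape a 1-D series into (input window, target) pairs for a
    sequence model: each sample is `lookback` consecutive values and the
    target is the value `horizon` steps ahead of the window."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t : t + lookback])
        y.append(series[t + lookback + horizon - 1])
    return X, y

flows = [5, 6, 7, 9, 12, 10, 8, 7]   # illustrative daily inflows
X, y = make_windows(flows, lookback=3)
# X[0] == [5, 6, 7] is paired with target y[0] == 9
```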

Comparative Analysis of Baseflow Separation using Conventional and Deep Learning Techniques

  • Yusuff, Kareem Kola;Shiksa, Bastola;Park, Kidoo;Jung, Younghun
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.149-149 / 2022
  • Accurate quantitative evaluation of the baseflow contribution to streamflow is imperative for addressing seasonal drought vulnerability, flood occurrence, and groundwater management concerns, and for efficient and sustainable water resources management in watersheds. Several baseflow separation algorithms using recursive filters, graphical methods, and tracer or chemical balance have been developed, but the resulting baseflow outputs show wide variations, making it hard to determine the best separation technique. Therefore, the current global shift toward implementation of artificial intelligence (AI) in water resources is employed here to compare the performance of deep learning models with conventional hydrograph separation techniques in quantifying the baseflow contribution to streamflow of the Piney River watershed, Tennessee, from 2001 to 2021. Streamflow values are obtained from USGS station 03602500 and modeled to generate values of the Baseflow Index (BFI) using the Web-based Hydrograph Analysis Tool (WHAT). Annual and seasonal baseflow outputs from the traditional separation techniques are compared with the results of Long Short-Term Memory (LSTM) and simple Gated Recurrent Unit (GRU) models. The GRU model gave optimal BFI values in all four seasons, with average NSE = 0.98, KGE = 0.97, and r = 0.89, and future baseflow volumes are predicted. AI offers an easier and more accurate approach to groundwater management and surface-runoff modeling for creating effective water policy frameworks for disaster management.

  • PDF
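One of the conventional recursive-filter techniques such studies compare against is the Lyne-Hollick digital filter. A single forward pass can be sketched as follows; the filter parameter 0.925 is a commonly used value and the sample series is illustrative, as the paper's exact configuration is not stated:

```python
def lyne_hollick(q, alpha=0.925):
    """One forward pass of the Lyne-Hollick recursive digital filter:
    quickflow is high-pass filtered from the streamflow series, and
    baseflow is the remainder, constrained to the range [0, streamflow]."""
    qf, prev = 0.0, q[0]
    baseflow = []
    for flow in q:
        qf = alpha * qf + (1 + alpha) / 2 * (flow - prev)
        qf = max(qf, 0.0)                 # quickflow cannot be negative
        baseflow.append(max(0.0, flow - qf))
        prev = flow
    return baseflow

streamflow = [10, 12, 30, 55, 40, 25, 18, 14, 12, 11]  # daily flows
base = lyne_hollick(streamflow)
bfi = sum(base) / sum(streamflow)         # Baseflow Index
```

Practical applications typically run the filter in multiple forward and backward passes; a single pass is shown here for clarity.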

Structural reliability analysis using temporal deep learning-based model and importance sampling

  • Nguyen, Truong-Thang;Dang, Viet-Hung
    • Structural Engineering and Mechanics / v.84 no.3 / pp.323-335 / 2022
  • The main idea of the framework is to seamlessly combine a reasonably accurate and fast surrogate model with an importance sampling strategy. Developing a surrogate model for predicting structures' dynamic responses is challenging because it involves high-dimensional inputs and outputs. For this purpose, a novel surrogate model is designed based on cutting-edge deep learning architectures specialized for capturing temporal relationships within time-series data, namely a long short-term memory layer and a Transformer layer. After being properly trained, the surrogate model can be utilized in place of the finite element method to evaluate structures' responses without requiring any specialized software. Importance sampling, in turn, is adopted to reduce the number of calculations required when computing the failure probability, by drawing more relevant samples near critical areas. Thanks to the portability of the trained surrogate model, it can be integrated with importance sampling in a straightforward fashion, forming an efficient framework called TTIS that offers a double advantage: fewer calculations are needed, and the computational time of each calculation is significantly reduced. The proposed approach's applicability and efficiency are demonstrated through three examples of increasing complexity, involving a 1D beam, a 2D frame, and a 3D building structure. The results show that, compared to conventional Monte Carlo simulation, the proposed method provides highly similar reliability results with a reduction of up to four orders of magnitude in time complexity.
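The importance sampling idea in such frameworks, drawing from a proposal density concentrated near the failure region and reweighting by the density ratio, can be sketched on a toy scalar limit state; the threshold and proposal below are illustrative, not the paper's structural models:

```python
import random
from statistics import NormalDist

def failure_prob_is(threshold=3.0, shift=3.0, n=20000, seed=1):
    """Importance sampling for a rare failure probability P(X > threshold)
    with X ~ N(0, 1): sample from a proposal N(shift, 1) centred near the
    failure region and reweight each sample by the density ratio p/q."""
    rng = random.Random(seed)
    p, q = NormalDist(0, 1), NormalDist(shift, 1)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1)
        if x > threshold:                  # indicator of failure
            total += p.pdf(x) / q.pdf(x)   # importance weight
    return total / n

est = failure_prob_is()
exact = 1 - NormalDist().cdf(3.0)          # reference value for comparison
```

Plain Monte Carlo would need millions of samples to resolve a probability this small; the shifted proposal lands most draws in the failure region, which is exactly the variance reduction the abstract relies on.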

Anomaly detection of smart metering system for power management with battery storage system/electric vehicle

  • Sangkeum Lee;Sarvar Hussain Nengroo;Hojun Jin;Yoonmee Doh;Chungho Lee;Taewook Heo;Dongsoo Har
    • ETRI Journal / v.45 no.4 / pp.650-665 / 2023
  • A novel smart metering technique capable of anomaly detection was proposed for a real-time home power management system. Smart meter data generated in real time were obtained from 900 households in single apartments. To detect outliers and missing values in the smart meter data, a deep learning model, an autoencoder consisting of a graph convolutional network and a bidirectional long short-term memory network, was applied to the smart metering technique. Power management based on the smart metering technique was executed by multi-objective optimization in the presence of a battery storage system and an electric vehicle. The results of power management employing the proposed smart metering technique indicate a reduction in electricity cost and in the amount of power supplied by the grid compared to the results of power management without anomaly detection.
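The core of reconstruction-based anomaly detection, flagging readings whose reconstruction error is large, can be sketched without the paper's graph-convolutional BiLSTM autoencoder; here a simple moving-average "reconstruction" stands in for the trained network, and the threshold rule and sample readings are illustrative:

```python
def flag_anomalies(readings, window=3, k=2.0):
    """Reconstruction-error anomaly flagging: a centred moving average
    (excluding the point itself) stands in for the autoencoder's
    reconstruction, and readings whose absolute error exceeds
    mean + k * std of all errors are flagged."""
    n = len(readings)
    errors = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neighbours = readings[lo:i] + readings[i + 1:hi]
        recon = sum(neighbours) / len(neighbours)
        errors.append(abs(recon - readings[i]))
    mu = sum(errors) / n
    sd = (sum((e - mu) ** 2 for e in errors) / n) ** 0.5
    return [e > mu + k * sd for e in errors]

power = [1.2, 1.3, 1.1, 1.2, 9.8, 1.3, 1.2, 1.1]  # kW readings, one spike
flags = flag_anomalies(power)
```

The learned autoencoder replaces the moving average with a reconstruction that also captures inter-household structure, but the flagging step is the same error-thresholding idea.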

Indoor Environment Drone Detection through DBSCAN and Deep Learning

  • Ha Tran Thi;Hien Pham The;Yun-Seok Mun;Ic-Pyo Hong
    • Journal of IKEEE / v.27 no.4 / pp.439-449 / 2023
  • In an era marked by the increasing use of drones and the growing demand for indoor surveillance, developing a robust application for detecting and tracking both drones and humans within indoor spaces becomes imperative. This study presents an application that uses FMCW radar to detect human and drone motion from point clouds. At the outset, the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm is utilized to categorize the point cloud into distinct groups, each representing an object present in the tracking area. Notably, this algorithm demonstrates remarkable efficiency, particularly in clustering drone point clouds, achieving an accuracy of up to 92.8%. Subsequently, the clusters are classified as either humans or drones using a deep learning model. Three models, a Deep Neural Network (DNN), a Residual Network (ResNet), and Long Short-Term Memory (LSTM), are applied, and the outcomes reveal that the ResNet model achieves the highest accuracy: 98.62% for identifying drone clusters and 96.75% for human clusters.
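DBSCAN itself is compact enough to sketch in full. A minimal 2-D implementation follows; the eps, min_pts, and sample points are illustrative, whereas the paper works on radar point clouds:

```python
def dbscan(points, eps, min_pts):
    """Minimal 2-D DBSCAN: points within eps of each other are
    neighbours; a point with at least min_pts neighbours (itself
    included) is a core point. Clusters grow outward from core points;
    label -1 marks noise."""
    def neighbours(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                 # provisionally noise
            continue
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # border point reached from a core
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbours(j)
            if len(j_nbrs) >= min_pts:     # j is itself a core point: expand
                queue.extend(j_nbrs)
        cluster += 1
    return labels

pts = [(0, 0), (0.3, 0), (0, 0.4), (5, 5), (5.2, 5.1), (5, 4.8), (9, 0)]
labels = dbscan(pts, eps=0.8, min_pts=3)
```

On this toy input the algorithm finds two dense clusters and marks the isolated point (9, 0) as noise.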

Predicting Oxynitrification layer using AI-based Varying Coefficient Regression model (AI 기반의 Varying Coefficient Regression 모델을 이용한 산질화층 예측)

  • Hye Jung Park;Joo Yong Shim;Kyong Jun An;Chang Ha Hwang;Je Hyun Han
    • Journal of the Korean Society for Heat Treatment / v.36 no.6 / pp.374-381 / 2023
  • This study develops and evaluates a deep learning model for predicting oxide and nitride layers based on plasma process data. We introduce a novel deep learning-based Varying Coefficient Regressor (VCR) by adapting the VCR, which previously relied on a fixed, predefined function. This model is employed to forecast the oxide and nitride layers within the plasma process. In comparative experiments, the proposed VCR-based model exhibits performance superior to Long Short-Term Memory, Random Forest, and other methods, showcasing its strength in predicting time-series data. This study indicates the potential for advancing prediction models through deep learning in the domain of plasma processing and highlights its application prospects in industrial settings.
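A varying coefficient regression fits coefficients that change smoothly with a modifying variable rather than staying constant. A minimal kernel-weighted least-squares sketch of the idea (the deep learning VCR in the paper replaces this with a learned coefficient function; all data below are synthetic):

```python
import math

def vcr_predict(x, z, y, x0, z0, bandwidth=0.3):
    """Varying coefficient regression sketch: fit y ~ b0(z) + b1(z) * x
    at query point (x0, z0) by least squares with Gaussian kernel weights
    in z, so the fitted coefficients vary smoothly with z."""
    w = [math.exp(-((zi - z0) / bandwidth) ** 2) for zi in z]
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = sw * swxx - swx ** 2             # weighted normal equations
    b0 = (swxx * swy - swx * swxy) / det
    b1 = (sw * swxy - swx * swy) / det
    return b0 + b1 * x0

# toy data generated from y = z * x, so the true slope at z is z itself
x = [0, 1, 2, 0, 1, 2, 0, 1, 2]
z = [0, 0, 0, 1, 1, 1, 2, 2, 2]
y = [0, 0, 0, 0, 1, 2, 0, 2, 4]
pred = vcr_predict(x, z, y, x0=1.0, z0=2.0)
```

Because the kernel concentrates weight on samples near the query z, the recovered slope tracks the locally varying coefficient.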

Learning-based Inertial-wheel Odometry for a Mobile Robot (모바일 로봇을 위한 학습 기반 관성-바퀴 오도메트리)

  • Myeongsoo Kim;Keunwoo Jang;Jaeheung Park
    • The Journal of Korea Robotics Society / v.18 no.4 / pp.427-435 / 2023
  • This paper proposes a method of estimating the pose of a mobile robot using a learning model. When estimating the pose of a mobile robot, wheel encoder and inertial measurement unit (IMU) data are generally utilized. However, depending on the condition of the ground surface, slip occurs due to the interaction between the wheels and the floor, in which case it is hard to predict the pose accurately using only the encoder and IMU. Thus, to reduce pose error even in such conditions, this paper introduces a pose estimation method based on a learning model that uses wheel encoder and IMU data. A long short-term memory (LSTM) network is adopted as the learning model. The inputs to the LSTM are velocity and acceleration data from the wheel encoder and IMU; the outputs are corrected linear and angular velocities. The estimated pose is calculated by numerically integrating the output velocities. The dataset used as ground truth for the learning model was collected under various ground conditions. Experimental results demonstrate that the proposed learning model achieves higher pose estimation accuracy than an extended Kalman filter (EKF) and other learning models using the same data under various ground conditions.
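The final step described above, numerically integrating the corrected linear and angular velocities into a pose, is plain dead reckoning. A minimal Euler-integration sketch for a planar robot (the velocity sequence is illustrative, not the paper's data):

```python
import math

def integrate_pose(velocities, dt, pose=(0.0, 0.0, 0.0)):
    """Dead reckoning for a planar mobile robot: integrate (corrected)
    linear velocity v and angular velocity w into a pose (x, y, heading)
    with simple Euler steps."""
    x, y, th = pose
    for v, w in velocities:
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
    return x, y, th

# drive straight for 1 s at 1 m/s, then turn in place through 90 degrees
cmds = [(1.0, 0.0)] * 10 + [(0.0, math.pi / 2)] * 10
x, y, th = integrate_pose(cmds, dt=0.1)
# ends near (1, 0) facing the +y direction
```

Because any bias in the velocities accumulates through this integration, correcting them with the learned model directly reduces pose drift.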

A Network Intrusion Security Detection Method Using BiLSTM-CNN in Big Data Environment

  • Hong Wang
    • Journal of Information Processing Systems / v.19 no.5 / pp.688-701 / 2023
  • Conventional methods of network intrusion detection systems (NIDS) cannot effectively measure trends in intrusion-detection targets, which leads to low detection accuracy. In this study, a NIDS method based on a deep neural network in a big-data environment is proposed. First, the overall framework of the NIDS model is constructed in two stages, with feature reduction and anomaly-probability output at their core. Subsequently, a convolutional neural network is built that encompasses a down-sampling layer and a feature extractor consisting of a convolution layer, and the correlation among inputs is captured by introducing a bidirectional long short-term memory (BiLSTM) network. Finally, after the convolution layer, a pooling layer is added to sample the required features according to different sampling rules, which promotes the overall performance of the NIDS model. The proposed NIDS method is compared with three other methods on two databases through simulation experiments. The results demonstrate that the proposed model is superior to the other three NIDS methods on both databases in terms of precision, accuracy, F1-score, and recall, which are 91.64%, 93.35%, 92.25%, and 91.87%, respectively. The proposed algorithm is significant for improving the accuracy of NIDS.
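The pooling layer mentioned above down-samples feature maps by keeping the strongest activation in each window. A minimal 1-D max-pooling sketch (window size, stride, and sample activations are illustrative):

```python
def max_pool_1d(seq, size=2, stride=2):
    """1-D max pooling: slide a window of `size` across the feature
    sequence with step `stride`, keeping the maximum in each window,
    which down-samples while retaining the strongest activations."""
    return [max(seq[i : i + size])
            for i in range(0, len(seq) - size + 1, stride)]

features = [0.1, 0.7, 0.3, 0.2, 0.9, 0.4]
pooled = max_pool_1d(features)
# → [0.7, 0.3, 0.9]
```

Different "sampling rules" in the abstract correspond to varying the window size, the stride, or the aggregation (max versus average).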