• Title/Summary/Keyword: LSTM (Long Short-Term Memory)

Development of Data Visualized Web System for Virtual Power Forecasting based on Open Sources based Location Services using Deep Learning (오픈소스 기반 지도 서비스를 이용한 딥러닝 실시간 가상 전력수요 예측 가시화 웹 시스템)

  • Lee, JeongHwi; Kim, Dong Keun
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.8 / pp.1005-1012 / 2021
  • Recently, the use of map-based location information services on the web has been expanding, and there is a need for a monitoring system that can check power demand in real time as a means of saving energy. In this study, we developed a web system that uses open-source map services and deep learning to analyze and predict the characteristics of power demand data in real time. In particular, the proposed system uses an LSTM (Long Short-Term Memory) deep learning model to enable regional power demand forecasting and analysis, and visualizes the analyzed information. In the future, the proposed system can be used not only to identify and analyze the regional supply, demand, and forecast status of energy, but can also be applied to other industrial energy domains.
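
As a rough illustration of the kind of model this abstract describes, the sketch below builds a minimal univariate LSTM demand forecaster in Keras; the window length, layer sizes, and synthetic data are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of an LSTM power-demand forecaster, assuming hourly demand
# windows of 24 steps and a single next-step output; the paper's actual
# architecture and preprocessing are not specified in the abstract.
import numpy as np
import tensorflow as tf

WINDOW = 24  # hypothetical look-back length (hours)

def make_windows(series, window=WINDOW):
    """Slice a 1-D demand series into (samples, window, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Usage with synthetic data standing in for regional demand measurements.
demand = np.sin(np.linspace(0, 50, 2000)).astype("float32")
X, y = make_windows(demand)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0).shape)  # (1, 1) -> next-hour demand estimate
```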

Personal Driving Style based ADAS Customization using Machine Learning for Public Driving Safety

  • Giyoung Hwang; Dongjun Jung; Yunyeong Goh; Jong-Moon Chung
    • Journal of Internet Computing and Services / v.24 no.1 / pp.39-47 / 2023
  • The development of autonomous driving and Advanced Driver Assistance System (ADAS) technology has grown rapidly in recent years. As most traffic accidents occur due to human error, self-driving vehicles can drastically reduce the number of accidents and crashes that occur on the roads today. Obviously, technical advancements in autonomous driving can lead to improved public driving safety. However, due to the current limitations in technology and the lack of public trust in self-driving cars (and drones), the actual use of Autonomous Vehicles (AVs) is still significantly low. According to prior studies, people's acceptance of an AV is mainly determined by trust, and people feel much more comfortable with a personalized ADAS designed around the way they drive. Based on such needs, a new attempt at a customized ADAS that considers each driver's driving style is proposed in this paper. Each driver's behavior is divided into two categories: assertive and defensive. A novel customized ADAS algorithm with high classification accuracy is designed, which classifies each driver based on their driving style. Each driver's driving data is collected and simulated using CARLA, an open-source autonomous driving simulator. In addition, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) machine learning algorithms are used to optimize the ADAS parameters. The proposed scheme achieves high classification accuracy on time-series driving data. Furthermore, among the large amount of CARLA-based feature data extracted from the drivers, distinguishable driving features are selected using Support Vector Machine (SVM) technology by comparing how strongly each feature influences the classification of the two categories. By extracting distinguishable features and eliminating outliers with the SVM, the classification accuracy is significantly improved. Based on this classification, the ADAS sensors can be made more sensitive for assertive drivers, enabling more advanced driving safety support. The proposed technology is especially important because the current state of the art in autonomous driving is at level 3 (based on the SAE International driving automation standards), which requires advanced functions that can assist drivers using ADAS technology.
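
A hedged sketch of the general approach described above: a linear SVM ranks features by how strongly they influence the assertive/defensive classification, and a GRU then classifies the selected time-series features. The feature count, window length, labels, and the coefficient-based ranking are assumptions for illustration, not the paper's CARLA dataset or exact selection procedure.

```python
# Illustrative sketch: SVM-based feature ranking followed by a GRU classifier
# for assertive (1) vs. defensive (0) driving windows. All data is synthetic.
import numpy as np
import tensorflow as tf
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50, 6)).astype("float32")  # 200 windows, 50 steps, 6 CARLA-like features
y = rng.integers(0, 2, size=200)                      # synthetic driving-style labels

# Rank features with a linear SVM on per-window means; keep the most influential ones.
svm = LinearSVC(dual=False).fit(X.mean(axis=1), y)
keep = np.argsort(-np.abs(svm.coef_[0]))[:4]          # top-4 features by |coefficient|
X_sel = X[:, :, keep]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(50, len(keep))),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_sel, y, epochs=2, batch_size=16, verbose=0)
```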

Comparative study of meteorological data for river level prediction model (하천 수위 예측 모델을 위한 기상 데이터 비교 연구)

  • Cho, Minwoo; Yoon, Jinwook; Kim, Changsu; Jung, Heokyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.491-493 / 2022
  • Flood damage due to torrential rains and typhoons is occurring in many parts of the world. In this paper, we propose a river level prediction model that uses water level, precipitation, and humidity data, which are key parameters for flood prediction, as input data. Based on the LSTM and GRU models, whose time-series prediction performance has already been proven in many research fields, different input datasets were constructed using the ASOS (Automated Synoptic Observing System) and AWS (Automatic Weather System) data provided by the Korea Meteorological Administration, and performance comparison experiments were conducted. The best results were obtained when using the ASOS data. This paper thus compares performance according to the input data, and the results can serve as an initial study for developing a system that supports advance evacuation decisions in connection with a flood risk determination model.

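The input-dataset comparison described above could look roughly like the following sketch, which trains the same LSTM/GRU architecture on two candidate input sets and compares test RMSE; the feature layout, window size, and data are placeholders, not the ASOS/AWS datasets used in the paper.

```python
# Hedged sketch: compare LSTM and GRU river-level predictors trained on two
# different input datasets (ASOS-style vs. AWS-style). All data is synthetic.
import numpy as np
import tensorflow as tf

def build_model(cell, n_features, window):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, n_features)),
        cell(32),
        tf.keras.layers.Dense(1),  # next-step river level
    ])

def evaluate(cell, X_train, y_train, X_test, y_test):
    model = build_model(cell, X_train.shape[-1], X_train.shape[1])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X_train, y_train, epochs=2, batch_size=32, verbose=0)
    pred = model.predict(X_test, verbose=0).ravel()
    return float(np.sqrt(np.mean((pred - y_test) ** 2)))  # RMSE

# Synthetic stand-ins for windows built from (water level, precipitation, humidity).
rng = np.random.default_rng(1)
X_asos, X_aws = rng.normal(size=(300, 48, 3)), rng.normal(size=(300, 48, 3))
y = rng.normal(size=300)

for name, X in [("ASOS", X_asos), ("AWS", X_aws)]:
    for cell in (tf.keras.layers.LSTM, tf.keras.layers.GRU):
        print(name, cell.__name__, evaluate(cell, X[:240], y[:240], X[240:], y[240:]))
```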

Estimation of reaction forces at the seabed anchor of the submerged floating tunnel using structural pattern recognition

  • Seongi Min; Kiwon Jeong; Yunwoo Lee; Donghwi Jung; Seungjun Kim
    • Computers and Concrete / v.31 no.5 / pp.405-417 / 2023
  • The submerged floating tunnel (SFT) is tethered by mooring lines anchored to the seabed; therefore, the structural integrity of the anchors should be carefully managed. Despite their importance, the reaction forces cannot simply be measured by attaching sensors or load cells because of the structural and environmental characteristics of the submerged structure. Therefore, we propose an effective method for estimating the reaction forces at the seabed anchors of a submerged floating tunnel using a structural pattern model. First, a structural pattern model is established that captures the correlation between tunnel motion and anchor reactions via a deep learning algorithm. Once the pattern model is established, it is used directly to estimate the reaction forces from tunnel motion data, which can be measured inside the tunnel. Because the sequential characteristics of the responses in the time domain must be considered, the long short-term memory (LSTM) algorithm is mainly used to recognize structural behavioral patterns. Using hydrodynamics-based simulations, big data on the structural behavior of the SFT under various waves were generated, and the prepared datasets were used to validate the proposed method. The simulation-based validation results clearly show that the proposed method can precisely estimate time-series reactions using only acceleration data. In addition to real-time structural health monitoring, the proposed method can be useful for forensics when an unexpected accident or failure is related to the seabed anchors of the SFT.
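
As a rough sketch of the pattern-recognition idea, the code below maps a measured tunnel-motion sequence to an anchor reaction-force sequence with an LSTM that returns the full time history; the channel counts, layer sizes, and data are assumptions for illustration, not the paper's hydrodynamic simulation setup.

```python
# Illustrative sketch: sequence-to-sequence LSTM mapping tunnel motion
# (e.g. accelerations) to seabed-anchor reaction time histories.
import numpy as np
import tensorflow as tf

N_MOTION, N_ANCHOR, STEPS = 3, 4, 200  # hypothetical: 3 motion channels, 4 anchor reactions

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(STEPS, N_MOTION)),
    tf.keras.layers.LSTM(64, return_sequences=True),  # keep the whole time history
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(N_ANCHOR)),
])
model.compile(optimizer="adam", loss="mse")

# Synthetic data standing in for hydrodynamics-based simulation results.
rng = np.random.default_rng(2)
motion = rng.normal(size=(64, STEPS, N_MOTION)).astype("float32")
reactions = rng.normal(size=(64, STEPS, N_ANCHOR)).astype("float32")
model.fit(motion, reactions, epochs=2, batch_size=8, verbose=0)
print(model.predict(motion[:1], verbose=0).shape)  # (1, 200, 4): time-series reactions
```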

Force-deformation relationship prediction of bridge piers through stacked LSTM network using fast and slow cyclic tests

  • Omid Yazdanpanah; Minwoo Chang; Minseok Park; Yunbyeong Chae
    • Structural Engineering and Mechanics / v.85 no.4 / pp.469-484 / 2023
  • A deep recursive bidirectional Cuda Deep Neural Network Long Short-Term Memory (Bi-CuDNNLSTM) layer is employed in this paper to predict the entire force time histories, and the corresponding hysteresis and backbone curves, of reinforced concrete (RC) bridge piers using experimental fast and slow cyclic tests. The proposed stacked Bi-CuDNNLSTM layers take multiple uncertain input variables, including horizontal actuator displacements, vertical actuator axial loads, the effective height of the bridge pier, the moment of inertia, and mass. The functional application programming interface in the Keras Python library is used to develop a deep learning model that considers all of the above input attributes. To obtain a robust and reliable prediction, the dataset for both the fast and slow cyclic tests is split into three mutually exclusive subsets for training, validation, and testing (unseen data). The whole dataset includes 17 RC bridge piers tested experimentally (ten fast and seven slow cyclic tests). The results show that the mean absolute error, used as the loss function, decreases monotonically toward zero for both the training and validation datasets after 5000 epochs, and a high level of correlation (more than 90%) is observed between the predicted and experimentally measured force time histories for all datasets. The maximum mean normalized error, obtained through Box-Whisker plots and a Gaussian distribution of the normalized error, is about 10% and 3% for the unseen fast and slow cyclic test data, respectively. In summary, the stacked Bi-CuDNNLSTM layers implemented in this study can reduce the time and cost of conducting new fast and slow cyclic tests in the future and provide fast and accurate insight into the hysteretic behavior of bridge piers.
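
A minimal sketch of a stacked bidirectional LSTM built with the Keras functional API, combining a time-series input (actuator displacements and axial loads) with scalar pier properties, is shown below. Layer sizes and shapes are assumptions, not the paper's architecture; note that in TensorFlow 2.x the cuDNN kernel is selected automatically when an LSTM layer meets its requirements, replacing the older standalone CuDNNLSTM layer.

```python
# Hedged sketch: functional-API model with a sequence branch (displacements, axial
# loads) and a scalar branch (height, moment of inertia, mass) predicting a force
# time history with a mean-absolute-error loss, as in the abstract.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

STEPS = 500  # hypothetical number of time steps per test

seq_in = layers.Input(shape=(STEPS, 2), name="displacement_axial_load")
scalar_in = layers.Input(shape=(3,), name="height_inertia_mass")

x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(seq_in)
x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)  # stacked Bi-LSTM
s = layers.RepeatVector(STEPS)(layers.Dense(16, activation="relu")(scalar_in))
x = layers.Concatenate()([x, s])
force = layers.TimeDistributed(layers.Dense(1), name="force_history")(x)

model = tf.keras.Model([seq_in, scalar_in], force)
model.compile(optimizer="adam", loss="mae")

# Synthetic stand-in data for a quick shape check.
rng = np.random.default_rng(3)
model.fit([rng.normal(size=(8, STEPS, 2)).astype("float32"),
           rng.normal(size=(8, 3)).astype("float32")],
          rng.normal(size=(8, STEPS, 1)).astype("float32"),
          epochs=1, batch_size=4, verbose=0)
```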

A Study on the Health Index Based on Degradation Patterns in Time Series Data Using ProphetNet Model (ProphetNet 모델을 활용한 시계열 데이터의 열화 패턴 기반 Health Index 연구)

  • Sun-Ju Won; Yong Soo Kim
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.3 / pp.123-138 / 2023
  • The Fourth Industrial Revolution and sensor technology have led to increased utilization of sensor data. In our modern society, data complexity is rising, and the extraction of valuable information has become crucial with the rapid changes in information technology (IT). Recurrent neural networks (RNN) and long short-term memory (LSTM) models have shown remarkable performance in natural language processing (NLP) and time series prediction. Consequently, there is a strong expectation that models excelling in NLP will also excel in time series prediction. However, current research on Transformer models for time series prediction remains limited. Traditional RNN and LSTM models have demonstrated superior performance compared to Transformers in big data analysis. Nevertheless, with continuous advancements in Transformer models, such as GPT-2 (Generative Pre-trained Transformer 2) and ProphetNet, they have gained attention in the field of time series prediction. This study aims to evaluate the classification performance and interval prediction of remaining useful life (RUL) using an advanced Transformer model. The performance of each model will be utilized to establish a health index (HI) for cutting blades, enabling real-time monitoring of machine health. The results are expected to provide valuable insights for machine monitoring, evaluation, and management, confirming the effectiveness of advanced Transformer models in time series analysis when applied in industrial settings.

Proposal of a Step-by-Step Optimized Campus Power Forecast Model using CNN-LSTM Deep Learning (CNN-LSTM 딥러닝 기반 캠퍼스 전력 예측 모델 최적화 단계 제시)

  • Kim, Yein; Lee, Seeun; Kwon, Youngsung
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.10 / pp.8-15 / 2020
  • Deep-learning-based forecasting does not produce consistent results across datasets with different characteristics, even when the same forecasting model and parameters are used. For example, a forecasting model X optimized on dataset A would not produce an optimized result on another dataset B. The forecasting model therefore needs to be optimized for the characteristics of the dataset to increase its accuracy. This paper proposes novel optimization steps covering outlier removal, dataset classification, and CNN-LSTM-based hyperparameter tuning to forecast the daily power usage of a university campus at hourly intervals. The proposed model achieves high forecasting accuracy, with a MAPE of about 2%, using a single power input variable. The proposed model can be used in an energy management system (EMS) to suggest improved strategies to users and consequently improve power efficiency.
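
A minimal sketch of a CNN-LSTM forecaster of the kind described above: a 1-D convolution extracts local patterns from an hourly power window before an LSTM models the sequence. The window length, filter counts, and synthetic data are assumptions, and the paper's outlier removal and tuning steps are not reproduced.

```python
# Hedged sketch: CNN-LSTM model that reads a week of hourly power usage and
# predicts the next day's 24 hourly values.
import numpy as np
import tensorflow as tf

WINDOW = 168  # hypothetical look-back: one week of hourly readings

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(24),  # next day's 24 hourly values
])
model.compile(optimizer="adam", loss="mae")

# Shape check with synthetic hourly usage standing in for campus data.
rng = np.random.default_rng(4)
X = rng.normal(size=(32, WINDOW, 1)).astype("float32")
y = rng.normal(size=(32, 24)).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
```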

A Comparative Study of Machine Learning Algorithms Based on Tensorflow for Data Prediction (데이터 예측을 위한 텐서플로우 기반 기계학습 알고리즘 비교 연구)

  • Abbas, Qalab E.; Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems / v.10 no.3 / pp.71-80 / 2021
  • The selection of an appropriate neural network algorithm is an important step for accurate data prediction in machine learning. Many algorithms based on basic artificial neural networks have been devised to efficiently predict future data, including deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) neural networks. Developers face difficulties when choosing among these networks because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated the performance of each algorithm by comparing their errors and processing times. Each neural network model was trained using a tax dataset, and the trained models were used for data prediction to compare accuracies among the various algorithms. Furthermore, the effects of activation functions and various optimizers on model performance were analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction error, with an average RMSE of 0.12 and average R2 scores of 0.78 and 0.75, respectively, while the basic DNN model achieves the shortest processing time but the highest average RMSE of 0.163. Furthermore, the Adam optimizer yields the best performance (with DNN, GRU, and LSTM) in terms of error and the worst performance in terms of processing time. The findings of this study are thus expected to be useful for scientists and developers.
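
The kind of comparison described above could be sketched as follows: the same windowed series is fed to DNN, RNN, LSTM, and GRU models, and RMSE, R2, and training time are reported for each. The data here is synthetic; the paper's tax dataset, exact architectures, and optimizer sweep are not reproduced.

```python
# Hedged sketch: compare DNN, RNN, LSTM, and GRU regressors on the same windowed
# series and report RMSE, R2, and wall-clock training time for each.
import time
import numpy as np
import tensorflow as tf
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 12, 1)).astype("float32")   # 12-step windows
y = X.mean(axis=(1, 2)) + 0.1 * rng.normal(size=500).astype("float32")

def recurrent(layer):
    return tf.keras.Sequential([tf.keras.layers.Input(shape=(12, 1)),
                                layer(32), tf.keras.layers.Dense(1)])

models = {
    "DNN": tf.keras.Sequential([tf.keras.layers.Input(shape=(12, 1)),
                                tf.keras.layers.Flatten(),
                                tf.keras.layers.Dense(32, activation="relu"),
                                tf.keras.layers.Dense(1)]),
    "RNN": recurrent(tf.keras.layers.SimpleRNN),
    "LSTM": recurrent(tf.keras.layers.LSTM),
    "GRU": recurrent(tf.keras.layers.GRU),
}

for name, model in models.items():
    model.compile(optimizer="adam", loss="mse")
    start = time.time()
    model.fit(X[:400], y[:400], epochs=5, batch_size=32, verbose=0)
    pred = model.predict(X[400:], verbose=0).ravel()
    print(name,
          f"RMSE={np.sqrt(mean_squared_error(y[400:], pred)):.3f}",
          f"R2={r2_score(y[400:], pred):.3f}",
          f"time={time.time() - start:.1f}s")
```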

Predicting Performance of Heavy Industry Firms in Korea with U.S. Trade Policy Data (미국 무역정책 변화가 국내 중공업 기업의 경영성과에 미치는 영향)

  • Park, Jinsoo; Kim, Kyoungho; Kim, Buomsoo; Suh, Jihae
    • The Journal of Society for e-Business Studies / v.22 no.4 / pp.71-101 / 2017
  • Since late 2016, protectionism has been a major trend in world trade, with Great Britain exiting the European Union and the United States electing Donald Trump as the 45th president. Consequently, there has been a huge public outcry regarding the negative prospects of heavy industry firms in Korea, which are highly dependent upon international trade with Western countries including the United States. In light of such trends and concerns, we have tried to predict the business performance of heavy industry firms in Korea with data on the trade policy of the United States. The United States International Trade Commission (USITC) levies countervailing duties and anti-dumping duties on firms that violate its fair-trade regulations. In this study, we performed data analysis with past records of countervailing and anti-dumping duties. From the clustering analysis, it could be concluded that the trade policy trends of the United States significantly affect the business performance of heavy industry firms in Korea. Furthermore, we attempted to quantify such effects by employing long short-term memory (LSTM), a popular neural network model that is well suited to sequential data. Our major contribution is that we have empirically validated the intuitive argument and also predicted the future trend with rigorous data mining techniques. With some improvements, our results are expected to be highly relevant to designing regulations regarding heavy industry in Korea.

Sleep Deprivation Attack Detection Based on Clustering in Wireless Sensor Network (무선 센서 네트워크에서 클러스터링 기반 Sleep Deprivation Attack 탐지 모델)

  • Kim, Suk-young; Moon, Jong-sub
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.1 / pp.83-97 / 2021
  • Wireless sensors that make up a wireless sensor network generally have extremely limited power and resources, so each sensor enters a sleep state at certain intervals to conserve power. The sleep deprivation attack is a deadly attack that drains power by preventing wireless sensors from entering the sleep state, yet there is no clear countermeasure. In this paper, a sleep deprivation attack detection model based on a clustering-based binary search tree structure is therefore proposed. The proposed model uses characteristics that distinguish attack sensor nodes from normal sensor nodes, identified using machine learning; the candidate features were evaluated with Long Short-Term Memory, Decision Tree, Support Vector Machine, and K-Nearest Neighbor models. The selected features are used in the proposed algorithm to calculate values for attack detection, and the threshold for judging those values is derived by applying an SVM. In experiments, the proposed detection model showed a detection rate of 94% when 35% of the total sensor nodes were attack sensor nodes, with an improvement of up to 26% in power retention.
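
As a hedged illustration of the SVM-threshold step described above, the sketch below trains an SVM on per-node features that separate attack nodes from normal nodes and uses its decision boundary as the detection threshold; the feature names and data are synthetic assumptions, not the paper's dataset or full clustering-based model.

```python
# Illustrative sketch: learn a detection threshold for sleep-deprivation-attack
# nodes by fitting an SVM and flagging nodes on the attack side of its boundary.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Hypothetical per-node features: (mean awake-time fraction, message rate).
normal = rng.normal(loc=[0.3, 1.0], scale=0.1, size=(80, 2))
attack = rng.normal(loc=[0.8, 3.0], scale=0.1, size=(20, 2))
X = np.vstack([normal, attack])
y = np.array([0] * 80 + [1] * 20)  # 1 = sleep deprivation attack node

clf = SVC(kernel="linear").fit(X, y)

# Nodes whose decision value exceeds 0 fall on the attack side of the learned threshold.
scores = clf.decision_function(X)
flagged = np.where(scores > 0)[0]
print(f"flagged {len(flagged)} of {len(X)} nodes as attack candidates")
```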