• Title/Summary/Keyword: LSTM 신경망 (LSTM neural network)


A Study on LSTM Learning for Detecting Anomalous Trajectories of Protected Individuals by using GPS (신변보호자 경로이탈 감지를 위한 GPS 기반 LSTM 학습 연구)

  • Jihyoung Kim;Jaehyun Yoo
    • Annual Conference of KIPS / 2024.05a / pp.633-634 / 2024
  • This study analyzes how the range of anonymous pedestrians' GPS routes and the size of the training dataset affect what an LSTM model can accommodate. We review GPS routes as time-series data, the LSTM recurrent neural network, and its input structure, and design two experiments to assess how the LSTM accommodates its training dataset. The experiments compare a model trained on long-distance routes with one that is not, and compare the predictions of models trained on datasets of different sizes. Based on the two experiments, we present a comparative analysis of the GPS route range and the number of distinct routes that can be learned.
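
As a rough illustration of the input structure this abstract refers to, the sketch below windows a single GPS track into fixed-length sequences and trains a small Keras LSTM to predict the next position. The window length, layer sizes, and the randomly generated track are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: fixed-length windows of GPS fixes -> next-position prediction.
import numpy as np
import tensorflow as tf

WINDOW = 20          # assumed number of past GPS fixes per training sample
FEATURES = 2         # latitude, longitude

def make_windows(track, window=WINDOW):
    """Slice one pedestrian track of shape (N, 2) into (samples, window, 2)
    inputs and the corresponding next-point targets of shape (samples, 2)."""
    xs, ys = [], []
    for i in range(len(track) - window):
        xs.append(track[i:i + window])
        ys.append(track[i + window])
    return np.array(xs), np.array(ys)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(FEATURES),   # predicted next (lat, lon)
])
model.compile(optimizer="adam", loss="mse")

# Dummy random-walk track standing in for an anonymised pedestrian GPS path.
track = np.cumsum(np.random.randn(500, FEATURES) * 1e-4, axis=0)
X, y = make_windows(track)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# A large prediction error on new fixes could then be flagged as a possible
# deviation from the learned route.
```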

Prediction of the DO concentration using the RNN-LSTM algorithm in Oncheoncheon basin, Busan, Republic of Korea (부산광역시 온천천 유역의 RNN-LSTM 알고리즘을 이용한 DO농도 예측)

  • Lim, Heesung;An, Hyunuk
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.86-86 / 2021
  • Oncheoncheon is an urban stream flowing through the Geumjeong-gu, Dongnae-gu, and Yeonje-gu districts of Busan, used by residents for walking and bicycle paths. However, as the Dongnae valley plain along both banks became urbanized, unauthorized water supplies were drawn around Geumjeongsan, the stream's source, and the inflow of garbage and sewage gradually turned the entire stream into little more than a sewer. In response, Busan Metropolitan City implemented an Oncheoncheon improvement plan, restoring the stream and installing an automatic monitoring network that collects data such as DO (dissolved oxygen), turbidity, and TDS concentration. However, the data accumulating from this monitoring network has rarely been used to predict DO concentration. DO is a water quality indicator used to judge the degree of stream pollution and has historically been a major subject of river research. In this study, we attempted to predict DO using the RNN-LSTM algorithm, based on hourly as well as daily data. RNN-LSTM is a recurrent neural network, an advanced form of artificial neural network that excels at learning time series. Prior to the study, we performed a full inspection of the data obtained from the Busan Health and Environment Information Disclosure System for January 1, 2014 to December 31, 2018, identified missing and anomalous values caused by calibration, maintenance, non-use, and equipment power cuts, and filled them by linear interpolation. Using TensorFlow, the open-source deep learning library developed by Google, we predicted the DO concentration at the Bugokgyo monitoring station in Bugok-dong, Geumjeong-gu, Busan at hourly or daily intervals. Daily prediction was trained on 2014-2018 meteorological data (temperature, relative humidity, wind speed, precipitation) and DO concentration data, while hourly prediction used the data from March to December 2015, the period with the longest continuous record. The coefficient of determination (R square) was used for statistical validation of the model.

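The sketch below illustrates, under assumed column names and sizes, the kind of pipeline this abstract outlines: fill sensor gaps by linear interpolation, train a TensorFlow LSTM on meteorological inputs to predict DO, and score the result with the coefficient of determination. It is not the study's actual code or data.

```python
# Minimal sketch: gap filling by linear interpolation + LSTM regression of DO.
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.metrics import r2_score

# Hypothetical daily table; the real columns come from the Busan monitoring system.
n = 1500
df = pd.DataFrame({
    "temp": np.random.randn(n), "humidity": np.random.randn(n),
    "wind": np.random.randn(n), "rain": np.abs(np.random.randn(n)),
    "do": np.random.randn(n),
})
df.iloc[100:110, df.columns.get_loc("do")] = np.nan   # simulated sensor outage
df = df.interpolate(method="linear")                   # linear gap filling

WINDOW = 14
data = df.values.astype("float32")
X = np.stack([data[i:i + WINDOW, :4] for i in range(len(data) - WINDOW)])
y = data[WINDOW:, 4]

split = int(0.8 * len(X))
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 4)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=10, verbose=0)
print("R^2:", r2_score(y[split:], model.predict(X[split:], verbose=0).ravel()))
```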

An LSTM Neural Network Model for Forecasting Daily Peak Electric Load of EV Charging Stations (EV 충전소의 일별 최대전력부하 예측을 위한 LSTM 신경망 모델)

  • Lee, Haesung;Lee, Byungsung;Ahn, Hyun
    • Journal of Internet Computing and Services / v.21 no.5 / pp.119-127 / 2020
  • As the electric vehicle (EV) market in South Korea grows, charging facilities must be expanded to meet rapidly increasing EV charging demand. Comprehensive facility planning requires forecasting future electricity demand and systematically analyzing its impact on the load capacity of facilities. In this paper, we design and develop a Long Short-Term Memory (LSTM) neural network model that predicts the daily peak electric load at each charging station using EV charging data from KEPCO. First, we obtain refined data through preprocessing and outlier removal. Next, the model is trained by extracting daily features per charging station and constructing a training set. Finally, the model is verified through performance analysis on a test set for each charging station type, and its limitations are discussed.
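
A minimal sketch of the forecasting setup described above: daily per-station features (dummy values here, with a simple percentile clip standing in for the paper's outlier removal) are windowed and fed to an LSTM that predicts the next day's peak load. The feature choices and sizes are illustrative assumptions.

```python
# Minimal sketch: windows of daily station features -> next-day peak load.
import numpy as np
import tensorflow as tf

DAYS_IN, N_FEAT = 28, 3    # assumed lookback and per-day features (peak, mean, sessions)

def clip_outliers(x, lo=1.0, hi=99.0):
    """Simple percentile clipping as a stand-in for the paper's outlier removal."""
    lo_v, hi_v = np.percentile(x, [lo, hi], axis=0)
    return np.clip(x, lo_v, hi_v)

daily = clip_outliers(np.abs(np.random.randn(400, N_FEAT)))   # dummy daily features
X = np.stack([daily[i:i + DAYS_IN] for i in range(len(daily) - DAYS_IN)])
y = daily[DAYS_IN:, 0]                                        # next-day peak load

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(DAYS_IN, N_FEAT)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
```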

Applying a Novel Neuroscience Mining (NSM) Method to fNIRS Dataset for Predicting the Business Problem Solving Creativity: Emphasis on Combining CNN, BiLSTM, and Attention Network

  • Kim, Kyu Sung;Kim, Min Gyeong;Lee, Kun Chang
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.1-7 / 2022
  • With the development of artificial intelligence, efforts to combine neuroscience mining with AI have increased. Neuroscience mining (NSM) extends this idea by combining computational neuroscience and business analytics. Using an fNIRS (functional near-infrared spectroscopy) experiment dataset, we investigated the potential of NSM for predicting business problem-solving creativity (BPSC). Although BPSC is regarded as an essential business differentiator and a cognitive resource that is difficult to imitate, measuring it is challenging, and within NSM appropriate methods for assessing and predicting BPSC are still in their infancy. We therefore propose a novel NSM method that systematically combines a CNN, BiLSTM, and attention network to substantially improve BPSC prediction performance. We evaluated the validity of the proposed method on a dataset of over 150 thousand fNIRS-measured data points. Empirical evidence demonstrates that the proposed NSM method shows the most robust performance compared to the benchmark methods.
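
The following is a rough architectural sketch, with assumed input dimensions and an assumed binary target, of the kind of CNN + BiLSTM + attention stack the abstract combines for fNIRS time series; it is not the authors' exact network.

```python
# Architectural sketch: Conv1D features -> BiLSTM context -> self-attention -> classifier.
import tensorflow as tf

TIMESTEPS, CHANNELS = 256, 48      # assumed fNIRS window length and channel count

inputs = tf.keras.Input(shape=(TIMESTEPS, CHANNELS))
x = tf.keras.layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling1D(2)(x)                        # local CNN features
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True))(x)       # temporal context
x = tf.keras.layers.Attention()([x, x])                        # self-attention over time
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)    # high vs. low BPSC (assumed binary)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```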

A Study on Estimating Geomagnetic Azimuth using LSTM (LSTM을 이용한 지자기 방위각 추정 기술 연구)

  • Oh, Jongtaek;Kim, Sunghoon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.6 / pp.137-141 / 2022
  • Estimating azimuth from geomagnetic measurements is a long-established method. However, the estimated azimuth often contains errors caused by disturbances of the earth's magnetic field from metal structures indoors and outdoors. Although many studies have attempted to correct this, there is a limit to how far the error can be reduced. In this paper, we propose a method of estimating the azimuth by applying measured geomagnetic sensor data to a neural network with an LSTM structure. Data preprocessing is very important for training a neural network; here, data are collected using the acceleration, gyro, and geomagnetic sensors built into a smartphone, and the geomagnetic sensor data are uniformly resampled using an extended Kalman filter (EKF). As a result, an average azimuth estimation error of 0.9 degrees was obtained using four hidden layers.
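
A minimal sketch of the regression setup the abstract suggests: windows of smartphone sensor data fed to a four-layer stacked LSTM that outputs the azimuth. The nine-channel input layout and the (sin, cos) output encoding are my assumptions, not details taken from the paper.

```python
# Minimal sketch: stacked LSTM regressing azimuth from resampled sensor windows.
import numpy as np
import tensorflow as tf

WINDOW, SENSORS = 50, 9    # assumed: 3-axis accel + gyro + magnetometer, EKF-resampled

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, SENSORS)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64),                       # four recurrent layers, per the abstract
    tf.keras.layers.Dense(2, activation="tanh"),    # (sin, cos) of azimuth avoids the 0/360 wrap
])
model.compile(optimizer="adam", loss="mse")

def azimuth_from_output(pred):
    """Convert a (sin, cos) prediction back to degrees in [0, 360)."""
    return np.degrees(np.arctan2(pred[..., 0], pred[..., 1])) % 360.0
```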

An Improved CNN-LSTM Hybrid Model for Predicting UAV Flight State (무인항공기 비행 상태 예측을 위한 개선된 CNN-LSTM 혼합모델)

  • Hyun Woo Seo;Eun Ju Choi;Byoung Soo Kim;Yong Ho Moon
    • Journal of Aerospace System Engineering / v.18 no.3 / pp.48-55 / 2024
  • In recent years, as the commercialization of unmanned aerial vehicles (UAVs) has been actively promoted, much attention has been focused on developing technology to ensure the safety of UAVs. In general, a UAV can enter an uncontrollable state caused by sudden maneuvers, disturbances, or pilot error. To prevent entering an uncontrolled situation, it is essential to predict the flight state of the UAV. In this paper, we propose a flight state prediction technique based on an improved CNN-LSTM hybrid model to enhance flight state prediction performance. Simulation results show that the proposed technique offers better state prediction performance than the existing technique and can operate in real time in an on-board environment.
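
The sketch below shows one common way to build a CNN-LSTM hybrid for this kind of task: 1-D convolutions extract short-term patterns from flight-data channels and an LSTM models their evolution to classify the upcoming flight state. The channel count, window length, and state classes are assumptions; the paper's improved architecture is not reproduced here.

```python
# Generic CNN-LSTM hybrid sketch for flight-state classification.
import tensorflow as tf

WINDOW, CHANNELS, N_STATES = 100, 12, 3   # assumed window, sensor channels, state classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(32, 5, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(32, 5, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_STATES, activation="softmax"),  # e.g. nominal / marginal / uncontrollable
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```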

Development of new artificial neural network optimizer to improve water quality index prediction performance (수질 지수 예측성능 향상을 위한 새로운 인공신경망 옵티마이저의 개발)

  • Ryu, Yong Min;Kim, Young Nam;Lee, Dae Won;Lee, Eui Hoon
    • Journal of Korea Water Resources Association / v.57 no.2 / pp.73-85 / 2024
  • Predicting the water quality of rivers and reservoirs is necessary for the management of water resources. Artificial Neural Networks (ANNs) have been used in many studies to predict water quality with high accuracy. Previous studies have used Gradient Descent (GD)-based optimizers, the component of an ANN that searches for its parameters. However, GD-based optimizers have the disadvantages of possible convergence to local optima and the absence of a structure for storing and comparing solutions. This study developed improved optimizers to overcome these disadvantages. The proposed optimizers combine adaptive moments (Adam) and Nesterov-accelerated adaptive moments (Nadam), which have low learning errors among GD-based optimizers, with Harmony Search (HS) or Novel Self-adaptive Harmony Search (NSHS). To evaluate the performance of Long Short-Term Memory (LSTM) networks using the improved optimizers, water quality data from the Dasan water quality monitoring station were used for training and prediction. Comparing the learning results, the Mean Squared Error (MSE) of the LSTM using Nadam combined with NSHS (NadamNSHS) was the lowest at 0.002921. In addition, the prediction rankings according to MSE and R2 for the four water quality indices were compared for each optimizer. Comparing the average ranking of each optimizer, the LSTM using NadamNSHS was confirmed to be the highest at 2.25.
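
To make the contrast with gradient descent concrete, the toy loop below implements plain harmony search: a memory of candidate solutions is kept, new candidates are composed from it, and the worst member is replaced whenever a better candidate appears. This is only the HS ingredient; the paper's NadamNSHS additionally couples such a search with Adam/Nadam updates, which are omitted here.

```python
# Toy harmony search: solution memory + replacement of the worst candidate.
import numpy as np

def harmony_search(loss, dim, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    memory = rng.uniform(-1, 1, size=(hms, dim))          # harmony memory of candidate vectors
    scores = np.array([loss(h) for h in memory])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                        # pick a component from memory...
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                     # ...with optional pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1)
            else:                                          # or draw a fresh random value
                new[j] = rng.uniform(-1, 1)
        s = loss(new)
        worst = np.argmax(scores)
        if s < scores[worst]:                              # keep only improvements
            memory[worst], scores[worst] = new, s
    return memory[np.argmin(scores)], scores.min()

best, best_loss = harmony_search(lambda w: np.sum(w ** 2), dim=5)
print(best_loss)
```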

Performance Evaluation of Recurrent Neural Network Algorithms for Recommendation System in E-commerce (전자상거래 추천시스템을 위한 순환신경망 알고리즘들의 성능평가)

  • Seo, Jihye;Yong, Hwan-Seung
    • KIISE Transactions on Computing Practices / v.23 no.7 / pp.440-445 / 2017
  • With the advance of e-commerce systems, the number of people shopping online and the number of products have increased significantly, making accurate recommendation systems increasingly important. A recurrent neural network is a deep-learning algorithm that utilizes sequential information in training. In this paper, we evaluate the application of recurrent neural networks to recommendation systems. We evaluated three recurrent algorithms (RNN, LSTM, and GRU) and three commonly used optimization algorithms (Adagrad, RMSProp, and Adam). In the experiments, we used the TensorFlow open-source library produced by Google and e-commerce session data from the RecSys Challenge 2015. The results obtained with the optimal hyperparameters found in this study are compared with those of the RecSys Challenge 2015 participants.
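
A minimal sketch, with made-up catalogue and session sizes, of one common session-based formulation (next-item prediction): padded item-click sequences feed a recurrent layer that predicts the next item, and swapping the GRU for LSTM or SimpleRNN (and the optimizer for Adagrad or RMSProp) reproduces the comparison axes.

```python
# Session-based next-item prediction sketch with a recurrent layer.
import tensorflow as tf

N_ITEMS, MAX_LEN = 10000, 19    # assumed catalogue size and max clicks per session

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(N_ITEMS, 64, mask_zero=True),   # id 0 reserved for padding
    tf.keras.layers.GRU(100),                                  # or LSTM / SimpleRNN
    tf.keras.layers.Dense(N_ITEMS, activation="softmax"),      # next-item distribution
])
model.compile(optimizer=tf.keras.optimizers.Adam(),            # or Adagrad / RMSprop
              loss="sparse_categorical_crossentropy")
```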

LSTM Language Model Based Korean Sentence Generation (LSTM 언어모델 기반 한국어 문장 생성)

  • Kim, Yang-hoon;Hwang, Yong-keun;Kang, Tae-gwan;Jung, Kyo-min
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.5 / pp.592-601 / 2016
  • The recurrent neural network (RNN) is a deep learning model suited to sequential or variable-length data. The Long Short-Term Memory (LSTM) network mitigates the vanishing gradient problem of RNNs, so an LSTM can maintain long-term dependencies among the constituents of a given input sequence. In this paper, we propose an LSTM-based language model that predicts the words following a given incomplete sentence in order to generate a complete sentence. To evaluate our method, we trained the model on multiple Korean corpora and then generated the missing parts of incomplete Korean sentences. The results show that our language model was able to generate fluent Korean sentences, and that the word-based model generated better sentences than the other settings.
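
The sketch below shows, at word level and with a toy vocabulary rather than the paper's Korean corpora, how such a language model completes a sentence: an LSTM estimates the next-word distribution and the sentence is extended by repeated sampling.

```python
# Word-level LSTM language model sketch with sampling-based sentence completion.
import numpy as np
import tensorflow as tf

VOCAB, MAX_LEN = 5000, 20   # assumed vocabulary size and context length

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(VOCAB, 128, mask_zero=True),   # id 0 reserved for padding
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),      # P(next word | context)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

def complete(prefix_ids, n_words=10):
    """Extend an incomplete sentence (list of word ids) by sampling the model."""
    ids = list(prefix_ids)
    for _ in range(n_words):
        ctx = np.array([ids[-MAX_LEN:] + [0] * max(0, MAX_LEN - len(ids))])
        probs = model.predict(ctx, verbose=0)[0]
        ids.append(int(np.random.choice(VOCAB, p=probs / probs.sum())))
    return ids
```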

A statistical journey to DNN, the second trip: Architecture of RNN and image classification (심층신경망으로 가는 통계 여행, 두 번째 여행: RNN의 구조와 이미지 분류)

  • Hee Ju Kim;Yu Jin Kim;Kisuk Jang;Yoon Dong Lee
    • The Korean Journal of Applied Statistics / v.37 no.5 / pp.553-565 / 2024
  • RNNs are models that play a pivotal role in understanding various forms of DNNs. They have evolved into Seq2Seq models and subsequently into Transformers, leading to the development of large language models (LLMs) that are currently the focus of significant interest. Nonetheless, understanding the operation of RNNs is not an easy task. In particular, the core models of RNNs, LSTM and GRU, are challenging to comprehend due to their structural complexity. This paper explores ways to understand the operation of LSTM and GRU. Additionally, to demonstrate specific use cases of LSTM and GRU, we applied them to the problem of handwritten digit classification using the MNIST dataset. We utilized a method of segmenting each image into multiple patches and applied bidirectional LSTM and bidirectional GRU. The results were then compared with those of CNN.
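
As a concrete version of the patch-as-sequence idea, the sketch below reads each 28x28 MNIST image as a sequence of 28 row patches of 28 pixels and classifies it with a bidirectional LSTM (a GRU can be swapped in for the comparison). The row-wise patching is the simplest possible choice and not necessarily the segmentation used in the paper.

```python
# Bidirectional LSTM over MNIST rows treated as a 28-step sequence.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0    # (N, 28, 28): 28 steps x 28 features

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),   # or tf.keras.layers.GRU(64)
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128,
          validation_data=(x_test, y_test))
```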