• Title/Summary/Keyword: Short-Term Memory

A deep learning method for the automatic modulation recognition of received radio signals (수신된 전파신호의 자동 변조 인식을 위한 딥러닝 방법론)

  • Kim, Hanjin;Kim, Hyeockjin;Je, Junho;Kim, Kyungsup
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.10
    • /
    • pp.1275-1281
    • /
    • 2019
  • The automatic modulation recognition of a radio signal is a major task of an intelligent receiver, with various civilian and military applications. In this paper, we propose a method to recognize the modulation of radio signals in wireless communication based on a deep neural network. We classify the modulation pattern of a radio signal using an LSTM model, which can capture long-term patterns in the sequential data fed to the network. The amplitude and phase of the modulated signal, the in-phase carrier, and the quadrature-phase carrier are used as input data to the LSTM model. To verify the performance of the proposed learning method, we use a large dataset for training and testing, covering ten types of modulation signals under various signal-to-noise ratios.
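
The following is a minimal sketch of the kind of LSTM classifier this abstract describes: a sequence of per-timestep features (amplitude, phase, I, Q) mapped to one of ten modulation classes. The sequence length, layer sizes, and synthetic data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

TIMESTEPS, FEATURES, NUM_CLASSES = 128, 4, 10  # amplitude, phase, I, Q per time step

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, FEATURES)),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in for the modulation dataset (random noise, shape checking only).
x = np.random.randn(256, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=256)
model.fit(x, y, epochs=1, batch_size=64, verbose=0)
```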

Long-term discharge simulation using Long Short-Term Memory (LSTM) and Multi-Layer Perceptron (MLP) artificial neural network ensembles: Forecasting for the Oshipcheon watershed in Samcheok (장단기 메모리(LSTM) 및 다층퍼셉트론(MLP) 인공신경망 앙상블을 이용한 장기 강우유출모의: 삼척 오십천 유역을 대상으로)

  • Sung Wook An;Byng Sik Kim
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.206-206
    • /
    • 2023
  • Due to climate change driven by global warming, average precipitation and evaporation are increasing, and rainfall is likely to become more regionally concentrated and intense. Korea, with its small land area and high population density, is strongly affected by climate variability, so watershed-scale water resource predictions and response measures suited to the Korean Peninsula are needed. Water resource management requires long-term records of precipitation, runoff, and evaporation in a watershed; empirical formulas and physical rainfall-runoff models have traditionally been used, and deep learning and other artificial intelligence techniques have recently been adopted to handle nonlinearity and to extend such studies. In this study, daily meteorological observations for 2011-2020 were collected from five stations, ASOS (Donghae, Taebaek) and AWS (Samcheok, Singi, Dogye), and daily discharge at the Oshipcheon estuary for the same period was collected from WAMIS. Station data were combined into watershed meteorological inputs using Thiessen area ratios, and potential evapotranspiration was estimated with the Angstrom and Hargreaves formulas. Three models were trained on, respectively, the full meteorological set (daily precipitation, maximum temperature, maximum instantaneous wind speed, minimum temperature, mean wind speed, mean temperature), daily precipitation plus potential evapotranspiration, and daily precipitation minus potential evapotranspiration. Compared with observed discharge, the model trained on the full meteorological set performed best and was selected as the optimal model, and it was compared against daily, monthly, and annual observed discharge time series. A Multi-Layer Perceptron (MLP) ensemble model was also built with the same training data to assess the applicability of artificial intelligence in the water resources field.
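
As a rough illustration of the two model families compared above, the sketch below builds an LSTM and an MLP that map a window of daily weather inputs to daily discharge. The window length, feature count, layer sizes, and random data are assumptions for demonstration only.

```python
import numpy as np
from tensorflow.keras import layers, models

WINDOW, FEATURES = 30, 6  # 30-day window of the six daily weather variables

def build_lstm():
    # Sequence model: reads the window day by day.
    return models.Sequential([
        layers.Input(shape=(WINDOW, FEATURES)),
        layers.LSTM(64),
        layers.Dense(1),  # daily discharge
    ])

def build_mlp():
    # Feed-forward model: consumes the flattened window at once.
    return models.Sequential([
        layers.Input(shape=(WINDOW * FEATURES,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),
    ])

x_seq = np.random.rand(500, WINDOW, FEATURES).astype("float32")  # synthetic inputs
y = np.random.rand(500, 1).astype("float32")                     # synthetic discharge

for model, x in [(build_lstm(), x_seq), (build_mlp(), x_seq.reshape(500, -1))]:
    model.compile(optimizer="adam", loss="mse")
    model.fit(x, y, epochs=1, verbose=0)
```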

Development of long-term daily high-resolution gridded meteorological data based on deep learning (딥러닝에 기반한 우리나라 장기간 일 단위 고해상도 격자형 기상자료 생산)

  • Yookyung Jeong;Kyuhyun Byu
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.198-198
    • /
    • 2023
  • To efficiently establish water resource plans for a watershed, long-term hydrological modeling is important, and so is analysis of the hydrological impacts of climate change under future climate scenarios. This requires high-quality, high-resolution gridded meteorological data based on observations. In Korea, however, the high-density observation network composed of the synoptic (ASOS) and automatic (AWS) weather stations has only been available since 2000, so long-term gridded meteorological data are lacking. To address this, this study aims to produce the long-term daily high-resolution gridded meteorological data that could have been generated had the present high-density network existed before 2000. Specifically, gridded data for the recent and past periods, split at the year 2000, are modeled with a deep learning algorithm to reconstruct the spatial variability and characteristics of the meteorological fields (daily temperature and precipitation) for the past period. K-PRISM, an interpolation method that quantifies the influence of meteorological factors as a function of elevation in Korea, is applied to produce two gridded datasets, one from the high-density and one from the low-density observation network. The low-density data serve as input and the high-density data as output, and a Long Short-Term Memory (LSTM) model is developed for each grid point, with multi-GPU parallel processing used for cost-efficient computation. Finally, low-density gridded data for 1973-1999 are fed to the models to produce high-density gridded data for that period. Most of the resulting prediction models show NSE values above 0.9. The developed models therefore produce high-quality long-term meteorological data efficiently and accurately, and the output can serve as important data for future analyses of long-term climate trends and variability.
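
A rough sketch of the per-grid-point LSTM mapping described above follows: low-density-network values in, high-density-network values out, with one small model per grid cell. The grid size, sequence length, and feature choices are placeholder assumptions, not the study's configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN, FEATURES = 30, 2  # e.g. daily temperature and precipitation

def build_gridpoint_model():
    return models.Sequential([
        layers.Input(shape=(SEQ_LEN, FEATURES)),
        layers.LSTM(32),
        layers.Dense(FEATURES),  # reconstructed high-density values for this cell
    ])

# One model per grid point, as in the study; here only a 2x2 toy grid with random data.
grid_models = {}
for i in range(2):
    for j in range(2):
        x = np.random.rand(200, SEQ_LEN, FEATURES).astype("float32")  # low-density input
        y = np.random.rand(200, FEATURES).astype("float32")           # high-density target
        m = build_gridpoint_model()
        m.compile(optimizer="adam", loss="mse")
        m.fit(x, y, epochs=1, verbose=0)
        grid_models[(i, j)] = m
```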

Comparison of Fault Diagnosis Accuracy Between XGBoost and Conv1D Using Long-Term Operation Data of Ship Fuel Supply Instruments (선박 연료 공급 기기류의 장시간 운전 데이터의 고장 진단에 있어서 XGBoost 및 Conv1D의 예측 정확성 비교)

  • Hyung-Jin Kim;Kwang-Sik Kim;Se-Yun Hwang;Jang-Hyun Lee
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.06a
    • /
    • pp.110-110
    • /
    • 2022
  • This study was conducted as part of the development of remote fault-diagnosis techniques for autonomous ships. It presents algorithms for diagnosing equipment condition from time-series data measured on engine fuel-system equipment. Vibration time series were measured on an onshore test rig equipped with an engine fuel pump and a purifier, and deep learning and machine learning algorithms capable of anomaly detection, fault classification, and fault prediction were implemented. Faults of each type were induced artificially on the rig to record characteristic vibration signals, which were used for training. Because the measured signals have the property that earlier events influence later ones, learning algorithms that can capture the temporal dependency of fault states embedded in the time series were adopted. To reflect this time dependency, recurrent models (RNN and LSTM) and a Conv1D model based on convolution operations were applied and their prediction accuracies compared. In particular, while RNN/LSTM models are known to excel at high-dimensional sequential tasks such as natural language processing, a Conv1D model that can also capture the temporal dependency of the signals was used for fault prediction. XGBoost was additionally applied in view of the efficiency of machine learning models. Finally, the fault prediction performance of the Conv1D and XGBoost models on the vibration signals of the fuel pump and purifier was compared.
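
The sketch below illustrates, under assumed window lengths and fault classes, the two model families compared in this abstract: a Conv1D classifier on raw vibration windows and an XGBoost classifier on flattened windows. It is not the authors' implementation.

```python
import numpy as np
from tensorflow.keras import layers, models
from xgboost import XGBClassifier

WINDOW, CHANNELS, NUM_FAULTS = 1024, 1, 4  # vibration window, one sensor channel

x = np.random.randn(300, WINDOW, CHANNELS).astype("float32")  # synthetic vibration windows
y = np.random.randint(0, NUM_FAULTS, size=300)                # synthetic fault labels

# Conv1D classifier on the raw window.
conv1d = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(16, kernel_size=9, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, kernel_size=9, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(NUM_FAULTS, activation="softmax"),
])
conv1d.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
conv1d.fit(x, y, epochs=1, verbose=0)

# XGBoost classifier on the flattened window.
xgb = XGBClassifier(n_estimators=50, max_depth=4)
xgb.fit(x.reshape(len(x), -1), y)
```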

Anomaly detection in blade pitch systems of floating wind turbines using LSTM-Autoencoder (LSTM-Autoencoder를 이용한 부유식 풍력터빈 블레이드 피치 시스템의 이상징후 감지)

  • Seongpil Cho
    • Journal of Aerospace System Engineering
    • /
    • v.18 no.4
    • /
    • pp.43-52
    • /
    • 2024
  • This paper presents an anomaly detection system that uses an LSTM-Autoencoder model to identify early-stage anomalies in the blade pitch system of floating wind turbines. The sensor data used in power plant monitoring systems is primarily composed of multivariate time-series data for each component. Comprising two unidirectional LSTM networks, the system skillfully uncovers long-term dependencies hidden within sequential time-series data. The autoencoder mechanism, learning solely from normal state data, effectively classifies abnormal states. Thus, by integrating these two networks, the system can proficiently detect anomalies. To confirm the effectiveness of the proposed framework, a real multivariate time-series dataset collected from a wind turbine model was employed. The LSTM-autoencoder model showed robust performance, achieving high classification accuracy.
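
A minimal sketch of an LSTM-autoencoder anomaly detector of the kind described here is given below: it is trained only on (synthetic) normal-state data and flags windows whose reconstruction error exceeds a threshold. The dimensions and the thresholding rule are assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN, FEATURES = 60, 8  # multivariate sensor window

autoencoder = models.Sequential([
    layers.Input(shape=(SEQ_LEN, FEATURES)),
    layers.LSTM(32),                          # encoder: compress the window
    layers.RepeatVector(SEQ_LEN),             # repeat the code for each time step
    layers.LSTM(32, return_sequences=True),   # decoder: unfold back into a sequence
    layers.TimeDistributed(layers.Dense(FEATURES)),
])
autoencoder.compile(optimizer="adam", loss="mse")

normal = np.random.randn(500, SEQ_LEN, FEATURES).astype("float32")  # normal-state data only
autoencoder.fit(normal, normal, epochs=1, verbose=0)

# Anomaly score: per-window reconstruction error, thresholded on the normal data.
recon = autoencoder.predict(normal, verbose=0)
errors = np.mean((normal - recon) ** 2, axis=(1, 2))
threshold = errors.mean() + 3 * errors.std()
is_anomaly = errors > threshold
```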

Prediction of multipurpose dam inflow using deep learning (딥러닝을 활용한 다목적댐 유입량 예측)

  • Mok, Ji-Yoon;Choi, Ji-Hyeok;Moon, Young-Il
    • Journal of Korea Water Resources Association
    • /
    • v.53 no.2
    • /
    • pp.97-105
    • /
    • 2020
  • Recently, artificial neural networks have received attention as a data prediction method. Among them, the Long Short-Term Memory (LSTM) model, which is specialized for time-series prediction, was used here to predict hydrological time series. In this study, an LSTM model was built with TensorFlow, the open-source deep learning library provided by Google, to predict inflows to multipurpose dams. We predicted the inflow of the Yongdam Multipurpose Dam, located upstream on the Geumgang. Hourly inflow data for Yongdam Dam from 2006 to 2018, provided by WAMIS, were used as the analysis data. Predictive analyses were performed under various settings to compare the prediction accuracy obtained with four learning parameters of the LSTM model. Root mean square error (RMSE), mean absolute error (MAE), and volume error (VE) were calculated, and accuracy was evaluated by comparing the predicted and observed inflows. All models showed lower accuracy at high inflow rates, so hourly precipitation data for Yongdam Dam (2006~2018) were added as input variables to address this problem. When rainfall and inflow data were used together, the prediction accuracy for high flow rates improved.
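
The sketch below shows, with an assumed window length and network size, an hourly inflow forecaster of the kind the abstract describes: an LSTM over past inflow and rainfall, scored with RMSE and MAE on synthetic data.

```python
import numpy as np
from tensorflow.keras import layers, models

WINDOW, FEATURES = 72, 2  # past 72 hours of inflow and rainfall

model = models.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.LSTM(64),
    layers.Dense(1),  # next-hour inflow
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(1000, WINDOW, FEATURES).astype("float32")  # synthetic inputs
y = np.random.rand(1000, 1).astype("float32")                 # synthetic inflow
model.fit(x, y, epochs=1, verbose=0)

# Evaluate with the same error measures as the abstract (RMSE, MAE).
pred = model.predict(x, verbose=0)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
mae = float(np.mean(np.abs(pred - y)))
print(f"RMSE={rmse:.3f}, MAE={mae:.3f}")
```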

Development of Animation Materials for a Unit related to 'Electrochemical Cells' in Middle School and a Study of Their Educational Effects (중학교 화학전지에 관련된 동영상교수 자료의 개발 및 교육적 효과에 관한 연구)

  • Baek, Seong Hui;Kim, Jin Gyu
    • Journal of the Korean Chemical Society
    • /
    • v.46 no.5
    • /
    • pp.456-465
    • /
    • 2002
  • The purpose of this study was to investigate the educational effects of animation materials developed from a macroscopic particle-movement point of view. The 11 animations developed by the researchers showed the movements of molecules, ions, and electrons, and were intended for teachers to use when teaching the 'electrochemical cell' unit. The subjects were 151 ninth-grade students, divided into experimental and control groups and taught for 16 hours. To characterize each student before instruction, a short version of the GALT (Group Assessment of Logical Thinking) and a pretest of conceptions were administered. After instruction, the students took three tests: a posttest of conceptions, a science-related attitude test, and a cognition test. Four months later, the students took the conception posttest again to assess the long-term memory effect. The experimental group using the developed animation materials scored significantly higher on conceptual understanding than the control group, and also scored significantly higher on the long-term memory test and the attitude test. These results indicate that animation materials showing macroscopic particle movement help students understand scientific concepts and raise their interest.

Stock Prediction Model based on Bidirectional LSTM Recurrent Neural Network (양방향 LSTM 순환신경망 기반 주가예측모델)

  • Joo, Il-Taeck;Choi, Seung-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.11 no.2
    • /
    • pp.204-208
    • /
    • 2018
  • In this paper, we propose and evaluate a time-series deep learning model for learning the fluctuation patterns of stock prices. Recurrent neural networks, which can store previous information in the hidden layer, are suitable for stock price prediction, which involves time-series data. To maintain long-term dependencies by mitigating the vanishing-gradient problem of recurrent networks, we use LSTM cells, which carry a small memory inside the recurrent network. Furthermore, we propose a stock price prediction model based on a bidirectional LSTM recurrent neural network, in which a hidden layer processing the data in the reverse direction is added, to overcome the recurrent network's tendency to learn only from the immediately preceding pattern. In the experiments, TensorFlow was used to train the proposed model on stock price and trading volume inputs. To evaluate performance, the root mean square error between the real and predicted stock prices was computed. As a result, the model using the bidirectional LSTM recurrent neural network showed improved prediction accuracy compared with a unidirectional LSTM recurrent neural network.
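
A hedged sketch of the bidirectional-LSTM price model described above follows, with closing price and trading volume as inputs; the window length, layer size, and synthetic data are assumptions rather than the paper's settings.

```python
import numpy as np
from tensorflow.keras import layers, models

WINDOW, FEATURES = 20, 2  # price and volume over the past 20 trading days

model = models.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.Bidirectional(layers.LSTM(64)),  # forward and backward passes over the window
    layers.Dense(1),                        # next-day price
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(500, WINDOW, FEATURES).astype("float32")  # synthetic price/volume windows
y = np.random.rand(500, 1).astype("float32")                 # synthetic next-day prices
model.fit(x, y, epochs=1, verbose=0)

# Root mean square error between predicted and actual prices.
rmse = float(np.sqrt(np.mean((model.predict(x, verbose=0) - y) ** 2)))
print(f"RMSE={rmse:.3f}")
```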

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning has developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used for neural language modeling (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect the dependencies between objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors during decomposition (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that makes up Korean text. We constructed language models using three or four LSTM layers. Each model was trained with the stochastic gradient algorithm and with more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted with Old Testament texts using the deep learning package Keras with the Theano backend. After pre-processing, the dataset contained 74 unique characters including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters and an output of the following (21st) character. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were run on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the time taken to train each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, but its validation loss and perplexity were not significantly better and were even worse under some conditions. On the other hand, when the automatically generated sentences were compared, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences between the models in the completeness of the generated sentences, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and verb conjugation was nearly perfect grammatically. The results of this study are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which underlie artificial intelligence systems.
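
The sketch below shows a character-level LSTM language model in the spirit of this abstract: 20 input symbols predicting the 21st, with a 3-layer LSTM stack and perplexity computed as exp(mean cross-entropy). The 74-symbol vocabulary and 20-character context come from the abstract; the embedding layer, layer widths, and random data are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

VOCAB, CONTEXT = 74, 20  # 74 unique symbols; a 20-symbol context predicts the 21st

model = models.Sequential([
    layers.Input(shape=(CONTEXT,)),
    layers.Embedding(VOCAB, 64),          # the paper may use one-hot input instead
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),
    layers.Dense(VOCAB, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.randint(0, VOCAB, size=(1000, CONTEXT))  # synthetic symbol indices
y = np.random.randint(0, VOCAB, size=1000)
model.fit(x, y, epochs=1, verbose=0)

# Perplexity = exp(mean cross-entropy loss); here evaluated on the same toy data.
loss = model.evaluate(x, y, verbose=0)
perplexity = float(np.exp(loss))
```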

The Validity and Reliability of 'Computerized Neurocognitive Function Test' in the Elementary School Child (학령기 정상아동에서 '전산화 신경인지기능검사'의 타당도 및 신뢰도 분석)

  • Lee, Jong-Bum;Kim, Jin-Sung;Seo, Wan-Seok;Shin, Hyoun-Jin;Bai, Dai-Seg;Lee, Hye-Lin
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.11 no.2
    • /
    • pp.97-117
    • /
    • 2003
  • Objective: This study examined the validity and reliability of the Computerized Neurocognitive Function Test in normal elementary school children. Methods: The K-ABC, K-PIC, and Computerized Neurocognitive Function Test were administered to 120 normal children (10 boys and 10 girls in each group) from June 2002 to January 2003. These children had above-average intelligence and passed the exclusion criteria. To verify test-retest reliability, the Computerized Neurocognitive Function Test was administered again four weeks later to 30 randomly selected children. Results: In the correlation analysis for validity, four continuous performance tests matched those of adults. In the memory tests, the results replicated previous research, with a difference between the forward and backward tests in short-term memory. The higher cognitive function tests each consisted of tests with different purposes. Factor analysis of 43 variables from the 12 tests yielded 10 factors, with a total explained variance of 75.5%. The factors were, in order: sustained attention, information processing speed, vigilance, verbal learning, allocation of attention and concept formation, flexibility, concept formation, visual learning, short-term memory, and selective attention. In the correlation with the K-ABC, conducted to prepare interpretive criteria, selectively significant correlations (p<.05 to .001) were found with the K-ABC subscales. In the test-retest reliability analysis, practice effects were found, especially in the higher cognitive function tests. However, the split-half reliability (r=0.548-0.7726, p<.05) and internal consistency (0.628-0.878, p<.05) of each examined group were significantly high. Conclusion: The performance of the Computerized Neurocognitive Function Test in normal children showed different developmental characteristics from those of adults, and basic information for preparing interpretive criteria could be acquired by examining its relationship with a standardized intelligence test that has a neuropsychological background.
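
For reference, the snippet below computes the two reliability statistics cited in this abstract, split-half reliability with the Spearman-Brown correction and internal consistency (Cronbach's alpha), on made-up item scores; it is not the study's data or analysis software.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=(120, 12))  # 120 children x 12 test scores (synthetic)

# Split-half reliability: correlate odd- and even-item totals, then apply
# the Spearman-Brown correction.
odd, even = scores[:, ::2].sum(axis=1), scores[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
split_half = 2 * r_half / (1 + r_half)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
k = scores.shape[1]
alpha = k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                       / scores.sum(axis=1).var(ddof=1))
print(f"split-half={split_half:.3f}, alpha={alpha:.3f}")
```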
