• Title/Summary/Keyword: LSTM 신경망 (LSTM neural networks)


Estimating speech parameters for ultrasonic Doppler signal using LSTM recurrent neural networks (LSTM 순환 신경망을 이용한 초음파 도플러 신호의 음성 패러미터 추정)

  • Joo, Hyeong-Kil;Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea / v.38 no.4 / pp.433-441 / 2019
  • In this paper, a method of estimating speech parameters from ultrasonic Doppler signals reflected from the articulatory muscles using an LSTM (Long Short-Term Memory) RNN (Recurrent Neural Network) is introduced and compared with a method using MLPs (Multi-Layer Perceptrons). The LSTM RNN was used to estimate the Fourier transform coefficients of the speech signal from the ultrasonic Doppler signal. The log energy values of the Mel frequency bands and the Fourier transform coefficients, extracted from the ultrasonic Doppler signal and the speech signal respectively, served as the input and the reference for training the LSTM RNN. The performance of the LSTM RNN and the MLP was evaluated and compared in experiments on test data, with the RMSE (Root Mean Squared Error) as the measure. The RMSE was 0.5810 for the LSTM RNN and 0.7380 for the MLP; this difference of about 0.1570 confirms that the method using the LSTM RNN performs better.
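
As a rough illustration of this kind of frame-by-frame sequence regression, the sketch below maps Mel-band log-energy sequences to Fourier coefficient vectors with an LSTM. All dimensions (26 Mel bands, 64 coefficients, 128 units) and the synthetic data are assumptions for illustration; the abstract does not give the paper's actual network configuration.

```python
import numpy as np
import tensorflow as tf

# Hypothetical dimensions: 26 Mel-band log energies in, 64 Fourier coefficients out.
T, N_MEL, N_FFT = 100, 26, 64

model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, N_MEL)),
    tf.keras.layers.LSTM(128, return_sequences=True),  # one output frame per input frame
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(N_FFT)),
])
model.compile(optimizer="adam", loss="mse")  # MSE during training; RMSE is its square root

# Synthetic stand-ins for Doppler features (input) and speech spectra (reference).
x = np.random.randn(32, T, N_MEL).astype("float32")
y = np.random.randn(32, T, N_FFT).astype("float32")
model.fit(x, y, epochs=2, verbose=0)

rmse = np.sqrt(model.evaluate(x, y, verbose=0))  # the paper scores models by RMSE
print(rmse)
```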

Comparing the Performance of Artificial Neural Networks and Long Short-Term Memory Networks for Rainfall-runoff Analysis (인공신경망과 장단기메모리 모형의 유출량 모의 성능 분석)

  • Kim, JiHye;Kang, Moon Seong;Kim, Seok Hyeon
    • Proceedings of the Korea Water Resources Association Conference / 2019.05a / pp.320-320 / 2019
  • Accurately analyzing the hydrological data of a watershed is essential for operating hydraulic structures efficiently. Artificial Neural Network (ANN) models can capture nonlinear relationships between input and output data and have been applied widely in hydrology, including rainfall-runoff analysis. Recurrent Neural Networks (RNNs), which adapt the ANN to sequential data, and Long Short-Term Memory networks (LSTM), which resolve the 'long-term dependency problem' of RNNs, were subsequently proposed. LSTM, one of the deep learning techniques attracting recent attention, is expected to perform well on time-series data such as hydrological records, and its applicability in hydrology needs to be evaluated. In this study, runoff was simulated with an ANN model and an LSTM model to compare their performance and examine the future applicability of LSTM. The models' input and output data were built from water level records at the Naju gauging station and rainfall records from an adjacent weather station, and hourly runoff was simulated for rainfall events. Both models simulated the runoff one hour ahead very well, but as the lead time grew, the accuracy of the LSTM was maintained while that of the ANN gradually deteriorated. Future work that additionally accounts for inflow and outflow through the various hydraulic structures in the watershed could further extend the applicability of the LSTM model.
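
A minimal side-by-side sketch of the two model families being compared, assuming hypothetical shapes (24-hour rainfall windows in, runoff at the target lead time out) and random data standing in for the Naju records:

```python
import numpy as np
import tensorflow as tf

WINDOW = 24  # hypothetical: 24 hours of rainfall history per sample
x = np.random.rand(200, WINDOW, 1).astype("float32")  # rainfall sequences
y = np.random.rand(200, 1).astype("float32")          # runoff at the target lead time

# Plain ANN: the window is flattened, so temporal order is not modeled explicitly.
ann = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])

# LSTM: walks through the window hour by hour, carrying state between steps.
lstm = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])

for m in (ann, lstm):
    m.compile(optimizer="adam", loss="mse")
    m.fit(x, y, epochs=2, verbose=0)
```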


Large-Scale Text Classification with Deep Neural Networks (깊은 신경망 기반 대용량 텍스트 데이터 분류 기술)

  • Jo, Hwiyeol;Kim, Jin-Hwa;Kim, Kyung-Min;Chang, Jeong-Ho;Eom, Jae-Hong;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.23 no.5 / pp.322-327 / 2017
  • The classification problem in the field of Natural Language Processing has been studied for a long time. Continuing our previous research, which classified large-scale text using Convolutional Neural Networks (CNN), we implemented Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). The experimental results revealed that the classification algorithms ranked, in increasing order of performance, Multinomial Naïve Bayesian Classifier < Support Vector Machine (SVM) < LSTM < CNN < GRU. The results can be interpreted as follows. First, the CNN outperformed the LSTM, so the text classification problem may be related more to feature extraction than to natural language understanding. Second, judging from the results, the GRU extracted features better than the LSTM. Finally, the fact that the GRU also outperformed the CNN implies that text classification algorithms should consider both feature extraction and sequential information. We present the results of fine-tuning deep neural networks to give future researchers some intuition about natural language processing.
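
As a sketch of the recurrent variants compared here, the snippet below builds a GRU text classifier in tf.keras; the vocabulary size, sequence length, and layer widths are illustrative assumptions, and swapping the GRU layer for an LSTM or a Conv1D stack reproduces the other configurations in spirit.

```python
import numpy as np
import tensorflow as tf

VOCAB, MAXLEN, CLASSES = 20000, 200, 10  # illustrative sizes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAXLEN,)),
    tf.keras.layers.Embedding(VOCAB, 128),
    tf.keras.layers.GRU(128),  # swap for LSTM(128), or Conv1D + pooling, to compare
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random token ids and labels stand in for the large-scale text corpus.
x = np.random.randint(0, VOCAB, size=(64, MAXLEN))
y = np.random.randint(0, CLASSES, size=(64,))
model.fit(x, y, epochs=1, verbose=0)
```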

Prediction of the Stress-Strain Curve of Materials under Uniaxial Compression by Using LSTM Recurrent Neural Network (LSTM 순환 신경망을 이용한 재료의 단축하중 하에서의 응력-변형률 곡선 예측 연구)

  • Byun, Hoon;Song, Jae-Joon
    • Tunnel and Underground Space / v.28 no.3 / pp.277-291 / 2018
  • LSTM (Long Short-Term Memory), a kind of recurrent neural network, was used to establish a model that predicts the stress-strain curve of a material under uniaxial compression. The model was built from the stress-strain data of uniaxial compression tests on silica-gypsum specimens. After training, the model can predict the behavior of the material up to the failure state from only an early, low-stress portion of the stress-strain curve. Because the LSTM network predicts each value from the previous states of the data and proceeds forward step by step, errors accumulate and the prediction error is higher at higher stress states. Nevertheless, the model generally predicts the stress-strain curve with high accuracy. The accuracy of both the LSTM and a tangential prediction model increased with the length of the input data, while the performance gap between them narrowed as the amount of input data grew. The LSTM model clearly outperformed the tangential prediction when only a little input data was given, which underscores the value of applying the model.
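
A minimal sketch of the step-by-step rollout the abstract describes, assuming a univariate next-value formulation on synthetic data (the paper's actual feature set and network size are not specified in the abstract). Note how each predicted value is fed back in, which is why errors accumulate at higher stress states.

```python
import numpy as np
import tensorflow as tf

SEQ = 10  # hypothetical: predict the next stress value from the last 10
model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Synthetic monotone curve standing in for a measured stress-strain record.
curve = np.sin(np.linspace(0.0, 1.5, 200)).astype("float32")
x = np.stack([curve[i:i + SEQ] for i in range(len(curve) - SEQ)])[..., None]
y = curve[SEQ:, None]
model.fit(x, y, epochs=2, verbose=0)

# Step-by-step rollout from the low-stress early segment: each prediction is
# fed back in, so errors accumulate toward the failure state.
window = curve[:SEQ].tolist()
for _ in range(50):
    nxt = model.predict(np.array(window[-SEQ:])[None, :, None], verbose=0)[0, 0]
    window.append(float(nxt))
```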

Stock Prediction Model based on Bidirectional LSTM Recurrent Neural Network (양방향 LSTM 순환신경망 기반 주가예측모델)

  • Joo, Il-Taeck;Choi, Seung-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.2 / pp.204-208 / 2018
  • In this paper, we propose and evaluate a time-series deep learning model that learns the fluctuation patterns of stock prices. Recurrent neural networks, which can store previous information in the hidden layer, are suitable for predicting stock prices, which are time-series data. To maintain long-term dependencies by solving the vanishing gradient problem of recurrent neural networks, we used the LSTM, which places a small memory cell inside the recurrent unit. Furthermore, we propose a stock price prediction model using a bidirectional LSTM recurrent neural network, in which a hidden layer running in the reverse direction of the data flow is added to overcome the recurrent network's tendency to learn only from the immediately preceding pattern. In the experiments, we used TensorFlow to train the proposed model on stock price and trading volume inputs. Prediction performance was evaluated with the root mean squared error between the real and predicted stock prices. As a result, the model using the bidirectional LSTM achieved better prediction accuracy than the unidirectional LSTM model.
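
A minimal tf.keras sketch of this kind of bidirectional model, with a hypothetical window length and synthetic price/volume data; the `Bidirectional` wrapper adds the reverse-direction hidden layer the abstract describes.

```python
import numpy as np
import tensorflow as tf

SEQ, FEATURES = 30, 2  # hypothetical: 30 days of (price, volume) per sample

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ, FEATURES)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # adds the reverse-direction layer
    tf.keras.layers.Dense(1),                                 # next-day price
])
model.compile(optimizer="adam", loss="mse")

# Synthetic, normalized price/volume windows stand in for real market data.
x = np.random.rand(128, SEQ, FEATURES).astype("float32")
y = np.random.rand(128, 1).astype("float32")
model.fit(x, y, epochs=2, verbose=0)

rmse = np.sqrt(model.evaluate(x, y, verbose=0))  # RMSE against the real price, as in the paper
```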

Urban Flood Prediction using LSTM and SOM (LSTM과 SOM을 적용한 도시지역 침수예측)

  • Lee, Yeonsu;Yu, Jae-Hwan;Kim, Byunghyun;Han, Kun-Yeun
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.325-325 / 2021
  • Deep-learning-based inundation analysis trains an artificial neural network on rainfall data and the corresponding total overflow volume computed by a 1-D EPA-SWMM run. To test the trained network, predictions are made for other rainfall data and checked against the simulation results; if the network reproduces the simulated total overflow well, it is judged to be well trained, and when a new rainfall event occurs, the total overflow can be predicted by the network alone instead of rerunning the 1-D and 2-D analyses every time. Rainfall data serve as the input, but since the rainfall amount alone cannot fully characterize an event, the duration, total rainfall, skewness, and standard deviation are used as additional inputs. The total overflow from the 1-D and 2-D analyses becomes the target for these inputs, so when the network is tested or actually used, the total overflow of a rainfall event with a similar duration, total rainfall, skewness, and standard deviation is predicted from the learned targets. To check how well the network has been trained, inundation maps should be drawn: maps built from the simulated total overflow and from the network-predicted total overflow are compared visually, and if they match, the network is considered well trained, so that for new rainfall the total overflow can be predicted without the 1-D and 2-D simulations.
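
The abstract specifies the surrogate only by its inputs (duration, total rainfall, skewness, standard deviation) and its target (SWMM total overflow), so the sketch below uses a small dense network on synthetic data purely as an illustration; the paper's actual LSTM/SOM configuration is not detailed here, and all numbers are hypothetical.

```python
import numpy as np
import tensorflow as tf

# Hypothetical feature vector per rainfall event:
# [duration, total rainfall, skewness, standard deviation]
x = np.random.rand(300, 4).astype("float32")
# Target: total overflow volume from the 1-D EPA-SWMM run (synthetic here).
y = np.random.rand(300, 1).astype("float32")

surrogate = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted total overflow, replacing a fresh 1-D/2-D run
])
surrogate.compile(optimizer="adam", loss="mse")
surrogate.fit(x, y, epochs=2, verbose=0)

# A new rainfall event (hypothetical numbers): predict overflow without rerunning SWMM.
new_event = np.array([[6.0, 120.0, 0.8, 1.5]], dtype="float32")
print(surrogate.predict(new_event, verbose=0))
```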


Background subtraction using LSTM and spatial recurrent neural network (장단기 기억 신경망과 공간적 순환 신경망을 이용한 배경차분)

  • Choo, Sungkwon;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.13-16 / 2016
  • This paper proposes an algorithm that separates background and foreground in video using recurrent neural networks. A recurrent neural network is a network whose internal loop lets information from previous inputs persist over a sequence of inputs. Among the various recurrent architectures, we used Long Short-Term Memory networks (LSTM) so that the network can also respond to long-term relationships. Since spatial relationships in the video, and not only temporal ones, affect the background/foreground decision, we also applied a spatial recurrent neural network so that the hidden-layer information can propagate spatially. The proposed algorithm shows results comparable to existing algorithms on standard background subtraction videos.
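
As a sketch of the temporal half of this idea only, the snippet below classifies each frame of a single pixel's intensity sequence as background or foreground with an LSTM; the spatial recurrent propagation between neighboring pixels that the paper adds is omitted, and all sizes and data are assumptions.

```python
import numpy as np
import tensorflow as tf

T = 50  # hypothetical sequence length in frames

# Per-pixel temporal model: label each frame of a pixel's intensity history
# as background (0) or foreground (1). The paper additionally propagates
# hidden states spatially between neighboring pixels.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, 1)),
    tf.keras.layers.LSTM(32, return_sequences=True),  # long-term memory of the pixel history
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.rand(256, T, 1).astype("float32")          # pixel intensity sequences
y = (np.random.rand(256, T, 1) > 0.8).astype("float32")  # synthetic fg/bg labels
model.fit(x, y, epochs=1, verbose=0)
```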


LSTM based sequence-to-sequence Model for Korean Automatic Word-spacing (LSTM 기반의 sequence-to-sequence 모델을 이용한 한글 자동 띄어쓰기)

  • Lee, Tae Seok;Kang, Seung Shik
    • Smart Media Journal / v.7 no.4 / pp.17-23 / 2018
  • We propose an LSTM-based RNN model that performs automatic word spacing effectively. For long or noisy sentences, which are known to be difficult for neural network learning, we defined suitable input and decoding data formats and added dropout, a bidirectional multi-layer LSTM, layer normalization, and an attention mechanism to improve performance. Although the Sejong corpus contains some spacing errors, the noise-robust model developed in this study, which avoids overfitting through dropout, trained well and returned meaningful Korean word-spacing results and patterns. The experimental results showed that the LSTM sequence-to-sequence model reaches 0.94 in F1-measure, better than the GRU-CRF deep-learning method.
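
A simplified sketch of the architecture's front end, assuming a syllable-indexed input and reducing the decoding side to a per-syllable space/no-space prediction; the paper's full model adds a proper decoder, attention, and layer normalization on top of the bidirectional LSTM encoder shown here, and all sizes are assumptions.

```python
import numpy as np
import tensorflow as tf

VOCAB, MAXLEN = 2000, 40  # hypothetical syllable vocabulary and sentence length

# Encoder: bidirectional LSTM over the unspaced input syllables, with dropout.
enc_in = tf.keras.Input(shape=(MAXLEN,))
h = tf.keras.layers.Embedding(VOCAB, 64)(enc_in)
h = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True))(h)
h = tf.keras.layers.Dropout(0.3)(h)

# Simplified decoding head: "insert a space after this syllable?" per position.
# The paper's full model uses a real decoder plus attention and layer normalization.
out = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(1, activation="sigmoid"))(h)

model = tf.keras.Model(enc_in, out)
model.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.randint(1, VOCAB, size=(64, MAXLEN))           # syllable ids
y = (np.random.rand(64, MAXLEN, 1) > 0.7).astype("float32")  # space labels
model.fit(x, y, epochs=1, verbose=0)
```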

Automatic Word Spacing of the Korean Sentences by Using End-to-End Deep Neural Network (종단 간 심층 신경망을 이용한 한국어 문장 자동 띄어쓰기)

  • Lee, Hyun Young;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering / v.8 no.11 / pp.441-448 / 2019
  • Previous research on the automatic word spacing of Korean sentences corrected spacing errors with n-gram-based statistical techniques or a morphological analyzer that inserts blanks at word boundaries. In this paper, we propose end-to-end automatic word spacing using a deep neural network. The automatic word-spacing problem can be defined as a tag-classification problem at the syllable level rather than the word level. For a contextual representation between syllables, a Bi-LSTM encodes the dependency relationships between syllables into a fixed-length vector in a continuous vector space using forward and backward LSTM cells. To space a Korean sentence automatically, each fixed-length contextual vector from the Bi-LSTM is classified into an auto-spacing tag (B or I), and a blank is inserted in front of each B tag. For the tag classification, we composed three types of classification networks: a feedforward neural network, a neural network language model, and a linear-chain CRF. Comparing the automatic word-spacing performance across the three classification networks, the linear-chain CRF showed the best performance. We used the KCC150 corpus as the training and testing data.
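
A minimal sketch of the syllable-level tagging formulation with the simplest of the three classifier heads (a feedforward layer); the vocabulary size, sequence length, and B/I index mapping are assumptions, and the paper's best-performing linear-chain CRF head is not included.

```python
import numpy as np
import tensorflow as tf

VOCAB, MAXLEN, TAGS = 2000, 40, 2  # hypothetical sizes; the two tags are B and I

# Bi-LSTM encodes each syllable's context; a per-syllable softmax classifies B/I.
# (The paper also evaluates a neural LM head and a linear-chain CRF head.)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAXLEN,)),
    tf.keras.layers.Embedding(VOCAB, 64, mask_zero=True),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(TAGS, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.randint(1, VOCAB, size=(64, MAXLEN))  # syllable ids (0 reserved for padding)
y = np.random.randint(0, TAGS, size=(64, MAXLEN))   # B/I tags
model.fit(x, y, epochs=1, verbose=0)

# Decoding rule from the paper: insert a blank in front of every syllable tagged B.
tags = model.predict(x[:1], verbose=0).argmax(-1)[0]  # hypothetical mapping: 0 = B, 1 = I
```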

Psalm Text Generator Comparison Between English and Korean Using LSTM Blocks in a Recurrent Neural Network (순환 신경망에서 LSTM 블록을 사용한 영어와 한국어의 시편 생성기 비교)

  • Snowberger, Aaron Daniel;Lee, Choong Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.269-271 / 2022
  • In recent years, RNNs with LSTM blocks have been used extensively in machine learning tasks that process sequential data. These networks have proven particularly good at sequential language processing tasks because they predict the next most likely word in a given sequence more accurately than traditional neural networks. This study trained an RNN/LSTM neural network on three different translations of the 150 biblical Psalms, in both English and Korean. The resulting model is fed an input word and a length number, from which it automatically generates a new Psalm of the desired length based on the patterns it recognized during training. The results of training the network on English text and on Korean text are compared and discussed.
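
A minimal sketch of this kind of next-word generator, with a hypothetical vocabulary and greedy decoding on synthetic data standing in for the Psalm corpus:

```python
import numpy as np
import tensorflow as tf

VOCAB, SEQ = 5000, 20  # hypothetical vocabulary size and context length

# Next-word model: given the last SEQ word ids, predict the next word.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ,)),
    tf.keras.layers.Embedding(VOCAB, 64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random windows stand in for word sequences drawn from the Psalm corpus.
x = np.random.randint(0, VOCAB, size=(128, SEQ))
y = np.random.randint(0, VOCAB, size=(128,))
model.fit(x, y, epochs=1, verbose=0)

def generate(seed_ids, length):
    """Greedy decoding: extend the seed one word at a time to the desired length."""
    ids = list(seed_ids)
    for _ in range(length):
        ctx = (ids[-SEQ:] + [0] * SEQ)[:SEQ]  # pad short contexts with id 0
        probs = model.predict(np.array([ctx]), verbose=0)[0]
        ids.append(int(probs.argmax()))
    return ids
```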
