• Title/Summary/Keyword: LSTM Layer


Layer Normalized LSTM CRFs for Korean Semantic Role Labeling (Layer Normalized LSTM CRF를 이용한 한국어 의미역 결정)

  • Park, Kwang-Hyeon;Na, Seung-Hoon
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.163-166
    • /
    • 2017
  • Deep learning training takes longer as models grow more complex. Layer normalization is a method that reduces training time and improves performance by normalizing each layer. In this paper, we propose a bidirectional LSTM CRF model with layer normalization for Korean semantic role labeling. Experimental results show that the layer-normalized bidirectional LSTM CRF model improves performance on argument identification and classification (AIC) for Korean semantic role labeling.

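The architecture described above, a bidirectional LSTM feeding a CRF output layer with layer normalization, can be sketched roughly as follows. This is not the authors' code: dimensions are illustrative, the CRF comes from the third-party pytorch-crf package (an assumption; the paper names no implementation), and nn.LayerNorm is applied to the BiLSTM outputs rather than inside the LSTM cell as in a true layer-normalized LSTM.

```python
# Minimal sketch of a layer-normalized BiLSTM-CRF tagger (illustrative only).
# Requires the pytorch-crf package: pip install pytorch-crf
import torch
import torch.nn as nn
from torchcrf import CRF

class LayerNormBiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # LayerNorm over the concatenated forward/backward hidden states
        self.norm = nn.LayerNorm(2 * hidden_dim)
        self.emissions = nn.Linear(2 * hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags, mask):
        h, _ = self.bilstm(self.embed(tokens))
        e = self.emissions(self.norm(h))
        return -self.crf(e, tags, mask=mask)   # negative log-likelihood

    def decode(self, tokens, mask):
        h, _ = self.bilstm(self.embed(tokens))
        e = self.emissions(self.norm(h))
        return self.crf.decode(e, mask=mask)   # best tag sequence per sentence
```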


LSTM based sequence-to-sequence Model for Korean Automatic Word-spacing (LSTM 기반의 sequence-to-sequence 모델을 이용한 한글 자동 띄어쓰기)

  • Lee, Tae Seok;Kang, Seung Shik
    • Smart Media Journal
    • /
    • v.7 no.4
    • /
    • pp.17-23
    • /
    • 2018
  • We propose an LSTM-based RNN model that effectively performs automatic word spacing. For long or noisy sentences, which are known to be difficult for neural network learning to handle, we defined suitable input and decoding data formats and added dropout, a bidirectional multi-layer LSTM, layer normalization, and an attention mechanism to improve performance. Although the Sejong corpus contains some spacing errors, the noise-robust model developed in this study, which avoids overfitting through dropout, trained well and returned meaningful Korean word-spacing results and patterns. The experiments showed that the LSTM sequence-to-sequence model achieves an F1-measure of 0.94, which is better than the GRU-CRF method.
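
The components named in the abstract (a bidirectional multi-layer LSTM encoder with dropout, attention, and layer normalization in a sequence-to-sequence setup) can be sketched as below. This is a loose approximation, not the paper's model: sizes, the dot-product attention variant, and character-level tokenization are all assumptions, and the encoder-to-decoder state handoff is omitted for brevity.

```python
# Illustrative encoder-decoder for character-level word spacing (assumption-laden sketch).
import torch
import torch.nn as nn

class SpacingSeq2Seq(nn.Module):
    def __init__(self, vocab_size, out_size, embed=64, hidden=128,
                 layers=2, p_drop=0.3):
        super().__init__()
        self.src_embed = nn.Embedding(vocab_size, embed, padding_idx=0)
        self.tgt_embed = nn.Embedding(out_size, embed, padding_idx=0)
        # Bidirectional multi-layer LSTM encoder with dropout, as in the abstract
        self.encoder = nn.LSTM(embed, hidden, num_layers=layers, dropout=p_drop,
                               bidirectional=True, batch_first=True)
        self.decoder = nn.LSTM(embed, 2 * hidden, num_layers=layers,
                               dropout=p_drop, batch_first=True)
        self.norm = nn.LayerNorm(4 * hidden)   # stand-in for layer normalization
        self.out = nn.Linear(4 * hidden, out_size)

    def forward(self, src, tgt):
        # tgt is the shifted target sequence (teacher forcing during training)
        enc, _ = self.encoder(self.src_embed(src))           # (B, S, 2H)
        dec, _ = self.decoder(self.tgt_embed(tgt))           # (B, T, 2H)
        # Dot-product attention: each decoder step attends over encoder states
        scores = torch.bmm(dec, enc.transpose(1, 2))         # (B, T, S)
        context = torch.bmm(torch.softmax(scores, -1), enc)  # (B, T, 2H)
        feats = self.norm(torch.cat([dec, context], dim=-1))
        return self.out(feats)                               # logits per step
```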

Estimating speech parameters for ultrasonic Doppler signal using LSTM recurrent neural networks (LSTM 순환 신경망을 이용한 초음파 도플러 신호의 음성 패러미터 추정)

  • Joo, Hyeong-Kil;Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.4
    • /
    • pp.433-441
    • /
    • 2019
  • In this paper, a method of estimating speech parameters from ultrasonic Doppler signals reflected from the articulatory muscles using an LSTM (Long Short-Term Memory) RNN (Recurrent Neural Network) is introduced and compared with a method using MLPs (Multi-Layer Perceptrons). The LSTM RNN estimates the Fourier transform coefficients of the speech signal from the ultrasonic Doppler signal. The log energies of the Mel frequency bands, extracted from the ultrasonic Doppler signal, and the Fourier transform coefficients, extracted from the speech signal, were used as the input and the reference for training the LSTM RNN. The LSTM RNN and the MLP were evaluated and compared on test data, with RMSE (Root Mean Squared Error) as the measure; they achieved RMSEs of 0.5810 and 0.7380, respectively. The difference of about 0.1570 confirms that the LSTM RNN method performs better.
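
The regression setup, Doppler-derived mel-band log energies in, speech Fourier coefficients out, is simple enough to sketch. Feature sizes, layer counts, and hidden width below are assumptions for illustration, not values from the paper.

```python
# Hedged sketch: LSTM regression from ultrasonic Doppler features to speech
# Fourier coefficients, scored by RMSE as in the paper.
import torch
import torch.nn as nn

class DopplerToSpeech(nn.Module):
    def __init__(self, n_mel=40, n_fft_coef=64, hidden=256):  # sizes assumed
        super().__init__()
        self.lstm = nn.LSTM(n_mel, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, n_fft_coef)

    def forward(self, x):                 # x: (batch, frames, n_mel)
        h, _ = self.lstm(x)
        return self.proj(h)               # predicted Fourier coefficients

net = DopplerToSpeech()
x = torch.randn(8, 100, 40)              # mel-band log energies (Doppler side)
y = torch.randn(8, 100, 64)              # reference speech Fourier coefficients
rmse = nn.MSELoss()(net(x), y).sqrt()    # RMSE, the paper's measure
```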

Analysis of Accuracy and Loss Performance According to Hyperparameter in RNN Model (RNN모델에서 하이퍼파라미터 변화에 따른 정확도와 손실 성능 분석)

  • Kim, Joon-Yong;Park, Koo-Rack
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.7
    • /
    • pp.31-38
    • /
    • 2021
  • In this paper, to optimize an RNN model used for sentiment analysis, the correlation between hyperparameters and model behavior was studied by observing the trends of loss and accuracy under hyperparameter tuning. After configuring a hidden layer with an LSTM and an embedding layer, which are well suited to processing sequential data, the loss and accuracy of each model were measured while tuning the LSTM units, the batch size, and the embedding size. The measured loss was 41.9% and the accuracy 11.4%, and the optimized model showed a consistently stable trend, confirming that hyperparameter tuning has a profound effect on the model. Among the three hyperparameters, the choice of embedding size had the greatest influence on the model. As future work, this research will continue toward an algorithm that lets the model find optimal hyperparameters directly.
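
The experiment amounts to a grid search over the three hyperparameters the abstract names. A minimal sketch follows; the candidate value ranges, vocabulary size, and classifier head are assumptions, and the training loop is elided.

```python
# Sketch of a grid over LSTM units, batch size, and embedding size for an
# LSTM sentiment classifier (values illustrative, not from the paper).
import itertools
import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    def __init__(self, vocab_size, embed_size, units):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size, padding_idx=0)
        self.lstm = nn.LSTM(embed_size, units, batch_first=True)
        self.head = nn.Linear(units, 1)     # binary sentiment logit

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h[:, -1])          # last time step summarizes the text

grid = itertools.product([32, 64, 128],     # LSTM units
                         [32, 64, 128],     # batch size
                         [50, 100, 200])    # embedding size
for units, batch_size, embed_size in grid:
    model = SentimentLSTM(vocab_size=20000, embed_size=embed_size, units=units)
    # ... train with a DataLoader(batch_size=batch_size), then record the
    # validation loss and accuracy for this configuration ...
```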

Performance Comparison Analysis on Named Entity Recognition system with Bi-LSTM based Multi-task Learning (다중작업학습 기법을 적용한 Bi-LSTM 개체명 인식 시스템 성능 비교 분석)

  • Kim, GyeongMin;Han, Seunggnyu;Oh, Dongsuk;Lim, HeuiSeok
    • Journal of Digital Convergence
    • /
    • v.17 no.12
    • /
    • pp.243-248
    • /
    • 2019
  • Multi-Task Learning (MTL) is a training method in which a single neural network is trained on multiple tasks that influence each other. In this paper, we compare the performance of an MTL named entity recognition (NER) model trained on a Korean traditional culture corpus against other NER models. During training, the task-specific Bi-LSTM layers for part-of-speech (POS) tagging and NER are propagated from a shared Bi-LSTM layer to obtain a joint loss. As a result, the MTL-based Bi-LSTM model shows a 1.1%~4.6% performance improvement over single-task Bi-LSTM models.
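
The joint-loss idea can be sketched as a shared Bi-LSTM encoder feeding two task heads whose losses are summed. This simplifies whatever exact layer topology the paper uses; sizes are illustrative and tag index 0 is assumed to be padding.

```python
# Hedged sketch of multi-task Bi-LSTM training: NER and POS heads share one
# encoder, and both losses are summed into a joint loss.
import torch
import torch.nn as nn

class MultiTaskBiLSTM(nn.Module):
    def __init__(self, vocab_size, n_ner_tags, n_pos_tags,
                 embed=100, hidden=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed, padding_idx=0)
        self.shared = nn.LSTM(embed, hidden, batch_first=True,
                              bidirectional=True)
        self.ner_head = nn.Linear(2 * hidden, n_ner_tags)
        self.pos_head = nn.Linear(2 * hidden, n_pos_tags)

    def forward(self, tokens):
        h, _ = self.shared(self.embed(tokens))
        return self.ner_head(h), self.pos_head(h)

def joint_loss(model, tokens, ner_tags, pos_tags):
    ner_logits, pos_logits = model(tokens)
    ce = nn.CrossEntropyLoss(ignore_index=0)   # 0 assumed to be the pad tag
    # Gradients from both tasks flow back into the shared Bi-LSTM layer
    return (ce(ner_logits.flatten(0, 1), ner_tags.flatten())
            + ce(pos_logits.flatten(0, 1), pos_tags.flatten()))
```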

Application of Informer for time-series NO2 prediction

  • Hye Yeon Sin;Minchul Kang;Joonsung Kang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.7
    • /
    • pp.11-18
    • /
    • 2023
  • In this paper, we evaluate deep learning time series forecasting models. Recent studies show that these models outperform traditional prediction models such as ARIMA. Among them are recurrent neural networks, which store previous information in the hidden layer. To mitigate the vanishing gradient problem, the LSTM adds a small memory cell inside the recurrent network, and the BI-LSTM adds a hidden layer running in the reverse direction of the data flow. We compared the performance of Informer with other models (LSTM, BI-LSTM, and Transformer) on real nitrogen dioxide (NO2) data. To evaluate the accuracy of each method, the root mean square error and the mean absolute error between the real and predicted values were computed. Consequently, Informer achieved better prediction accuracy than the other methods.
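
The LSTM and BI-LSTM baselines in such comparisons are typically sliding-window next-step forecasters. A minimal sketch, with window length, hidden size, and a univariate input all assumed rather than taken from the paper:

```python
# Illustrative one-step NO2 forecasters (the LSTM / BI-LSTM baselines).
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=64, bidirectional=False):
        super().__init__()
        self.rnn = nn.LSTM(1, hidden, batch_first=True,
                           bidirectional=bidirectional)
        self.out = nn.Linear(hidden * (2 if bidirectional else 1), 1)

    def forward(self, x):            # x: (batch, window, 1) past NO2 readings
        h, _ = self.rnn(x)
        return self.out(h[:, -1])    # next-step NO2 prediction

lstm = LSTMForecaster()                      # unidirectional baseline
bilstm = LSTMForecaster(bidirectional=True)  # adds the reversed-direction layer

x = torch.randn(16, 48, 1)           # 16 windows of 48 hourly values (assumed)
pred = lstm(x)                       # (16, 1)
# Accuracy would then be compared as in the paper: RMSE and MAE on held-out data
```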

Research on Chinese Microblog Sentiment Classification Based on TextCNN-BiLSTM Model

  • Haiqin Tang;Ruirui Zhang
    • Journal of Information Processing Systems
    • /
    • v.19 no.6
    • /
    • pp.842-857
    • /
    • 2023
  • Currently, most sentiment classification models on microblogging platforms analyze sentence parts of speech and emoticons without comprehending users' emotional inclinations or grasping moral nuances. This study proposes a hybrid sentiment analysis model. Given the distinct nature of microblog comments, the model employs a combined stop-word list and word2vec for word vectorization. To mitigate local information loss, a TextCNN model without pooling layers is employed for local feature extraction, while a BiLSTM is utilized for contextual feature extraction. Microblog comment sentiments are then categorized by a classification layer. Given the binary classification task at the output layer and the numerous hidden layers within the BiLSTM, the Tanh activation function is adopted. Experimental findings demonstrate that the enhanced TextCNN-BiLSTM model attains a precision of 94.75%, a 1.21%, 1.25%, and 1.25% improvement in precision, recall, and F1, respectively, over the standalone TextCNN model, and a 0.78%, 0.9%, and 0.9% improvement over the standalone BiLSTM.
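
The hybrid pipeline, convolutions without pooling for local features, a BiLSTM for context, then a Tanh classification layer, can be sketched as follows. Kernel sizes, filter counts, and the use of the final time step for classification are assumptions; the word2vec embedding step is represented by a plain embedding layer.

```python
# Hedged sketch of a TextCNN-BiLSTM hybrid sentiment classifier.
import torch
import torch.nn as nn

class TextCNNBiLSTM(nn.Module):
    def __init__(self, vocab_size, embed=300, n_filters=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed, padding_idx=0)
        # TextCNN branch without pooling, so the sequence length is preserved
        self.convs = nn.ModuleList([
            nn.Conv1d(embed, n_filters, k, padding=k // 2) for k in (3, 5)
        ])
        self.bilstm = nn.LSTM(2 * n_filters, hidden, batch_first=True,
                              bidirectional=True)
        self.classify = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.Tanh(),  # Tanh as in the abstract
            nn.Linear(hidden, 2))                      # binary sentiment classes

    def forward(self, tokens):
        e = self.embed(tokens).transpose(1, 2)              # (B, E, T)
        local = torch.cat([torch.relu(c(e)) for c in self.convs], dim=1)
        ctx, _ = self.bilstm(local.transpose(1, 2))         # (B, T, 2H)
        return self.classify(ctx[:, -1])                    # final-step features
```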

Verification of Transliteration Pairs Using Distance LSTM-CNN with Layer Normalization (Distance LSTM-CNN with Layer Normalization을 이용한 음차 표기 대역 쌍 판별)

  • Lee, Changsu;Cheon, Juryong;Kim, Joogeun;Kim, Taeil;Kang, Inho
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.76-81
    • /
    • 2017
  • Transliteration is the practice of writing a foreign-language term in one's own language based on its pronunciation. As boundaries between countries blur, news articles and other web documents describing terms of foreign origin freely mix the original foreign spelling with Korean transliterations of the same pronunciation. To return good search results, it is therefore important to search not only with the foreign spelling but also with the various transliterations that people actually use. Instead of the conventional approach of generating transliterations with a transliteration model and extracting transliteration pairs, this paper proposes to find transliteration candidates in documents and then verify whether each candidate is a correct transliteration. After comparing and reviewing various deep learning models, we propose the Distance LSTM-CNN model, specialized for verifying transliteration pairs, and show how Layer Normalization is applied to reduce the model's sensitivity to batch size and to speed up convergence during training.

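The exact Distance LSTM-CNN topology is not given in the abstract, so the sketch below is a loose, assumption-laden approximation of the general idea: each side of a candidate pair is encoded by a CNN-plus-LSTM stack, layer normalization stabilizes training against batch-size effects, and a distance between the two encodings drives the pair/no-pair decision.

```python
# Rough siamese verifier in the spirit of the paper (not its actual model).
import torch
import torch.nn as nn

class PairVerifier(nn.Module):
    def __init__(self, vocab_size, embed=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed, padding_idx=0)
        self.cnn = nn.Conv1d(embed, hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.norm = nn.LayerNorm(hidden)   # reduces batch-size sensitivity
        self.out = nn.Linear(1, 2)         # distance -> pair / not-a-pair

    def encode(self, chars):               # chars: (batch, length) char ids
        c = torch.relu(self.cnn(self.embed(chars).transpose(1, 2)))
        h, _ = self.lstm(c.transpose(1, 2))
        return self.norm(h[:, -1])         # fixed-size encoding per string

    def forward(self, foreign, korean):
        d = torch.norm(self.encode(foreign) - self.encode(korean),
                       dim=1, keepdim=True)   # Euclidean distance
        return self.out(d)                    # logits over {match, no match}
```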

Prediction of Dissolved Oxygen in Jindong Bay Using Time Series Analysis (시계열 분석을 이용한 진동만의 용존산소량 예측)

  • Han, Myeong-Soo;Park, Sung-Eun;Choi, Youngjin;Kim, Youngmin;Hwang, Jae-Dong
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.26 no.4
    • /
    • pp.382-391
    • /
    • 2020
  • In this study, we used artificial intelligence algorithms to predict dissolved oxygen in Jindong Bay. The Bidirectional Recurrent Imputation for Time Series (BRITS) deep learning algorithm was used to impute missing values in the observational data, and Auto-Regressive Integrated Moving Average (ARIMA), a widely used time series analysis method, and the Long Short-Term Memory (LSTM) deep learning method were used to predict the dissolved oxygen; the accuracies of ARIMA and LSTM were then compared. BRITS imputed the missing values with high accuracy in the surface layer, but its accuracy was low in the lower layers and unstable in the middle layer due to the experimental conditions. In the middle and bottom layers, the LSTM model showed higher accuracy than the ARIMA model, whereas the ARIMA model performed better in the surface layer.
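
The two forecasters compared here can be set up side by side as below. The ARIMA order, window length, and LSTM sizes are illustrative assumptions, and the BRITS imputation step from the paper is omitted.

```python
# Sketch: ARIMA vs. LSTM next-step prediction of a dissolved-oxygen series.
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA

do_series = np.random.rand(500)            # placeholder dissolved-oxygen series

# ARIMA baseline: fit on the history, forecast the next step
arima_pred = ARIMA(do_series, order=(2, 1, 1)).fit().forecast(steps=1)

# LSTM alternative: learn next-step prediction from sliding windows
class DOForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                   # x: (batch, window, 1)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])

windows = torch.randn(32, 24, 1)            # 24-step windows (assumed length)
lstm_pred = DOForecaster()(windows)         # next-step DO per window
```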