• Title/Abstract/Keywords: Long Short-Term Memory Network

Search results: 326

An Approach for Stock Price Forecast using Long Short Term Memory

  • K.A.Surya Rajeswar;Pon Ramalingam;Sudalaimuthu.T
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 23, No. 4
    • /
    • pp.166-171
    • /
    • 2023
  • Stock price analysis is a growing concern in financial time-series research. The purpose of the study is to analyze price parameters such as date, high, low, and news feeds about the stock exchange price. Long short-term memory (LSTM) is a cutting-edge technique for prediction based on time series, and it performs well on long sequences of data. This paper presents a long short-term memory model used to analyze stock price ranges over 10-day and 20-day exponential moving averages. The proposed approach gives better performance using technical indicators of the stock price, with an accuracy of 82.6% and a cross entropy of 71%.
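The paper feeds 10-day and 20-day exponential moving averages into an LSTM as technical indicators. A minimal sketch of that idea, assuming daily closing prices in a hypothetical "close" column, with illustrative window sizes and hyperparameters not taken from the paper:

```python
import numpy as np
import pandas as pd
import tensorflow as tf

# Assumed input: a DataFrame with daily closing prices (synthetic stand-in).
df = pd.DataFrame({"close": np.cumsum(np.random.randn(500)) + 100})

# Technical indicators: 10-day and 20-day exponential moving averages.
df["ema10"] = df["close"].ewm(span=10, adjust=False).mean()
df["ema20"] = df["close"].ewm(span=20, adjust=False).mean()
df["up"] = (df["close"].shift(-1) > df["close"]).astype(int)  # next-day direction label

# Build sliding windows of the three features for the LSTM.
window = 20
feats = df[["close", "ema10", "ema20"]].to_numpy(dtype="float32")
X = np.stack([feats[i:i + window] for i in range(len(feats) - window)])
y = df["up"].to_numpy()[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 3)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # up/down classification
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```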

Long Short-Term Memory를 이용한 부산항 조위 예측 (Tidal Level Prediction of Busan Port using Long Short-Term Memory)

  • 김해림;전용호;박재형;윤한삼
    • 해양환경안전학회지
    • /
    • Vol. 28, No. 4
    • /
    • pp.469-476
    • /
    • 2022
  • This study developed a recurrent neural network model implemented with Long Short-Term Memory (LSTM) that generates long-term tidal level data for Busan Port from tidal observations. Tidal level data observed at Busan New Port and Tongyeong by the Korea Hydrographic and Oceanographic Agency were used as model inputs to predict the tidal level at Busan Port. The model was trained on one month of data (January 2019), and its accuracy was then evaluated over one year, from February 2019 to January 2020. The model performed best when the tidal time series of Busan New Port and Tongyeong were input together, with a correlation coefficient of 0.997 and a root mean square error of 2.69 m. Based on these results, long-term tidal level data for an arbitrary port can be predicted using a deep learning recurrent neural network model.
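The model above takes the tide series of two stations (Busan New Port and Tongyeong) as a joint input sequence and regresses the Busan Port tide level, evaluated by correlation coefficient and RMSE. A minimal sketch under assumed synthetic data, window length, and hyperparameters:

```python
import numpy as np
import tensorflow as tf

# Assumed hourly tide observations (synthetic stand-ins for the real series).
n = 2000
busan_new = np.sin(np.linspace(0.0, 200.0, n)).astype("float32")    # Busan New Port
tongyeong = np.sin(np.linspace(0.3, 200.3, n)).astype("float32")    # Tongyeong
busan_port = np.sin(np.linspace(0.1, 200.1, n)).astype("float32")   # target: Busan Port

# Stack the two input stations as a 2-feature sequence and cut sliding windows.
window = 72
inputs = np.stack([busan_new, tongyeong], axis=-1)          # shape (n, 2)
X = np.stack([inputs[i:i + window] for i in range(n - window)])
y = busan_port[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 2)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),                               # tide level regression
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

# Evaluate with the metrics reported above: correlation coefficient and RMSE.
pred = model.predict(X, verbose=0).ravel()
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
corr = float(np.corrcoef(pred, y)[0, 1])
print(f"corr={corr:.3f}, rmse={rmse:.3f}")
```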

Comparison of Fall Detection Systems Based on YOLOPose and Long Short-Term Memory

  • Seung Su Jeong;Nam Ho Kim;Yun Seop Yu
    • Journal of information and communication convergence engineering
    • /
    • Vol. 22, No. 2
    • /
    • pp.139-144
    • /
    • 2024
  • In this study, four types of fall detection systems - designed with YOLOPose, principal component analysis (PCA), convolutional neural network (CNN), and long short-term memory (LSTM) architectures - were developed and compared in the detection of everyday falls. The experimental dataset encompassed seven types of activities: walking, lying, jumping, jumping in activities of daily living, falling backward, falling forward, and falling sideways. Keypoints extracted from YOLOPose were entered into the following architectures: RAW-LSTM, PCA-LSTM, RAW-PCA-LSTM, and PCA-CNN-LSTM. For the PCA architectures, the reduced input size stemming from a dimensionality reduction enhanced the operational efficiency in terms of computational time and memory at the cost of decreased accuracy. In contrast, the addition of a CNN resulted in higher complexity and lower accuracy. The RAW-LSTM architecture, which did not include either PCA or CNN, had the least number of parameters, which resulted in the best computational time and memory while also achieving the highest accuracy.
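As a rough illustration of the PCA-LSTM variant above, the sketch below reduces per-frame keypoint vectors with PCA and classifies a short window of frames with an LSTM; the keypoint dimension, window size, component count, and hyperparameters are assumptions, not values from the paper:

```python
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

# Assumed data: per-frame pose keypoints from a detector such as YOLOPose,
# flattened to 17 keypoints x (x, y) = 34 values per frame (synthetic here).
n_clips, frames, kp_dim, n_classes = 200, 30, 34, 7
clips = np.random.rand(n_clips, frames, kp_dim).astype("float32")
labels = np.random.randint(0, n_classes, size=n_clips)      # 7 activity classes

# Fit PCA on all frames, then reduce each frame's keypoint vector.
pca = PCA(n_components=10)
reduced = pca.fit_transform(clips.reshape(-1, kp_dim))
reduced = reduced.reshape(n_clips, frames, 10).astype("float32")

# LSTM over the reduced frame sequence, softmax over the activity classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(frames, 10)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(reduced, labels, epochs=3, batch_size=16, verbose=0)
```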

어텐션 메커니즘 기반 Long-Short Term Memory Network를 이용한 EEG 신호 기반의 감정 분류 기법 (Emotion Classification based on EEG signals with LSTM deep learning method)

  • 김유민;최아영
    • 한국산업정보학회논문지
    • /
    • Vol. 26, No. 1
    • /
    • pp.1-10
    • /
    • 2021
  • In this study, we propose a deep learning method useful for emotion recognition based on EEG signals. A Long Short-Term Memory network was used to reflect the fact that emotions change over time. In addition, based on the theory that the emotional state at a particular moment influences the overall emotional state, an attention mechanism was applied to weight the emotional state at specific moments. EEG signals from the DEAP database were used, and emotions were modeled with valence, which indicates the degree of positivity or negativity, and arousal, which indicates the intensity of emotion. When valence and arousal were each divided into two levels (low, high), the classification accuracy was 90.1% for valence and 88.1% for arousal. When emotions were divided into three levels (low, medium, high), the accuracy was 83.5% for valence and 82.5% for arousal.
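A minimal Keras sketch of the attention-over-LSTM idea described above; the EEG window length, channel count, and layer sizes are assumptions, and the DEAP preprocessing used in the paper is not shown:

```python
import tensorflow as tf
from tensorflow.keras import layers

timesteps, channels = 128, 32          # assumed EEG window length and channel count

inputs = layers.Input(shape=(timesteps, channels))
h = layers.LSTM(64, return_sequences=True)(inputs)     # hidden state at every step
scores = layers.Dense(1)(h)                            # one attention score per step
weights = layers.Softmax(axis=1)(scores)               # attention weights over time
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1)       # weighted sum of states
)([h, weights])
outputs = layers.Dense(2, activation="softmax")(context)  # low/high valence (or arousal)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```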

양방향 장단기 메모리 신경망을 이용한 욕설 검출 (Abusive Detection Using Bidirectional Long Short-Term Memory Networks)

  • 나인섭;이신우;이재학;고진광
    • 한국빅데이터학회지
    • /
    • Vol. 4, No. 2
    • /
    • pp.35-45
    • /
    • 2019
  • The harm caused by malicious comments containing profanity and slang is increasing in various forms throughout society, as seen in recent media reports of celebrity suicides. This paper presents a method for detecting abusive language using a bidirectional long short-term memory network model. Comments were collected from the web with a web crawler, and stop-word processing was applied to remove unused elements such as English text and special characters. A bidirectional LSTM model that considers the preceding and following context of a sentence was then applied to the preprocessed comments to determine and detect abusive language. To use the bidirectional LSTM, the collected comments were morphologically analyzed and vectorized, and each word was labeled as abusive or not. Experiments on a total of 9,288 cleaned and collected comments showed a performance of 88.79%.
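A compact sketch of a bidirectional LSTM comment classifier in the spirit of the abstract above; it substitutes simple whitespace tokenization for the paper's Korean morphological analysis, and the toy corpus, vocabulary size, and layer sizes are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Toy comment corpus with abusive/normal labels (stand-ins for the crawled data).
texts = ["you are great", "what a terrible idiot", "have a nice day", "shut up loser"]
labels = [0, 1, 0, 1]

vectorize = layers.TextVectorization(max_tokens=20000, output_sequence_length=30)
vectorize.adapt(texts)

model = tf.keras.Sequential([
    layers.Input(shape=(1,), dtype=tf.string),
    vectorize,                                         # token ids, padded to length 30
    layers.Embedding(input_dim=20000, output_dim=64),  # word vectors
    layers.Bidirectional(layers.LSTM(64)),             # forward + backward context
    layers.Dense(1, activation="sigmoid"),             # abusive vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(texts)[:, tf.newaxis], tf.constant(labels), epochs=5, verbose=0)
```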


An accident diagnosis algorithm using long short-term memory

  • Yang, Jaemin;Kim, Jonghyun
    • Nuclear Engineering and Technology
    • /
    • Vol. 50, No. 4
    • /
    • pp.582-588
    • /
    • 2018
  • Accident diagnosis is one of the complex tasks for nuclear power plant (NPP) operators. In abnormal or emergency situations, the diagnostic activity of the NPP states is burdensome though necessary. Numerous computer-based methods and operator support systems have been suggested to address this problem. Among them, the recurrent neural network (RNN) has performed well at analyzing time series data. This study proposes an algorithm for accident diagnosis using long short-term memory (LSTM), a kind of RNN that mitigates the limitations of standard RNNs in reflecting long time dependencies. The algorithm consists of preprocessing, the LSTM network, and postprocessing. In the LSTM-based algorithm, preprocessed input variables are fed to the network to output the accident diagnosis results. The outputs are then postprocessed using softmax to determine the ranking of accident diagnosis results with probabilities. The algorithm was trained using a compact nuclear simulator for several accidents: a loss of coolant accident, a steam generator tube rupture, and a main steam line break. The trained algorithm was also tested to demonstrate the feasibility of diagnosing NPP accidents.
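A hedged sketch of the pipeline described above (preprocessing, LSTM network, softmax postprocessing with ranked probabilities); the plant variable count, window length, and synthetic data are assumptions for illustration only:

```python
import numpy as np
import tensorflow as tf

# Assumed plant variables sampled over a time window (synthetic stand-ins).
classes = ["normal", "LOCA", "SGTR", "MSLB"]
n, timesteps, n_vars = 400, 60, 20
X = np.random.rand(n, timesteps, n_vars).astype("float32")
y = np.random.randint(0, len(classes), size=n)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_vars)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(len(classes), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, verbose=0)

# Postprocessing: rank candidate accidents by softmax probability.
probs = model.predict(X[:1], verbose=0)[0]
ranking = sorted(zip(classes, probs), key=lambda p: p[1], reverse=True)
print(ranking)
```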

Long Short-Term Memory를 활용한 건화물운임지수 예측 (Prediction of Baltic Dry Index by Applications of Long Short-Term Memory)

  • 한민수;유성진
    • 품질경영학회지
    • /
    • Vol. 47, No. 3
    • /
    • pp.497-508
    • /
    • 2019
  • Purpose: The purpose of this study is to overcome the limitations of conventional studies that predict the Baltic Dry Index (BDI). The study proposes the application of an artificial neural network (ANN), namely Long Short-Term Memory (LSTM), to predict the BDI. Methods: The BDI time-series prediction was carried out using eight variables related to the dry bulk market. The prediction was conducted in two steps: first, identifying the goodness of fit of specific ANN models for the BDI time series and determining the network structures to be used in the next step; then, exploiting the ANN's generalization capability, using the structures determined in the previous step in the empirical prediction step, where the sliding-window method was applied to make daily (one-day-ahead) predictions. Results: In the empirical prediction step, the BDI at time t could be predicted from the eight dry bulk market variables at time t-1. LSTM, known to be good at learning over long periods of time, showed the best performance, with higher predictive accuracy than the Multi-Layer Perceptron (MLP) and the Recurrent Neural Network (RNN). Conclusion: Applying this study to real business would require long-term predictions using more detailed forecasting techniques. We hope that the research can provide a point of reference in the dry bulk market, and furthermore in decision-making and investment in the shipping business as a whole.
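The sliding-window, one-day-ahead setup (eight market variables at time t-1 predicting the BDI at time t) might be sketched as follows; the synthetic data, window length, and hyperparameters are placeholders, not the paper's settings:

```python
import numpy as np
import tensorflow as tf

# Assumed daily data: 8 dry bulk market variables plus the BDI (synthetic here).
days, n_feats, window = 1500, 8, 20
features = np.random.rand(days, n_feats).astype("float32")
bdi = np.random.rand(days).astype("float32")

# Sliding windows: the 8 variables over days t-window..t-1 predict the BDI at day t.
X = np.stack([features[t - window:t] for t in range(window, days)])
y = bdi[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_feats)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),          # one-day-ahead BDI level
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, verbose=0)
```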

The roles of differencing and dimension reduction in machine learning forecasting of employment level using the FRED big data

  • Choi, Ji-Eun;Shin, Dong Wan
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 26, No. 5
    • /
    • pp.497-506
    • /
    • 2019
  • The U.S. employment level is forecast using machine learning methods based on artificial neural networks: a deep neural network, long short-term memory (LSTM), and a gated recurrent unit (GRU). We consider the Federal Reserve Economic Data (FRED) big data, from which 105 important macroeconomic variables chosen by McCracken and Ng (Journal of Business and Economic Statistics, 34, 574-589, 2016) are used as predictors. We investigate the influence of two statistical issues, dimension reduction and time-series differencing, on the machine learning forecasts. An out-of-sample forecast comparison shows that LSTM and GRU with differencing perform better than the autoregressive model, and that dimension reduction improves long-term forecasts and some short-term forecasts.
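To make the two statistical steps concrete, a small sketch: first-difference the predictors and the target, reduce the predictors with PCA, and feed short sequences to an LSTM (a GRU could be swapped in); the factor count, window, and data are assumptions:

```python
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

# Assumed monthly panel: 105 macro predictors and the employment level (synthetic).
months, n_vars = 600, 105
panel = np.cumsum(np.random.randn(months, n_vars), axis=0).astype("float32")
employment = np.cumsum(np.random.randn(months)).astype("float32")

# Differencing removes the stochastic trend; PCA reduces the 105 predictors.
d_panel = np.diff(panel, axis=0)
d_emp = np.diff(employment)
factors = PCA(n_components=8).fit_transform(d_panel).astype("float32")

# Short sequences of factors predict the next month's differenced employment.
window = 12
X = np.stack([factors[t - window:t] for t in range(window, len(factors))])
y = d_emp[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 8)),
    tf.keras.layers.LSTM(32),          # a GRU layer could be used here instead
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, verbose=0)
```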

Long Short Term Memory based Political Polarity Analysis in Cyber Public Sphere

  • Kang, Hyeon;Kang, Dae-Ki
    • International Journal of Advanced Culture Technology
    • /
    • Vol. 5, No. 4
    • /
    • pp.57-62
    • /
    • 2017
  • In this paper, we applied long short-term memory (LSTM) to classify political polarity in the cyber public sphere. The data collected from the cyber public sphere are transformed into word corpus data through word embedding. Based on this word corpus data, we train a recurrent neural network (RNN) composed of LSTM units. A softmax function is applied at the output of the RNN. We evaluated the proposed system to obtain experimental results, and we plan to enhance it by refining its LSTM components.
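A minimal sketch of the pipeline described above, embedding a corpus with Word2Vec (via gensim) and classifying polarity with an LSTM and softmax output; the toy corpus, vector size, and two-class label set are assumptions:

```python
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec

# Toy corpus of tokenized posts with polarity labels (stand-ins for the real data).
posts = [["tax", "cuts", "help", "growth"], ["expand", "public", "welfare", "now"],
         ["lower", "taxes", "create", "jobs"], ["fund", "universal", "healthcare"]]
labels = np.array([0, 1, 0, 1])

# Learn word vectors from the corpus, then build an embedding matrix.
w2v = Word2Vec(posts, vector_size=50, window=3, min_count=1, epochs=50)
vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}   # index 0 = padding
emb = np.zeros((len(vocab) + 1, 50), dtype="float32")
for w, i in vocab.items():
    emb[i] = w2v.wv[w]

# Encode posts as padded id sequences.
maxlen = 8
ids = [[vocab[w] for w in p] for p in posts]
X = tf.keras.utils.pad_sequences(ids, maxlen=maxlen)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(maxlen,)),
    tf.keras.layers.Embedding(len(vocab) + 1, 50,
        embeddings_initializer=tf.keras.initializers.Constant(emb)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(2, activation="softmax"),   # two polarity classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=5, verbose=0)
```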

Electroencephalography-based imagined speech recognition using deep long short-term memory network

  • Agarwal, Prabhakar;Kumar, Sandeep
    • ETRI Journal
    • /
    • Vol. 44, No. 4
    • /
    • pp.672-685
    • /
    • 2022
  • This article proposes a subject-independent application of brain-computer interfacing (BCI). A 32-channel Electroencephalography (EEG) device is used to measure imagined speech (SI) of four words (sos, stop, medicine, washroom) and one phrase (come-here) across 13 subjects. A deep long short-term memory (LSTM) network has been adopted to recognize the above signals in seven EEG frequency bands individually in nine major regions of the brain. The results show a maximum accuracy of 73.56% and a network prediction time (NPT) of 0.14 s which are superior to other state-of-the-art techniques in the literature. Our analysis reveals that the alpha band can recognize SI better than other EEG frequencies. To reinforce our findings, the above work has been compared by models based on the gated recurrent unit (GRU), convolutional neural network (CNN), and six conventional classifiers. The results show that the LSTM model has 46.86% more average accuracy in the alpha band and 74.54% less average NPT than CNN. The maximum accuracy of GRU was 8.34% less than the LSTM network. Deep networks performed better than traditional classifiers.