• Title/Summary/Keyword: LSTM encoder


Financial Market Prediction and Improving the Performance Based on Large-scale Exogenous Variables and Deep Neural Networks (대규모 외생 변수 및 Deep Neural Network 기반 금융 시장 예측 및 성능 향상)

  • Cheon, Sung Gil; Lee, Ju Hong; Choi, Bum Ghi; Song, Jae Won
    • Smart Media Journal / v.9 no.4 / pp.26-35 / 2020
  • Attempts to predict future stock prices have been studied steadily for a long time. Unlike general time-series data, however, financial time-series data presents obstacles to prediction such as non-stationarity, long-term dependence, and non-linearity. In addition, manual selection from a wide range of candidate variables is limited, so the model should be able to extract relevant variables automatically. In this paper, we propose 'sliding time step normalization', which normalizes non-stationary data; an LSTM autoencoder, which compresses the full set of variables; and 'moving transfer learning', which divides the data into periods and performs transfer learning step by step. Experiments show that feeding as many variables as possible through the neural network outperforms using only 100 major financial variables, and that applying 'sliding time step normalization' to remove non-stationarity in every section is effective in improving performance. 'Moving transfer learning' is shown to improve performance over long test intervals by evaluating the model and performing transfer learning on the test interval at each step.
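The entry above names three components: 'sliding time step normalization', an LSTM autoencoder for variable compression, and 'moving transfer learning'. No code is published here, so the following is only a minimal PyTorch sketch of the first two ideas under assumed shapes (window-wise z-scoring and a single-layer LSTM autoencoder with a hypothetical latent size); the transfer-learning schedule is omitted.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Compresses a window of many exogenous variables into a small latent vector."""
    def __init__(self, n_features: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, n_features, batch_first=True)

    def forward(self, x):
        # x: (batch, time, n_features)
        _, (h, _) = self.encoder(x)                     # h: (1, batch, latent_dim)
        latent = h[-1]                                  # (batch, latent_dim)
        # Repeat the latent code across time and decode back to the inputs.
        repeated = latent.unsqueeze(1).repeat(1, x.size(1), 1)
        recon, _ = self.decoder(repeated)
        return recon, latent

def sliding_normalize(window: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Window-wise z-scoring: each window is normalized by its own statistics."""
    mean = window.mean(dim=1, keepdim=True)
    std = window.std(dim=1, keepdim=True)
    return (window - mean) / (std + eps)
```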

Multimodal Sentiment Analysis Using Review Data and Product Information (리뷰 데이터와 제품 정보를 이용한 멀티모달 감성분석)

  • Hwang, Hohyun; Lee, Kyeongchan; Yu, Jinyi; Lee, Younghoon
    • The Journal of Society for e-Business Studies / v.27 no.1 / pp.15-28 / 2022
  • With the recent expansion of online markets such as clothing, utilizing customer reviews has become a major marketing measure, and user reviews are widely used to analyze customer sentiment. Sentiment analysis methods can be broadly classified into machine learning-based and lexicon-based approaches; machine learning-based methods train a classification model on reviews and their labels. As sentiment analysis research has developed, multi-modal models trained on the image and video data contained in reviews have also been studied. The characteristics of the words used in reviews differ depending on the product and customer categories. In this paper, sentiment is analyzed by considering both review text and the metadata of products and users. Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), self-attention-based multi-head attention models, and Bidirectional Encoder Representations from Transformers (BERT) are used for the review text, and the same Multi-Layer Perceptron (MLP) model is applied to the product information. This paper proposes a multi-modal sentiment analysis model that simultaneously considers user reviews and product meta-information.
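As a rough illustration of the fusion described above (a review-text encoder plus an MLP over product/user metadata, combined by concatenation), here is a minimal PyTorch sketch; the layer sizes, the choice of an LSTM text encoder, and the two-class output are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class MultimodalSentiment(nn.Module):
    """Text encoder + metadata MLP, fused by concatenation (hypothetical sizes)."""
    def __init__(self, vocab_size: int, meta_dim: int, n_classes: int = 2,
                 embed_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.text_enc = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.meta_mlp = nn.Sequential(nn.Linear(meta_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden * 2, n_classes)

    def forward(self, tokens, meta):
        # tokens: (batch, seq_len) token ids; meta: (batch, meta_dim) product/user features
        _, (h, _) = self.text_enc(self.embed(tokens))   # last hidden state of the review
        fused = torch.cat([h[-1], self.meta_mlp(meta)], dim=-1)
        return self.classifier(fused)
```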

ViStoryNet: Neural Networks with Successive Event Order Embedding and BiLSTMs for Video Story Regeneration (ViStoryNet: 비디오 스토리 재현을 위한 연속 이벤트 임베딩 및 BiLSTM 기반 신경망)

  • Heo, Min-Oh; Kim, Kyung-Min; Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.24 no.3 / pp.138-144 / 2018
  • A video is a vivid medium similar to human visual-linguistic experience, since it conveys a sequence of situations, actions, or dialogues that can be told as a story. In this study, we propose story learning and regeneration frameworks for videos with successive event order supervision for contextual coherence. The supervision induces each episode to form a trajectory in the latent space, producing a composite representation of ordering and semantics. We use kids' videos as training data; their advantages include an omnibus style, short and explicit storylines, chronological narrative order, and a relatively limited number of characters and spatial environments. We build an encoder-decoder structure with successive event order embedding (SEOE) and train bidirectional LSTMs as sequence models for multi-step sequence prediction. Using approximately 200 episodes of the kids' video series 'Pororo the Little Penguin', we report empirical results for story regeneration and SEOE. In addition, each episode forms a trajectory-like shape in the model's latent space, providing geometric information for the sequence models.
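The BiLSTM-based multi-step sequence prediction described above can be sketched as follows; this is an illustrative PyTorch stand-in with hypothetical dimensions, not the ViStoryNet implementation, and the successive event order embedding itself is not reproduced here.

```python
import torch
import torch.nn as nn

class BiLSTMStoryModel(nn.Module):
    """BiLSTM over ordered per-event embeddings that predicts the embeddings of the
    next few events (multi-step sequence prediction with hypothetical dimensions)."""
    def __init__(self, event_dim: int = 256, hidden: int = 128, steps_ahead: int = 3):
        super().__init__()
        self.bilstm = nn.LSTM(event_dim, hidden, batch_first=True, bidirectional=True)
        # one prediction head per future step
        self.heads = nn.ModuleList(
            [nn.Linear(hidden * 2, event_dim) for _ in range(steps_ahead)]
        )

    def forward(self, events):
        # events: (batch, seq_len, event_dim) ordered event embeddings of one episode
        out, _ = self.bilstm(events)                 # (batch, seq_len, hidden*2)
        last = out[:, -1]                            # summary of the observed prefix
        return torch.stack([head(last) for head in self.heads], dim=1)
```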

Deep Learning-Based Stock Fluctuation Prediction According to Overseas Indices and Trading Trend by Investors (해외지수와 투자자별 매매 동향에 따른 딥러닝 기반 주가 등락 예측)

  • Kim, Tae Seung; Lee, Soowon
    • KIPS Transactions on Software and Data Engineering / v.10 no.9 / pp.367-374 / 2021
  • Stock price prediction is a research subject in various fields such as economics, statistics, and computer engineering. In recent years, research on predicting stock price movements by training artificial intelligence models on various indicators, including fundamental and technical indicators, has become active. This study proposes a deep learning model that predicts the ups and downs of KOSPI from overseas indices such as the S&P 500, past KOSPI values, and the trading trends of KOSPI investors. The proposed model extracts latent variables with a stacked autoencoder and trains an LSTM, which is well suited to time-series data, on the extracted latent variables to predict the fluctuation of the closing price relative to the day's opening price and to decide whether to buy or sell based on that value. A comparison of returns and prediction accuracy shows that the proposed model outperforms the comparative models.
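A minimal sketch of the two-stage pipeline described above (a stacked autoencoder for latent extraction, then an LSTM over the latent sequence for an up/down decision); layer sizes and the two-class head are assumptions, and the autoencoder reconstruction pre-training loop is omitted.

```python
import torch
import torch.nn as nn

class StackedAEtoLSTM(nn.Module):
    """Stacked autoencoder compresses daily indicator vectors; an LSTM then classifies
    up/down from the sequence of latent codes (layer sizes are hypothetical)."""
    def __init__(self, n_indicators: int, latent: int = 16, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_indicators, 64), nn.ReLU(),
            nn.Linear(64, latent), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # used only for reconstruction pre-training
            nn.Linear(latent, 64), nn.ReLU(),
            nn.Linear(64, n_indicators),
        )
        self.lstm = nn.LSTM(latent, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)         # logits for down / up

    def forward(self, x):
        # x: (batch, days, n_indicators) - overseas indices, KOSPI, investor trading trends
        z = self.encoder(x)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])
```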

Development of a Hybrid Deep-Learning Model for the Human Activity Recognition based on the Wristband Accelerometer Signals

  • Jeong, Seungmin; Oh, Dongik
    • Journal of Internet Computing and Services / v.22 no.3 / pp.9-16 / 2021
  • This study aims to develop a human activity recognition (HAR) system as a deep-learning (DL) classification model that distinguishes various human activities. For the user's convenience, we rely solely on the signals from a wristband accelerometer worn by the user. Sequential 3-axis acceleration signals are gathered within a predefined time-window slice and used as input to the classification system. We are particularly interested in developing a deep-learning model that can outperform conventional machine learning classifiers. A total of 13 activities from laboratory experiments are used for the initial performance comparison. We improve classification performance using a Convolutional Neural Network (CNN) combined with autoencoder-based feature reduction and parameter tuning. On various publicly available HAR datasets we also achieve significant improvements in HAR classification. Our CNN model is further compared against a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) to demonstrate its superiority. Notably, our model distinguishes both general activities and near-identical activities, such as sitting down on a chair versus on the floor, with almost perfect classification accuracy.
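As an illustration of a CNN classifier over windowed 3-axis accelerometer data, here is a minimal PyTorch sketch; the window length, channel counts, and 13-class head are assumptions, and the autoencoder feature-reduction stage is not shown.

```python
import torch
import torch.nn as nn

class HARConvNet(nn.Module):
    """1D CNN over a fixed window of 3-axis accelerometer samples
    (window length and channel counts are hypothetical)."""
    def __init__(self, window: int = 128, n_classes: int = 13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (window // 4), n_classes)

    def forward(self, x):
        # x: (batch, 3, window) - x/y/z acceleration within one time-window slice
        f = self.features(x)
        return self.classifier(f.flatten(1))
```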

Context-Awareness Cat Behavior Captioning System (반려묘의 상황인지형 행동 캡셔닝 시스템)

  • Chae, Heechan; Choi, Yoona; Lee, Jonguk; Park, Daihee; Chung, Yongwha
    • Journal of Korea Multimedia Society / v.24 no.1 / pp.21-29 / 2021
  • With the recent increase in the number of households raising pets, various engineering studies on pets have been underway. The ultimate goal of this study is to automatically generate context-aware captions that express the implicit intentions behind a cat's behavior and sounds, by embedding already mature pet behavior detection technology as a basic component of video captioning research. As a pilot project toward this goal, this paper proposes a captioning system that uses the optical-flow, RGB, and sound information of cat videos. The proposed system uses video datasets collected in an actual breeding environment to extract feature vectors from the video and sound, and then, through a hierarchical LSTM encoder and decoder, learns to identify the cat's behavior and its implicit intentions and to generate context-aware captions. The performance of the proposed system was verified experimentally on video data collected in an environment where cats are actually raised.
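A hierarchical LSTM encoder feeding an LSTM caption decoder, as described above, might look roughly like the following PyTorch sketch; the two-level (frame/segment) hierarchy, feature dimensions, and vocabulary size are assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class HierarchicalCaptioner(nn.Module):
    """Two-level LSTM encoder (frame level, then segment level) feeding an LSTM
    caption decoder; feature and vocabulary sizes are hypothetical."""
    def __init__(self, feat_dim: int = 512, hidden: int = 256, vocab: int = 5000):
        super().__init__()
        self.frame_enc = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.segment_enc = nn.LSTM(hidden, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, segments, captions):
        # segments: (batch, n_segments, n_frames, feat_dim) fused video/sound features
        # captions: (batch, caption_len) token ids (teacher forcing)
        b, s, f, d = segments.shape
        _, (h, _) = self.frame_enc(segments.reshape(b * s, f, d))
        seg_feats = h[-1].reshape(b, s, -1)            # one vector per segment
        _, (ctx, c) = self.segment_enc(seg_feats)      # video-level context state
        dec_out, _ = self.decoder(self.embed(captions), (ctx, c))
        return self.out(dec_out)                       # per-token vocabulary logits
```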

Towards Improving Causality Mining using BERT with Multi-level Feature Networks

  • Ali, Wajid; Zuo, Wanli; Ali, Rahman; Rahman, Gohar; Zuo, Xianglin; Ullah, Inam
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.10 / pp.3230-3255 / 2022
  • Causality mining is a significant area of interest in NLP and benefits many everyday applications, including decision making, business risk management, question answering, future event prediction, scenario generation, and information retrieval. Mining causality was a challenging and open problem for prior non-statistical and statistical techniques using web sources, which required hand-crafted linguistic patterns for feature engineering, depended on domain knowledge, and demanded much human effort. Those studies also overlooked implicit, ambiguous, and heterogeneous causality and focused on explicit causality mining. In contrast, we present Bidirectional Encoder Representations from Transformers (BERT) integrated with Multi-level Feature Networks (MFN), called BERT+MFN, for causality recognition in noisy and informal web datasets without human-designed features. In our model, MFN consists of a three-column knowledge-oriented network (TC-KN), a bi-LSTM, and a Relation Network (RN) that mine causality information at the segment level, while BERT captures semantic features at the word level. We perform experiments on Alternative Lexicalization (AltLexes) datasets, and the experimental outcomes show that our model outperforms baseline causality and text mining techniques.
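As a simplified stand-in for the BERT+MFN design described above, the sketch below combines word-level BERT features with a bi-LSTM branch for a binary causal/non-causal decision; the TC-KN and Relation Network components are not reproduced, and the checkpoint name and pooling choices are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BertWithBiLSTM(nn.Module):
    """Word-level BERT features combined with a bi-LSTM branch for sentence-level
    causal/non-causal classification (a simplified stand-in for the full MFN)."""
    def __init__(self, model_name: str = "bert-base-uncased", hidden: int = 128):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        dim = self.bert.config.hidden_size
        self.bilstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(dim + hidden * 2, 2)

    def forward(self, input_ids, attention_mask):
        tokens = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        pooled_bert = tokens[:, 0]                    # [CLS] word-level summary
        lstm_out, _ = self.bilstm(tokens)
        pooled_lstm = lstm_out.mean(dim=1)            # segment-level summary
        return self.classifier(torch.cat([pooled_bert, pooled_lstm], dim=-1))
```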

Developing radar-based rainfall prediction model with GAN(Generative Adversarial Network) (생성적 적대 신경망(GAN)을 활용한 강우예측모델 개발)

  • Choi, Suyeon; Sohn, Soyoung; Kim, Yeonjoo
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.185-185 / 2021
  • As abnormal weather events such as sudden heavy rainfall caused by climate change increase, accurate rainfall prediction is becoming ever more important. Traditional rainfall prediction relies on numerical weather models or radar-based extrapolation techniques, and with recent advances in machine learning, radar-data-based rainfall prediction methods using these technologies are being developed. Existing machine learning rainfall prediction models mainly use two-dimensional recurrent neural network methods suited to time-series image prediction (Convolutional Long Short-Term Memory, ConvLSTM) or convolutional methods (Convolutional Neural Network (CNN) encoder-decoder). In this study, a Generative Adversarial Network (GAN) approach was used to perform future rainfall prediction. In the GAN methodology, a generator that produces images and a discriminator that distinguishes them from real images are trained competitively, and GANs currently show high performance in image generation. The GAN-based model developed in this study was trained to perform very-short-term and short-term rainfall prediction using radar image data from 2016 to 2019 provided by the Korea Meteorological Administration, and short-term rainfall prediction was simulated using 2020 radar image data. In addition, comparing the rainfall prediction results of the GAN-based model with those of models based on existing machine learning techniques confirmed that the prediction model developed in this study shows excellent performance for short-term rainfall prediction.
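A GAN for next-frame radar prediction, as described above, pairs a generator that maps past frames to a future frame with a discriminator that judges (past, next) pairs; the following PyTorch sketch is only illustrative, with hypothetical frame counts and channel sizes, and omits the adversarial training loop.

```python
import torch
import torch.nn as nn

class RadarGenerator(nn.Module):
    """Maps a stack of past radar frames to the next frame (hypothetical shapes)."""
    def __init__(self, n_past: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_past, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),   # predicted rain-rate map
        )

    def forward(self, past_frames):            # (batch, n_past, H, W)
        return self.net(past_frames)

class RadarDiscriminator(nn.Module):
    """Judges whether a (past frames, next frame) pair is real or generated."""
    def __init__(self, n_past: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_past + 1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, past_frames, next_frame):
        return self.net(torch.cat([past_frames, next_frame], dim=1))
```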


Layerwise Semantic Role Labeling in KRBERT (KRBERT 임베딩 층에 따른 의미역 결정)

  • Seo, Hye-Jin; Park, Myung-Kwan; Kim, Euhee
    • Annual Conference on Human and Language Technology / 2021.10a / pp.617-621 / 2021
  • Semantic role labeling is a natural language processing technique that identifies the relationship between a predicate and its arguments in a sentence, finding semantic role relations such as 'who, what, how, and why'. Recent semantic role labeling research is mainly conducted by training deep learning models on corpora. The pre-trained Bidirectional Encoder Representations from Transformers (BERT) model recently developed by Google shows considerably high performance across various natural language processing tasks. In this paper, to improve Korean semantic role labeling, we examine the performance of a Korean semantic role labeling model that uses SNU KR-BERT, which was pre-trained with the linguistic characteristics of Korean in mind. We also investigate which hidden layer of the BERT model performs Korean semantic role labeling best. In the experiments, the model achieved 66.4% when using the last hidden layer embedding. Comparing performance across hidden layers, the model reached 67.9% when the last four hidden layers were concatenated and 68.1% when the 11th hidden layer was used, i.e., better than when selecting only the last hidden layer. However, when heatmaps were drawn for each model, the model using the last hidden layer embedding was found to make more accurate semantic role judgments.
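The layerwise comparison described above can be reproduced in outline with Hugging Face Transformers by requesting all hidden states and slicing them per layer; the checkpoint below is a placeholder (substitute the actual KR-BERT checkpoint), and the example sentence, layer indexing convention, and downstream classifier are assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; substitute the KR-BERT checkpoint used in the paper.
MODEL_NAME = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)

sentence = "철수가 밥을 먹었다."          # example Korean sentence (hypothetical)
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: tuple of (embedding output + one tensor per transformer layer)
hidden_states = outputs.hidden_states

last_layer = hidden_states[-1]                          # per-token features, last layer
last_four = torch.cat(hidden_states[-4:], dim=-1)       # concatenate the last four layers
eleventh = hidden_states[11]                            # output of the 11th transformer layer

# Each of these token-level feature matrices can feed a role-labeling classifier head.
print(last_layer.shape, last_four.shape, eleventh.shape)
```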


Chart-based Stock Price Prediction by Combing Variation Autoencoder and Attention Mechanisms (변이형 오토인코더와 어텐션 메커니즘을 결합한 차트기반 주가 예측)

  • Sanghyun Bae; Byounggu Choi
    • Information Systems Review / v.23 no.1 / pp.23-43 / 2021
  • Recently, many studies have attempted to increase the accuracy of stock price prediction by analyzing candlestick charts with artificial intelligence techniques. However, these studies failed to consider the time-series characteristics of candlestick charts and the emotional state of market participants when learning from data for stock price prediction. To overcome these limitations, this study produced input data by combining a volatility index with candlestick charts to account for the emotional state of market participants, and fed the data into a new method that combines a variational autoencoder (VAE) with attention mechanisms to account for the time-series characteristics of candlestick charts. Fifty firms were randomly selected from the S&P 500 index and their stock prices were predicted to evaluate the method's performance against existing ones such as a convolutional neural network (CNN) and long short-term memory (LSTM). The results indicate that the proposed method outperforms the existing ones, implying that the accuracy of stock price prediction can be improved by considering the emotional state of market participants and the time-series characteristics of candlestick charts.
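As a rough illustration of combining a variational autoencoder with attention, the sketch below encodes each daily chart image into a latent code via the reparameterization trick and applies self-attention across a window of daily latents before an up/down head; the image size, latent dimension, and classification head are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ChartVAEEncoder(nn.Module):
    """Convolutional VAE encoder: maps a 64x64 candlestick-chart image to a latent
    code via the reparameterization trick (image size and dims are hypothetical)."""
    def __init__(self, latent: int = 32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.mu = nn.Linear(32 * 16 * 16, latent)
        self.logvar = nn.Linear(32 * 16 * 16, latent)

    def forward(self, chart):                       # (batch, 1, 64, 64)
        h = self.conv(chart)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

class LatentAttentionPredictor(nn.Module):
    """Self-attention over a window of daily chart latents, then an up/down head."""
    def __init__(self, latent: int = 32, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(latent, heads, batch_first=True)
        self.head = nn.Linear(latent, 2)

    def forward(self, latents):                     # (batch, days, latent)
        attended, _ = self.attn(latents, latents, latents)
        return self.head(attended[:, -1])
```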