• Title/Summary/Keyword: Convolutional recurrent neural network


Object Tracking Method using Deep Learning and Kalman Filter (딥 러닝 및 칼만 필터를 이용한 객체 추적 방법)

  • Kim, Gicheol;Son, Sohee;Kim, Minseop;Jeon, Jinwoo;Lee, Injae;Cha, Jihun;Choi, Haechul
    • Journal of Broadcast Engineering
    • /
    • v.24 no.3
    • /
    • pp.495-505
    • /
    • 2019
  • Typical deep learning algorithms include CNNs (Convolutional Neural Networks), which are mainly used for image recognition, and RNNs (Recurrent Neural Networks), which are mainly used for speech recognition and natural language processing. Among them, the CNN learns the filters that generate feature maps, automatically extracting features from data, and has become mainstream thanks to its excellent performance in image recognition. Since then, various algorithms such as R-CNN have appeared in object detection to improve on CNN performance, and algorithms such as YOLO (You Only Look Once) and SSD (Single Shot Multi-box Detector) have been proposed more recently. However, because these deep learning-based algorithms perform detection on still images, stable object tracking and detection in video requires a separate tracking capability. Therefore, this paper proposes a method of combining a Kalman filter with a deep learning-based detection network to improve object tracking and detection performance in video. YOLO v2, which is capable of real-time processing, was used as the detection network, and the proposed method achieved a 7.7% IoU improvement over the baseline YOLO v2 network and a processing speed of 20 fps on FHD images.
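
A minimal sketch of the general idea described in this abstract: smoothing per-frame detector outputs with a constant-velocity Kalman filter so the track survives missed detections. The state layout, noise values, and class name are illustrative assumptions, not the paper's exact formulation or its YOLO v2 integration.

```python
import numpy as np

class BoxKalman:
    """Tracks a bounding-box center (cx, cy) with constant-velocity dynamics."""
    def __init__(self, cx, cy, q=1e-2, r=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])                  # [cx, cy, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0  # dt = 1 frame
        self.H = np.eye(2, 4)                                   # observe (cx, cy) only
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # z: detected (cx, cy) from the detection network for this frame
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Usage: predict every frame; update only when the detector fires,
# so short detection dropouts do not break the track.
kf = BoxKalman(cx=100.0, cy=50.0)
for det in [(102, 51), None, (106, 53)]:   # None = missed detection
    pred = kf.predict()
    smoothed = kf.update(det) if det is not None else pred
```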

Research for Radar Signal Classification Model Using Deep Learning Technique (딥 러닝 기법을 이용한 레이더 신호 분류 모델 연구)

  • Kim, Yongjun;Yu, Kihun;Han, Jinwoo
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.22 no.2
    • /
    • pp.170-178
    • /
    • 2019
  • Classification of radar signals in the field of electronic warfare is the problem of discriminating threat types by analyzing enemy threat radar signals, such as those from aircraft, radars, and missiles, received through electronic warfare equipment. Recent radar systems have adopted a variety of modulation schemes that differ from those used in conventional systems and are often difficult to analyze with existing algorithms. In addition, a robust algorithm is needed for signals received in real environments, which are subject to environmental influences and hardware-induced measurement errors. In this paper, we propose a radar signal classification method based on deep learning techniques that is not affected by radar signal modulation schemes or noise.

Development of a Hybrid Deep-Learning Model for the Human Activity Recognition based on the Wristband Accelerometer Signals

  • Jeong, Seungmin;Oh, Dongik
    • Journal of Internet Computing and Services
    • /
    • v.22 no.3
    • /
    • pp.9-16
    • /
    • 2021
  • This study aims to develop a human activity recognition (HAR) system as a Deep-Learning (DL) classification model that distinguishes various human activities. For the user's convenience, we rely solely on the signals from a wristband accelerometer worn by the person. 3-axis sequential acceleration signal data are gathered within a predefined time-window slice and used as input to the classification system. We are particularly interested in developing a Deep-Learning model that can outperform conventional machine learning classifiers. A total of 13 activities based on laboratory experiment data are used for the initial performance comparison. We improved classification performance using a Convolutional Neural Network (CNN) combined with auto-encoder feature reduction and parameter tuning. With various publicly available HAR datasets, we could also achieve significant improvement in HAR classification. Our CNN model is also compared against a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) to demonstrate its superiority. Notably, our model could distinguish both general activities and near-identical activities, such as sitting down on a chair versus the floor, with almost perfect classification accuracy.
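
A hedged sketch of a 1-D CNN over window-sliced 3-axis accelerometer data, matching the 13-class setup mentioned in the abstract; the window length, layer sizes, and training settings are assumptions for illustration, not the authors' tuned architecture or their auto-encoder feature reduction.

```python
from tensorflow.keras import layers, models

def build_har_cnn(window_len=128, n_channels=3, n_classes=13):
    # Each sample: one time-window slice of 3-axis acceleration values.
    model = models.Sequential([
        layers.Input(shape=(window_len, n_channels)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_har_cnn()
model.summary()
```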

Electroencephalography-based imagined speech recognition using deep long short-term memory network

  • Agarwal, Prabhakar;Kumar, Sandeep
    • ETRI Journal
    • /
    • v.44 no.4
    • /
    • pp.672-685
    • /
    • 2022
  • This article proposes a subject-independent application of brain-computer interfacing (BCI). A 32-channel electroencephalography (EEG) device is used to measure imagined speech (SI) of four words (sos, stop, medicine, washroom) and one phrase (come-here) across 13 subjects. A deep long short-term memory (LSTM) network is adopted to recognize these signals in seven EEG frequency bands individually, across nine major regions of the brain. The results show a maximum accuracy of 73.56% and a network prediction time (NPT) of 0.14 s, which are superior to other state-of-the-art techniques in the literature. Our analysis reveals that the alpha band can recognize SI better than other EEG frequency bands. To reinforce our findings, the above work is compared against models based on the gated recurrent unit (GRU), convolutional neural network (CNN), and six conventional classifiers. The results show that the LSTM model has 46.86% higher average accuracy in the alpha band and 74.54% lower average NPT than the CNN. The maximum accuracy of the GRU was 8.34% less than that of the LSTM network. Deep networks performed better than traditional classifiers.
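
A minimal sketch of a deep LSTM classifier over band-filtered EEG windows in the spirit of the abstract; the window length, channel count, layer widths, and class count (4 words + 1 phrase) are illustrative assumptions, not the exact network reported in the article.

```python
from tensorflow.keras import layers, models

n_timesteps, n_channels, n_classes = 256, 32, 5   # 32-ch EEG, 5 imagined words/phrases

model = models.Sequential([
    layers.Input(shape=(n_timesteps, n_channels)),  # one band-filtered EEG window
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(64),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```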

Analysis of streamflow prediction performance by various deep learning schemes

  • Le, Xuan-Hien;Lee, Giha
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.131-131
    • /
    • 2021
  • Deep learning models, especially those based on long short-term memory (LSTM), have recently demonstrated their superiority in addressing time series problems. This study comprehensively evaluates the performance of supervised deep learning models in streamflow prediction. Six deep learning models were considered: standard LSTM, standard gated recurrent unit (GRU), stacked LSTM, bidirectional LSTM (BiLSTM), feed-forward neural network (FFNN), and convolutional neural network (CNN). The Red River system, one of the largest river basins in Vietnam, was adopted as a case study. The models were designed to forecast flowrate one and two days ahead at the Son Tay hydrological station on the Red River, using observed flowrate series from seven hydrological stations on three major branches of the Red River system (the Thao, Da, and Lo rivers) as input data for training, validation, and testing. The comparison indicates that the four LSTM-based models exhibit significantly better performance and stability than the FFNN and CNN models. Moreover, LSTM-based models can produce impressive predictions even in the presence of upstream reservoirs and dams. For the stacked LSTM and BiLSTM models, the added complexity is not accompanied by a performance improvement, since their performance is not higher than that of the two standard models (LSTM and GRU). We therefore conclude that, for hydrological forecasting problems, simple architectures such as LSTM and GRU with one hidden layer are sufficient to produce highly reliable forecasts while minimizing computation time, given the sequential nature of the data.
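
A sketch of the single-hidden-layer recurrent forecasters the abstract favors (standard LSTM and GRU), mapping a window of flowrates at seven upstream stations to one- and two-day-ahead forecasts; the lookback length, unit count, and loss are illustrative assumptions, not the study's exact configuration.

```python
from tensorflow.keras import layers, models

def build_forecaster(cell="lstm", lookback=7, n_stations=7, horizon=2, units=64):
    rnn = layers.LSTM(units) if cell == "lstm" else layers.GRU(units)
    model = models.Sequential([
        layers.Input(shape=(lookback, n_stations)),  # past flowrate window, 7 stations
        rnn,                                         # single recurrent hidden layer
        layers.Dense(horizon),                       # forecast [t+1, t+2]
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

lstm_model = build_forecaster("lstm")
gru_model = build_forecaster("gru")
```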


Deep Learning in Thyroid Ultrasonography to Predict Tumor Recurrence in Thyroid Cancers (인공지능 딥러닝을 이용한 갑상선 초음파에서의 갑상선암의 재발 예측)

  • Jieun Kil;Kwang Gi Kim;Young Jae Kim;Hye Ryoung Koo;Jeong Seon Park
    • Journal of the Korean Society of Radiology
    • /
    • v.81 no.5
    • /
    • pp.1164-1174
    • /
    • 2020
  • Purpose: To evaluate a deep learning model for predicting recurrence of thyroid tumor using preoperative ultrasonography (US). Materials and Methods: We included representative images from 229 patients (male:female = 42:187; mean age, 49.6 years) who had been diagnosed with thyroid cancer on preoperative US and subsequently underwent thyroid surgery. After selecting each representative transverse or longitudinal US image, we created a dataset of 898 images after augmentation. Python 2.7.6 and the Keras 2.1.5 neural network framework were used for deep learning with a convolutional neural network. We compared the clinical and histological features between patients with and without recurrence. The predictive performance of the deep learning model was evaluated using receiver operating characteristic (ROC) analysis, and the area under the ROC curve served as a summary of its prognostic performance in predicting recurrent thyroid cancer. Results: Tumor recurrence was noted in 49 (21.4%) of the 229 patients. Tumor size and multifocality differed significantly between the groups with and without recurrence (p < 0.05). The overall mean area under the curve (AUC) of the deep learning model for prediction of recurrent thyroid cancer was 0.9 ± 0.06. The mean AUC was 0.87 ± 0.03 for macrocarcinoma and 0.79 ± 0.16 for microcarcinoma. Conclusion: A deep learning model for analysis of US images of thyroid cancer showed the potential to predict recurrence of thyroid cancer.
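
A minimal sketch of a binary CNN classifier for US images evaluated by ROC AUC, in the spirit of the study's Keras pipeline; the image size, layer stack, and training settings are illustrative assumptions rather than the authors' actual network.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),          # grayscale ultrasound image
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # probability of tumor recurrence
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])  # ROC AUC, as in the paper
```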

Feature Extraction of CNN-GRU based Multivariate Time Series Data for Regional Clustering (지역 군집화를 위한 CNN-GRU 기반 다변량 시계열 데이터의 특성 추출)

  • Kim, Jinah;Lee, Ji-Hoon;Choi, Dong-Wook;Moon, Nammee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.950-951
    • /
    • 2019
  • Research on clustering time series data has mostly been carried out through statistical analysis, which limits how fully the characteristics of the data are reflected. In this paper, we propose a CNN-GRU (Convolutional Neural Network - Gated Recurrent Unit) based neural network model that extracts the changes and features of each variable over time for clustering multivariate data. The CNN is used to capture the characteristics of each variable, and the GRU is used to derive the overall consumption trend over time. Two years of real card transaction data, broken down by region and business type, were used, and the model was applied to cluster regions showing similar consumption trends. The significance of this work is that it captures patterns in multivariate time series data while reflecting the overall flow.
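
A hedged sketch of the CNN-GRU idea described above: per-variable temporal features via a Conv1D layer, an overall trend summary via a GRU, and the resulting embedding passed to a clustering step. The shapes, layer sizes, placeholder data, and choice of k-means are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.cluster import KMeans

n_steps, n_vars, embed_dim = 24, 8, 16     # e.g. 24 months x 8 spending categories per region

encoder = models.Sequential([
    layers.Input(shape=(n_steps, n_vars)),
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),  # per-variable local features
    layers.GRU(embed_dim),                                                # whole-series trend summary
])

X = np.random.rand(100, n_steps, n_vars).astype("float32")  # placeholder regional series
embeddings = encoder.predict(X, verbose=0)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(embeddings)  # cluster similar regions
```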

Research Paper Classification Scheme based on CNN with LSTM and GRU (CNN과 LSTM 및 GRU 기반 연구 논문 분류 시스템의 설계 및 구현)

  • Dipto, Biswas;Kang, Jihun;Gil, Joon-Min
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.612-614
    • /
    • 2022
  • Deep learning has recently become a fundamental and essential technique in natural language processing (NLP), as it can model the complex nonlinear relationships that NLP requires. In this paper, we apply LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit) deep learning techniques to research paper classification, and propose a scheme that combines a CNN (Convolutional Neural Network) with LSTM and with GRU, respectively, to classify research papers in specific fields and to recommend papers. Applying word embedding and deep learning techniques to research paper classification, we comparatively analyze the similarity between a word of interest and its surrounding words, as well as the resulting classification performance.
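
A minimal sketch of the CNN-LSTM and CNN-GRU text classifiers compared in this paper; the vocabulary size, sequence length, number of classes, and layer widths are assumptions for illustration only.

```python
from tensorflow.keras import layers, models

def build_text_model(rnn="lstm", vocab=20000, seq_len=300, n_classes=10):
    recurrent = layers.LSTM(64) if rnn == "lstm" else layers.GRU(64)
    return models.Sequential([
        layers.Input(shape=(seq_len,)),               # token id sequence
        layers.Embedding(vocab, 128),                 # word embedding
        layers.Conv1D(64, 5, activation="relu"),      # local n-gram features
        layers.MaxPooling1D(2),
        recurrent,                                    # sequence-level context
        layers.Dense(n_classes, activation="softmax"),
    ])

cnn_lstm = build_text_model("lstm")
cnn_gru = build_text_model("gru")
```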

Anomaly Detection In Real Power Plant Vibration Data by MSCRED Base Model Improved By Subset Sampling Validation (Subset 샘플링 검증 기법을 활용한 MSCRED 모델 기반 발전소 진동 데이터의 이상 진단)

  • Hong, Su-Woong;Kwon, Jang-Woo
    • Journal of Convergence for Information Technology
    • /
    • v.12 no.1
    • /
    • pp.31-38
    • /
    • 2022
  • This paper applies MSCRED (Multi-Scale Convolutional Recurrent Encoder-Decoder), an expert-independent, unsupervised neural-network model for multivariate time series analysis, and uses a training-data sampling technique called Subset Sampling Validation to overcome a limitation of MSCRED: because it is based on an autoencoder, its training data must not be contaminated. Using labeled vibration data from power plant equipment, the classification performance of MSCRED is evaluated with the anomaly score in two cases: 1) when abnormal data is mixed into the training data, and 2) when the abnormal data of case 1 is removed from the training data. Through this, the paper presents an expert-independent anomaly diagnosis framework that is robust to erroneous data and offers a concise and accurate solution across various multivariate time series domains.
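
A hedged sketch of the reconstruction-error anomaly scoring used by encoder-decoder models such as MSCRED: a window is flagged when the residual between its input and its reconstruction exceeds a threshold fitted on presumed-normal training scores. The `model` argument, the mean-squared residual score, and the mean-plus-k-sigma threshold rule are illustrative assumptions, not the paper's exact implementation or its Subset Sampling Validation procedure.

```python
import numpy as np

def anomaly_scores(model, windows):
    """Per-window anomaly score = mean squared reconstruction residual.

    `model` is assumed to be a trained Keras-style encoder-decoder whose
    predict() returns reconstructions with the same shape as `windows`.
    """
    recon = model.predict(windows, verbose=0)
    return np.mean((windows - recon) ** 2, axis=tuple(range(1, windows.ndim)))

def fit_threshold(train_scores, k=3.0):
    # Simple mean + k*std rule on (presumed-normal) training scores; assumption.
    return train_scores.mean() + k * train_scores.std()

# Usage sketch: flag test windows whose score exceeds the fitted threshold.
# threshold = fit_threshold(anomaly_scores(model, train_windows))
# is_anomaly = anomaly_scores(model, test_windows) > threshold
```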

Comparison of Anomaly Detection Performance Based on GRU Model Applying Various Data Preprocessing Techniques and Data Oversampling (다양한 데이터 전처리 기법과 데이터 오버샘플링을 적용한 GRU 모델 기반 이상 탐지 성능 비교)

  • Yoo, Seung-Tae;Kim, Kangseok
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.2
    • /
    • pp.201-211
    • /
    • 2022
  • Reflecting the recent shift in the cybersecurity paradigm, research on anomaly detection methods using machine learning and deep learning, the core technologies for implementing AI, is increasing. In this study, we conducted a comparative study of data preprocessing techniques that can improve the anomaly detection performance of a GRU (Gated Recurrent Unit) neural network-based intrusion detection model, using the open NGIDS-DS (Next Generation IDS Dataset). In addition, to address the class imbalance between normal and attack data, detection performance was compared and analyzed according to the oversampling ratio using a DCGAN (Deep Convolutional Generative Adversarial Networks) based oversampling technique. The experiments show that preprocessing the system call and process execution path features with the Doc2Vec algorithm performed well, and that DCGAN-based oversampling improved detection performance.
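
A minimal sketch of a GRU-based detection model of the kind described above, taking a window of preprocessed feature vectors (for example, Doc2Vec embeddings of system calls and execution paths) and emitting a normal/attack probability; the window length, feature dimension, and unit count are assumptions for illustration, not the study's configuration.

```python
from tensorflow.keras import layers, models

window_len, feat_dim = 50, 64     # 50 events per window, 64-d preprocessed feature vectors

model = models.Sequential([
    layers.Input(shape=(window_len, feat_dim)),
    layers.GRU(64),
    layers.Dense(1, activation="sigmoid"),   # P(attack) for the window
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```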