• Title/Summary/Keyword: deep convolution neural network

Object Tracking Algorithm based on Siamese Network with Local Overlap Confidence (지역 중첩 신뢰도가 적용된 샴 네트워크 기반 객체 추적 알고리즘)

  • Su-Chang Lim;Jong-Chan Kim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1109-1116
    • /
    • 2023
  • Object tracking follows a target through a video sequence using the coordinate information provided as annotation in the first frame. In this paper, we propose a tracking algorithm that combines deep features with region inference modules to improve object tracking accuracy. To obtain sufficient object information, a convolutional neural network was designed with a Siamese network structure. For object region inference, a region proposal network and an overlap confidence module were applied and used for tracking. The performance of the proposed tracking algorithm was evaluated on the Object Tracking Benchmark dataset, achieving 69.1% on the success metric and 89.3% on the precision metric.
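
  As a rough, illustrative sketch only (not the authors' implementation), the core of a Siamese tracker is a shared backbone applied to a template patch and a search region, with a cross-correlation producing a response map; all layer sizes and names below are assumptions, and the region proposal and overlap confidence modules are omitted:

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class SiameseBackbone(nn.Module):
          """Shared convolutional feature extractor applied to both template and search images."""
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 64, kernel_size=7, stride=2), nn.ReLU(),
                  nn.MaxPool2d(3, stride=2),
                  nn.Conv2d(64, 128, kernel_size=3), nn.ReLU(),
                  nn.Conv2d(128, 256, kernel_size=3),
              )

          def forward(self, x):
              return self.features(x)

      def cross_correlation(template_feat, search_feat):
          """Slide the template embedding over the search embedding to obtain a response map."""
          b, c, h, w = template_feat.shape
          search = search_feat.reshape(1, b * c, *search_feat.shape[2:])
          kernel = template_feat.reshape(b * c, 1, h, w)
          response = F.conv2d(search, kernel, groups=b * c)
          return response.reshape(b, c, *response.shape[2:]).sum(dim=1, keepdim=True)

      backbone = SiameseBackbone()
      z = backbone(torch.randn(1, 3, 127, 127))   # template: first-frame target patch
      x = backbone(torch.randn(1, 3, 255, 255))   # search region in the current frame
      score_map = cross_correlation(z, x)         # the peak location indicates the target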

CNN-based damage identification method of tied-arch bridge using spatial-spectral information

  • Duan, Yuanfeng;Chen, Qianyi;Zhang, Hongmei;Yun, Chung Bang;Wu, Sikai;Zhu, Qi
    • Smart Structures and Systems
    • /
    • v.23 no.5
    • /
    • pp.507-520
    • /
    • 2019
  • In the structural health monitoring field, damage detection has commonly been carried out based on a structural model and engineering features related to that model. However, the extracted features are often subject to various errors, which keeps pattern recognition for damage detection challenging. In this study, an automated damage identification method is presented for hanger cables in a tied-arch bridge using a convolutional neural network (CNN). Raw Fourier amplitude spectra (FAS) of acceleration responses are used without complex data pre-processing for modal identification. A CNN is a kind of deep neural network that typically consists of convolution, pooling, and fully-connected layers. A numerical simulation study was performed for multiple damage detection in the hangers using ambient wind vibration data on the bridge deck. The results show that the present CNN using FAS data performs better under various damage states than a CNN using time-history data and a traditional neural network using FAS. The robustness of the present CNN was also demonstrated under various observational noise levels and wind speeds.
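
  The paper's CNN takes Fourier amplitude spectra directly as input. A minimal sketch of that kind of conv/pool/fully-connected pipeline, with hypothetical sensor counts, spectrum lengths, and damage-state counts (none taken from the paper), could look like this:

      import torch
      import torch.nn as nn

      # Hypothetical sizes: FAS with 1024 frequency bins per acceleration channel,
      # 8 sensor channels, and 5 candidate damage states.
      n_sensors, n_bins, n_classes = 8, 1024, 5

      model = nn.Sequential(
          nn.Conv1d(n_sensors, 32, kernel_size=9, padding=4), nn.ReLU(),
          nn.MaxPool1d(4),
          nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
          nn.MaxPool1d(4),
          nn.Flatten(),
          nn.Linear(64 * (n_bins // 16), 128), nn.ReLU(),
          nn.Linear(128, n_classes),                 # one score per damage state
      )

      fas = torch.randn(16, n_sensors, n_bins)       # a batch of Fourier amplitude spectra
      logits = model(fas)                            # (16, n_classes)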

RDNN: Rumor Detection Neural Network for Veracity Analysis in Social Media Text

  • SuthanthiraDevi, P;Karthika, S
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.12
    • /
    • pp.3868-3888
    • /
    • 2022
  • A widely used social networking service like Twitter can disseminate information to large groups of people even during a pandemic. At the same time, it is a convenient medium for sharing irrelevant and unverified information online, and it poses a potential threat to society. In this research, conventional machine learning algorithms are analyzed to classify data as either non-rumor or rumor. Machine learning techniques have limited tuning capability and make decisions based only on what they have learned. To tackle this problem, the authors propose a deep learning-based Rumor Detection Neural Network (RDNN) model to predict rumor tweets in real-world events. The model comprises three layers: an AttCNN layer that extracts local and position-invariant features from the data, an AttBi-LSTM layer that extracts important semantic and contextual information, and an HPOOL layer that combines the down-sampled patches of the input feature maps from the average and maximum pooling layers. A dataset from Kaggle and the ground dataset #gaja are used to train the proposed network to determine the veracity of rumors. The experimental results of the RDNN classifier demonstrate accuracies of 93.24% and 95.41% in identifying rumor tweets in real-time events.
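
  As a loose sketch only, the described pipeline (CNN features for local patterns, Bi-LSTM for context, and hybrid average/max pooling) can be approximated as below; the attention mechanisms of the AttCNN and AttBi-LSTM layers are omitted, and every size and name is an assumption rather than the authors' configuration:

      import torch
      import torch.nn as nn

      class RumorClassifierSketch(nn.Module):
          """CNN for local features -> BiLSTM for context -> hybrid avg/max pooling -> classifier."""
          def __init__(self, vocab_size=20000, emb_dim=128, conv_ch=64, lstm_hidden=64):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, emb_dim)
              self.conv = nn.Conv1d(emb_dim, conv_ch, kernel_size=3, padding=1)
              self.bilstm = nn.LSTM(conv_ch, lstm_hidden, bidirectional=True, batch_first=True)
              self.fc = nn.Linear(4 * lstm_hidden, 2)     # rumor / non-rumor

          def forward(self, tokens):                      # tokens: (B, T) integer word ids
              x = self.embed(tokens).transpose(1, 2)      # (B, emb_dim, T)
              x = torch.relu(self.conv(x)).transpose(1, 2)
              x, _ = self.bilstm(x)                       # (B, T, 2 * lstm_hidden)
              pooled = torch.cat([x.mean(dim=1), x.max(dim=1).values], dim=1)  # hybrid pooling
              return self.fc(pooled)

      logits = RumorClassifierSketch()(torch.randint(0, 20000, (8, 50)))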

A Deep Learning Model for Extracting Consumer Sentiments using Recurrent Neural Network Techniques

  • Ranjan, Roop;Daniel, AK
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.8
    • /
    • pp.238-246
    • /
    • 2021
  • The rapid rise of the Internet and social media has resulted in a large number of text-based reviews being posted on social media sites. In the age of social media, using machine learning technologies to analyze the emotional context of comments aids in understanding the QoS (Quality of Service) of any product or service, and the classification and analysis of user reviews helps improve it. Unlike traditional categorization models, which are based on sets of rules, machine learning algorithms have evolved into a powerful tool for analyzing user sentiment. In sentiment categorization, Bidirectional Long Short-Term Memory (BiLSTM) has shown significant results, and the Convolutional Neural Network (CNN) has shown promising results. Using convolution and pooling layers, a CNN can successfully extract local information, while BiLSTM uses two LSTM directions to increase the amount of context available to the model. The suggested hybrid model combines the benefits of these two deep learning-based algorithms. The data source for analysis and classification was user reviews of Indian Railway services on Twitter. The suggested hybrid model uses the Keras Embedding technique as its input source, takes in data, and generates lower-dimensional features that lead to a classification result. Its performance was compared using Keras and Word2Vec embeddings, and the proposed model showed a significant improvement in response with an accuracy of 95.19 percent.
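
  A minimal Keras sketch of such a CNN-BiLSTM hybrid fed by a Keras Embedding layer is shown below; the vocabulary size, sequence length, and layer widths are placeholders, not the paper's settings:

      from tensorflow.keras import layers, models

      # Placeholder hyperparameters; the paper's exact settings are not reproduced here.
      vocab_size, max_len, emb_dim = 20000, 100, 128

      model = models.Sequential([
          layers.Input(shape=(max_len,)),
          layers.Embedding(vocab_size, emb_dim),                     # Keras Embedding as the input source
          layers.Conv1D(64, 3, activation="relu", padding="same"),   # CNN: local n-gram features
          layers.MaxPooling1D(2),
          layers.Bidirectional(layers.LSTM(64)),                     # BiLSTM: context in both directions
          layers.Dense(32, activation="relu"),
          layers.Dense(1, activation="sigmoid"),                     # positive / negative sentiment
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
      model.summary()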

Development of Combined Architecture of Multiple Deep Convolutional Neural Networks for Improving Video Face Identification (비디오 얼굴 식별 성능개선을 위한 다중 심층합성곱신경망 결합 구조 개발)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.6
    • /
    • pp.655-664
    • /
    • 2019
  • In this paper, we propose a novel way of combining multiple deep convolutional neural network (DCNN) architectures for accurate video face identification by adopting a serial combination of 3D and 2D DCNNs. The proposed method first divides an input video sequence (to be recognized) into a number of sub-video sequences. These sub-video sequences are fed to the 3D DCNN to obtain class-confidence scores for the input video by considering both the temporal and spatial facial feature characteristics of the sequence. The class-confidence scores obtained from the sub-video sequences are combined into our proposed class-confidence matrix, which is then used as input for training the 2D DCNN that is serially linked to the 3D DCNN. Finally, the fine-tuned, serially combined DCNN framework is applied to recognize the identity present in a given test video sequence. To verify the effectiveness of our proposed method, extensive comparative experiments were conducted on the COX face database with its standard face identification protocols. Experimental results showed that our method achieves a better or comparable identification rate compared to other state-of-the-art video FR methods.
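
  A toy sketch of the serial 3D-to-2D arrangement is given below, assuming invented sizes for the number of sub-sequences, frames, and identities; it only illustrates how per-sub-sequence class confidences can be stacked into a matrix and re-classified by a 2D CNN:

      import torch
      import torch.nn as nn

      n_classes, n_subseq, frames_per_sub = 10, 8, 16     # invented sizes for illustration

      # Stage 1: a small 3D CNN maps each sub-video to a class-confidence vector.
      cnn3d = nn.Sequential(
          nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
          nn.AdaptiveAvgPool3d(1), nn.Flatten(),
          nn.Linear(16, n_classes), nn.Softmax(dim=1),
      )

      # Stage 2: a 2D CNN reads the stacked class-confidence matrix (n_subseq x n_classes).
      cnn2d = nn.Sequential(
          nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
          nn.Flatten(),
          nn.Linear(8 * n_subseq * n_classes, n_classes),
      )

      video = torch.randn(n_subseq, 3, frames_per_sub, 64, 64)        # sub-sequences of one input video
      conf_matrix = cnn3d(video)                                      # (n_subseq, n_classes)
      logits = cnn2d(conf_matrix.reshape(1, 1, n_subseq, n_classes))  # final identity prediction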

Enhanced Stereo Matching Algorithm based on 3-Dimensional Convolutional Neural Network (3차원 합성곱 신경망 기반 향상된 스테레오 매칭 알고리즘)

  • Wang, Jian;Noh, Jackyou
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.16 no.5
    • /
    • pp.179-186
    • /
    • 2021
  • For stereo matching based on deep learning, the design of the network structure is crucial to the calculation of the matching cost, and the high computation time of convolutional neural networks in image processing also needs to be addressed. In this paper, a stereo matching method using a sparse loss volume in the parallax (disparity) dimension is proposed. A sparse 3D loss volume is constructed by translating the right-view feature map with a wide step length, which reduces the video memory and computing resources required by the 3D convolution module several times over. To improve accuracy, the matching loss is non-linearly up-sampled in the parallax dimension using a multi-category output, and the model is trained with a combination of two loss functions. Compared with the benchmark algorithm, the proposed algorithm not only improves accuracy but also shortens the running time by about 30%.
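
  As an illustration of the sparse-volume idea only (not the paper's network), the snippet below builds a cost volume at every fourth disparity by translating the right-view feature map, so the 3D convolution sees a quarter of the usual disparity planes; all shapes are assumptions:

      import torch
      import torch.nn as nn

      def sparse_cost_volume(left_feat, right_feat, max_disp=64, step=4):
          """Build a matching-cost volume only at every `step`-th disparity by translating the right features."""
          volume = []
          for d in range(0, max_disp, step):
              shifted = torch.zeros_like(right_feat)
              if d == 0:
                  shifted = right_feat
              else:
                  shifted[:, :, :, d:] = right_feat[:, :, :, :-d]    # translate right view by disparity d
              volume.append(torch.cat([left_feat, shifted], dim=1))  # concatenated features as cost input
          return torch.stack(volume, dim=2)                          # (B, 2C, max_disp // step, H, W)

      left = torch.randn(1, 32, 64, 128)                  # left-view feature map (assumed shape)
      right = torch.randn(1, 32, 64, 128)                 # right-view feature map
      cost = sparse_cost_volume(left, right)              # (1, 64, 16, 64, 128)
      regularize = nn.Conv3d(64, 1, kernel_size=3, padding=1)
      disparity_scores = regularize(cost)                 # one aggregated score per sampled disparity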

Multichannel Convolution Neural Network Classification for the Detection of Histological Pattern in Prostate Biopsy Images

  • Bhattacharjee, Subrata;Prakash, Deekshitha;Kim, Cho-Hee;Choi, Heung-Kook
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.12
    • /
    • pp.1486-1495
    • /
    • 2020
  • The analysis of digital microscopy images plays a vital role in computer-aided diagnosis (CAD) and prognosis. The main purpose of this paper is to develop a machine learning technique to predict histological grades in prostate biopsies. To perform multiclass classification, an AI-based deep learning algorithm, a multichannel convolutional neural network (MCCNN), was developed by connecting layers of artificial neurons inspired by the human brain. The histological grades used for the analysis are benign, grade 3, grade 4, and grade 5. The proposed approach classifies multiple patterns of images extracted from the whole slide image (WSI) of a prostate biopsy based on the Gleason grading system. The MCCNN model takes three input channels (red, green, and blue), extracts computational features from each channel, and concatenates them for multiclass classification. Stain normalization was carried out for each histological grade to standardize the intensity and contrast level of the images. The proposed model was trained, validated, and tested with histopathological images and achieved average accuracies of 96.4%, 94.6%, and 95.1%, respectively.
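
  A compact sketch of the three-branch idea, with one small CNN per colour channel and concatenated features feeding a four-class head, is given below; the layer sizes are illustrative assumptions, not the paper's architecture:

      import torch
      import torch.nn as nn

      class ChannelBranch(nn.Module):
          """Small CNN applied independently to one stain-normalized colour channel."""
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                  nn.Flatten(),
              )

          def forward(self, x):
              return self.net(x)

      class MultichannelCNNSketch(nn.Module):
          """One branch per colour channel; concatenated features feed a four-class head
          (benign, grade 3, grade 4, grade 5)."""
          def __init__(self, n_classes=4):
              super().__init__()
              self.branches = nn.ModuleList([ChannelBranch() for _ in range(3)])
              self.classifier = nn.Linear(3 * 32, n_classes)

          def forward(self, rgb):                          # rgb: (B, 3, H, W) image patch
              feats = [branch(rgb[:, i:i + 1]) for i, branch in enumerate(self.branches)]
              return self.classifier(torch.cat(feats, dim=1))

      logits = MultichannelCNNSketch()(torch.randn(2, 3, 256, 256))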

Prediction for Energy Demand Using 1D-CNN and Bidirectional LSTM in Internet of Energy (에너지인터넷에서 1D-CNN과 양방향 LSTM을 이용한 에너지 수요예측)

  • Jung, Ho Cheul;Sun, Young Ghyu;Lee, Donggu;Kim, Soo Hyun;Hwang, Yu Min;Sim, Issac;Oh, Sang Keun;Song, Seung-Ho;Kim, Jin Young
    • Journal of IKEEE
    • /
    • v.23 no.1
    • /
    • pp.134-142
    • /
    • 2019
  • As the development of internet of energy (IoE) technologies and the spread of various electronic devices have diversified energy consumption patterns, the reliability of demand prediction has decreased, causing problems in the optimization of power generation and the stabilization of power supply. In this study, we propose a deep learning method, 1-Dimensional Convolution and Bidirectional Long Short-Term Memory (1D-ConvBLSTM), that combines a convolutional neural network (CNN) and a bidirectional LSTM (BLSTM) for highly reliable demand forecasting by effectively extracting energy consumption patterns. In the experimental results, demand is predicted with the proposed deep learning method for various numbers of learning iterations and feature maps, and it is verified that the test data can be predicted with a small number of iterations.
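
  A minimal sketch of a 1D-convolution-plus-BiLSTM forecaster of this kind is shown below; the window length, channel counts, and hidden sizes are assumptions for illustration:

      import torch
      import torch.nn as nn

      class ConvBLSTMSketch(nn.Module):
          """1D convolution extracts local consumption patterns; a bidirectional LSTM models
          the sequence of those features; a linear head predicts the next demand value."""
          def __init__(self, n_features=1, conv_ch=32, hidden=64):
              super().__init__()
              self.conv = nn.Conv1d(n_features, conv_ch, kernel_size=3, padding=1)
              self.blstm = nn.LSTM(conv_ch, hidden, bidirectional=True, batch_first=True)
              self.head = nn.Linear(2 * hidden, 1)

          def forward(self, x):                     # x: (B, T, n_features) past consumption window
              x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
              out, _ = self.blstm(x)
              return self.head(out[:, -1])          # forecast for the next time step

      window = torch.randn(16, 96, 1)               # e.g. 96 past load readings (assumed window length)
      forecast = ConvBLSTMSketch()(window)          # (16, 1)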

SKU-Net: Improved U-Net using Selective Kernel Convolution for Retinal Vessel Segmentation

  • Hwang, Dong-Hwan;Moon, Gwi-Seong;Kim, Yoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.29-37
    • /
    • 2021
  • In this paper, we propose a deep learning-based retinal vessel segmentation model for handling the multi-scale information of fundus images. We integrate selective kernel convolution into a U-Net-based convolutional neural network. The proposed model extracts and segments retinal blood vessels of various shapes and sizes, which are important for diagnosing eye-related diseases from fundus images. The proposed model consists of standard convolutions and selective kernel convolutions. While a standard convolutional layer extracts information with a single kernel size, the selective kernel convolution extracts information from branches with various kernel sizes and combines them adaptively through split-attention. To evaluate the performance of the proposed model, we used the DRIVE and CHASE DB1 datasets, on which the proposed model achieved F1 scores of 82.91% and 81.71%, respectively, confirming that it is effective in segmenting retinal blood vessels.
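
  A simplified selective kernel convolution block, with two kernel sizes fused by channel-wise soft attention, might look like the sketch below; it is an approximation of the general technique, not the paper's exact module:

      import torch
      import torch.nn as nn

      class SelectiveKernelConvSketch(nn.Module):
          """Two branches with different receptive fields; channel-wise soft attention decides
          how much each branch contributes (split-attention style fusion)."""
          def __init__(self, channels, reduction=4):
              super().__init__()
              self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
              self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
              self.fc = nn.Sequential(
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                  nn.Linear(channels, channels // reduction), nn.ReLU(),
                  nn.Linear(channels // reduction, 2 * channels),
              )

          def forward(self, x):
              b3, b5 = self.branch3(x), self.branch5(x)
              attn = self.fc(b3 + b5).reshape(x.size(0), 2, x.size(1), 1, 1).softmax(dim=1)
              return attn[:, 0] * b3 + attn[:, 1] * b5

      out = SelectiveKernelConvSketch(32)(torch.randn(1, 32, 64, 64))   # same spatial size as input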

EPS Gesture Signal Recognition using Deep Learning Model (심층 학습 모델을 이용한 EPS 동작 신호의 인식)

  • Lee, Yu ra;Kim, Soo Hyung;Kim, Young Chul;Na, In Seop
    • Smart Media Journal
    • /
    • v.5 no.3
    • /
    • pp.35-41
    • /
    • 2016
  • In this paper, we propose hand-gesture signal recognition based on an EPS (Electronic Potential Sensor) using deep learning models. Signals extracted from the electric-field-based EPS contain a large amount of noise, which must be removed in pre-processing. After the noise is removed with a filter based on frequency features, the signals are reconstructed through a dimensional transformation so that the convolution operation can be applied, overcoming the limitation that the raw signal is only a one-dimensional sequence of voltage values. The reconstructed signal data are then classified and recognized using a deep learning model with multiple learning layers. Since probability-based statistical models are sensitive to initial parameters, their results can vary after training in the modeling phase; deep learning models can overcome this problem through their multiple training layers. In the experiments, we used two different deep learning structures, a convolutional neural network and a recurrent neural network, and compared them with a statistical model algorithm on four kinds of gestures. The method using the convolutional neural network achieved better recognition results than the other algorithms for EPS gesture signal recognition.
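
  As a very rough sketch under invented assumptions (signal length, frequency cut-off, image size, and class count are all placeholders), the pipeline of frequency-domain filtering, dimensional transformation, and CNN classification could be prototyped as:

      import numpy as np
      import torch
      import torch.nn as nn

      # Placeholders: a 1024-sample voltage signal per gesture, reshaped to a 32x32 "image"
      # so that 2D convolutions can be applied, and four gesture classes.
      signal = np.random.randn(1024).astype(np.float32)

      # Noise-suppression sketch: keep only low-frequency components via FFT masking
      # (an assumed cut-off; the paper's frequency-based filter is not reproduced here).
      spectrum = np.fft.rfft(signal)
      spectrum[64:] = 0
      filtered = np.fft.irfft(spectrum, n=signal.size)

      image = torch.from_numpy(filtered).float().reshape(1, 1, 32, 32)  # dimensional transformation

      classifier = nn.Sequential(
          nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Flatten(),
          nn.Linear(32 * 8 * 8, 4),                                     # four gesture classes
      )
      logits = classifier(image)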