Title/Summary/Keyword: Deep Features


Deep learning-based sensor fault detection using S-Long Short Term Memory Networks

  • Li, Lili;Liu, Gang;Zhang, Liangliang;Li, Qing
    • Structural Monitoring and Maintenance, v.5 no.1, pp.51-65, 2018
  • A number of sensing techniques have been implemented for detecting defects in civil infrastructures in place of onsite human inspection in structural health monitoring. However, the issue of faults in the sensors themselves has received little attention, even though it can lead to incorrect interpretation of data and false alarms. To overcome these challenges, this article presents a deep learning-based method with a new architecture of Stateful Long Short Term Memory Neural Networks (S-LSTM NN) for detecting sensor faults without going into the details of the fault features. Because LSTMs are capable of learning data features automatically, the proposed method works without an accurate mathematical model. The detection of four types of sensor faults is studied in this paper, using non-stationary acceleration responses of a three-span continuous bridge under operational conditions. A deep network model is applied to the measured bridge data to estimate the sensor output and detect sensor faults. Another set of sensor output data is used to supervise the network parameters, and the backpropagation algorithm fine-tunes the parameters to establish a deep self-coding network model. The response residuals between the true values and the values predicted by the deep S-LSTM network are statistically analyzed to determine the fault threshold of each sensor. An experimental study with a cable-stayed bridge further indicated that the proposed method is robust in detecting sensor faults.
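
To make the residual-thresholding idea concrete, here is a minimal PyTorch sketch: an LSTM predicts each sensor sample from a window of past samples, and samples whose prediction residual exceeds a statistical threshold are flagged. The window length, layer sizes, and the mean-plus-k-sigma threshold rule are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of LSTM-based sensor fault detection via residual thresholding.
import torch
import torch.nn as nn

class SensorPredictor(nn.Module):
    """Predicts the next sensor sample from a window of past samples."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                      # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # predict the next sample

def fault_mask(model, windows, targets, k=3.0):
    """Flag samples whose prediction residual exceeds mean + k * std."""
    with torch.no_grad():
        residuals = (model(windows) - targets).abs().squeeze(-1)
    threshold = residuals.mean() + k * residuals.std()  # assumed threshold rule
    return residuals > threshold               # True = suspected faulty sample
```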

Pedestrian Classification using CNN's Deep Features and Transfer Learning (CNN의 깊은 특징과 전이학습을 사용한 보행자 분류)

  • Chung, Soyoung;Chung, Min Gyo
    • Journal of Internet Computing and Services, v.20 no.4, pp.91-102, 2019
  • In autonomous driving systems, the ability to classify pedestrians in images captured by cameras is very important for pedestrian safety. In the past, features of pedestrians were extracted with HOG (Histogram of Oriented Gradients) or SIFT (Scale-Invariant Feature Transform) and then classified using an SVM (Support Vector Machine). However, extracting pedestrian characteristics in such a handcrafted manner has many limitations. Therefore, this paper proposes a method to classify pedestrians reliably and effectively using a CNN's (Convolutional Neural Network) deep features and transfer learning. We experimented with both the fixed feature extractor and the fine-tuning methods, the two representative transfer learning techniques. In addition, for the fine-tuning method we added a new scheme, called M-Fine (Modified Fine-tuning), which divides the layers into transferred and non-transferred parts in three different sizes and adjusts the weights only for layers belonging to the non-transferred parts. Experiments on the INRIA Person dataset with five CNN models (VGGNet, DenseNet, Inception V3, Xception, and MobileNet) showed that CNN deep features perform better than handcrafted features such as HOG and SIFT, and that the accuracy of Xception (threshold = 0.5) is the highest at 99.61%. MobileNet, which achieved performance similar to Xception with 80% fewer parameters, was the best in terms of efficiency. Among the three transfer learning schemes tested, the fine-tuning method performed best. The performance of the M-Fine method was comparable to or slightly lower than that of the fine-tuning method, but higher than that of the fixed feature extractor method.
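
The fixed feature extractor and fine-tuning schemes differ only in which backbone parameters stay frozen. Below is a hedged sketch using a torchvision MobileNetV2; the split index and the binary head are illustrative assumptions, and the M-Fine scheme would roughly correspond to trying several values of that split index.

```python
# Sketch of fixed-feature-extractor vs. partial fine-tuning transfer learning.
import torch.nn as nn
from torchvision import models

def build_pedestrian_classifier(fine_tune_from=None):
    """fine_tune_from=None freezes the whole backbone (fixed extractor);
    an integer index leaves layers from that point onward trainable,
    loosely mirroring a transferred / non-transferred split."""
    backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    for i, layer in enumerate(backbone.features):
        trainable = fine_tune_from is not None and i >= fine_tune_from
        for p in layer.parameters():
            p.requires_grad = trainable
    backbone.classifier = nn.Sequential(       # new head is always trainable
        nn.Dropout(0.2),
        nn.Linear(backbone.last_channel, 1),   # binary: pedestrian / not
    )
    return backbone

# Usage: build_pedestrian_classifier() for the fixed extractor,
# build_pedestrian_classifier(fine_tune_from=14) for partial fine-tuning.
```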

Human Gait Recognition Based on Spatio-Temporal Deep Convolutional Neural Network for Identification

  • Zhang, Ning;Park, Jin-ho;Lee, Eung-Joo
    • Journal of Korea Multimedia Society, v.23 no.8, pp.927-939, 2020
  • Gait recognition can identify people from a long distance, which is very important for improving the intelligence of monitoring systems. Among many human features, gait features have the advantages of being remotely observable, robust, and secure. Traditional gait feature extraction, shaped by the development of behavior recognition, relies on manual feature design and cannot meet the needs of fine-grained gait recognition. The emergence of deep convolutional neural networks has freed researchers from complex feature-design engineering, since usable features can be learned automatically from data, and it has been widely adopted. In this paper, we conduct feature metric learning in three-dimensional space by combining the three-dimensional convolutional features of the gait sequence with a Siamese structure. This method captures information in both the spatial and temporal dimensions from the continuous periodic gait sequence, further improving the accuracy and practicality of gait recognition.
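
A minimal sketch of the Siamese 3D-CNN idea follows: two gait clips pass through one shared 3D-convolutional encoder and are compared by embedding distance. Channel counts, clip shape, and the normalization step are illustrative assumptions, not the paper's exact network.

```python
# Hedged sketch of metric learning over 3-D convolutional gait features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaitEncoder3D(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(                 # input: (B, 1, T, H, W)
            nn.Conv3d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),               # pool over time and space
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, clip):
        return F.normalize(self.fc(self.conv(clip).flatten(1)), dim=1)

def siamese_distance(encoder, clip_a, clip_b):
    """Small distance -> likely the same person's gait."""
    return (encoder(clip_a) - encoder(clip_b)).pow(2).sum(dim=1)
```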

Video Quality Assessment based on Deep Neural Network

  • Zhiming Shi
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.8, pp.2053-2067, 2023
  • This paper proposes two video quality assessment methods based on deep neural networks. (i) The first method uses IQF-CNN (a convolutional neural network based on image quality features) to build an image quality assessment model. The model is tested on the LIVE image database, and the experiments show that it is effective, so it is extended to video quality assessment: first, the quality of every frame of the video is predicted; then the relationships between frames are analyzed with a hysteresis function and different window functions to improve the accuracy of the video quality assessment. (ii) The second method is a video quality assessment method based on a convolutional neural network (CNN) and a gated recurrent unit (GRU) network. First, the spatial features of video frames are extracted using the CNN; next, the temporal features of the frame sequence are extracted using the GRU; finally, the extracted temporal and spatial features are passed through a fully connected layer to obtain the video quality score. All the proposed methods are verified on video databases and compared with other methods.
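
The second pipeline (CNN for spatial features, GRU for temporal features, fully connected layer for the score) can be sketched as follows. The ResNet-18 backbone and the layer sizes are illustrative assumptions standing in for the paper's unspecified networks.

```python
# Hedged sketch of a CNN + GRU video quality regressor.
import torch
import torch.nn as nn
from torchvision import models

class CnnGruVQA(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(resnet.children())[:-1])  # drop the fc
        self.gru = nn.GRU(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, video):                  # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).flatten(1)   # (B*T, 512) spatial
        _, h = self.gru(feats.view(b, t, -1))              # temporal pooling
        return self.head(h[-1]).squeeze(-1)    # one quality score per video
```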

In-depth Recommendation Model Based on Self-Attention Factorization

  • Hongshuang Ma;Qicheng Liu
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.3, pp.721-739, 2023
  • Rating prediction is an important issue in recommender systems, and its accuracy affects the experience of the user and the revenue of the company. Traditional recommender systems use Factorization Machines for rating prediction, with every feature given the same weight, which leads to inaccurate ratings and limited data representation. This study proposes a deep recommendation model based on self-attention Factorization (SAFMR) to solve these problems. The model uses Convolutional Neural Networks to extract features from user and item reviews. The obtained features are fed into self-attention Factorization Machines, where the self-attention network automatically learns the dependencies among the features and distinguishes the weights of different features, thereby reducing the prediction error. The model was experimentally evaluated on datasets from six categories, comparing MSE, NDCG, and running time on several real datasets. The experiments demonstrated that the SAFMR model achieves excellent rating prediction results and recommendation correlations, verifying the effectiveness of the model.
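
The core mechanism, self-attention re-weighting feature embeddings before a factorization-machine-style pairwise interaction, can be sketched roughly as below. This is an assumption-laden reconstruction, not the SAFMR architecture itself: the embedding size, head count, and omission of the review-CNN stage are all simplifications.

```python
# Hedged sketch of attention-weighted factorization-machine interactions.
import torch
import torch.nn as nn

class SelfAttentionFM(nn.Module):
    def __init__(self, num_features, embed_dim=32, heads=4):
        super().__init__()
        self.embed = nn.Embedding(num_features, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        self.linear = nn.Embedding(num_features, 1)   # first-order FM term

    def forward(self, ids):                    # ids: (B, num_fields)
        v = self.embed(ids)                    # (B, F, D) field embeddings
        v, _ = self.attn(v, v, v)              # attention re-weights features
        # FM second-order term: 0.5 * ((sum_i v_i)^2 - sum_i v_i^2)
        interaction = 0.5 * (v.sum(1).pow(2) - v.pow(2).sum(1)).sum(1)
        return self.linear(ids).sum(dim=(1, 2)) + interaction   # rating score
```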

A Deep Learning Application for Automated Feature Extraction in Transaction-based Machine Learning (트랜잭션 기반 머신러닝에서 특성 추출 자동화를 위한 딥러닝 응용)

  • Woo, Deock-Chae;Moon, Hyun Sil;Kwon, Suhnbeom;Cho, Yoonho
    • Journal of Information Technology Services, v.18 no.2, pp.143-159, 2019
  • Machine learning (ML) is a method of fitting given data to a mathematical model to derive insights or make predictions. In the age of big data, where the amount of available data increases exponentially due to the development of information technology and smart devices, ML shows high prediction performance through unbiased pattern detection. Feature engineering, which generates the features that can explain the problem to be solved, has a great influence on performance in the ML process, and its importance is continuously emphasized. Despite this importance, it is still considered a difficult task, as it requires a thorough understanding of the domain characteristics and the source data, as well as an iterative procedure. Therefore, we propose methods that apply deep learning to reduce the complexity and difficulty of feature extraction and to improve the performance of ML models. The most common reason for the superior performance of deep learning techniques on complex unstructured data is that, unlike other techniques, they can extract features from the source data itself. To bring these advantages to business problems, we propose deep learning-based methods that can automatically extract features from transaction data or directly predict and classify target variables. In particular, based on the structural similarity between transaction data and text data, we applied techniques that show high performance in existing text processing, and we verified the suitability of each method according to the characteristics of the transaction data. Our study not only explores the possibility of automated feature extraction but also provides a benchmark model that shows a certain level of performance before a human performs the feature extraction task. It is also expected to provide guidelines for choosing a suitable deep learning model based on the business problem and the data characteristics.
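
The text-processing analogy can be illustrated by treating a transaction as a sequence of item codes and letting a 1-D convolutional encoder learn features end-to-end. This is a hedged sketch under assumed vocabulary size and layer widths; the paper does not specify which text-processing architecture it adapts.

```python
# Sketch of text-style automated feature extraction from transaction sequences.
import torch
import torch.nn as nn

class TransactionEncoder(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=3, padding=1)
        self.head = nn.Linear(128, num_classes)

    def forward(self, item_ids):               # (B, seq_len) of item codes
        x = self.embed(item_ids).transpose(1, 2)          # (B, D, L) for Conv1d
        x = torch.relu(self.conv(x)).max(dim=2).values    # max-pool over time
        return self.head(x)                    # predict the target variable
```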

Optimization of Cyber-Attack Detection Using the Deep Learning Network

  • Duong, Lai Van
    • International Journal of Computer Science & Network Security, v.21 no.7, pp.159-168, 2021
  • Detecting cyber-attacks using machine learning or deep learning is being widely studied and applied in network intrusion detection systems, and the application of deep learning algorithms has yielded many good results. However, because each deep learning model has a different architecture and characteristics, with certain advantages and disadvantages, individual models are only suitable for specific datasets or features. In this paper, to optimize the process of detecting cyber-attacks, we propose building a new deep learning network model based on the association and combination of individual deep learning models. In particular, based on the architectures of two deep learning models, the Convolutional Neural Network (CNN) and Long Short Term Memory (LSTM), we combine them into a single deep learning network for detecting cyber-attacks from network traffic. The experimental results in Section IV.D demonstrate that the proposed CNN-LSTM model is effective, as its results are much better than those of the individual deep learning models on all measures.
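
A common way to combine the two architectures, and a plausible reading of the paper's design, is to let a 1-D CNN extract local patterns from a flow's feature sequence and an LSTM model their order. The sketch below follows that pattern; the feature count, layer sizes, and the CNN-before-LSTM ordering are assumptions.

```python
# Hedged sketch of a combined CNN-LSTM network-traffic classifier.
import torch
import torch.nn as nn

class CnnLstmIDS(nn.Module):
    def __init__(self, num_features=1, hidden=64, num_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(num_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)   # attack vs. benign

    def forward(self, flows):                  # flows: (B, num_features, L)
        x = self.cnn(flows).transpose(1, 2)    # (B, L/2, 32) for the LSTM
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])
```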

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems, v.17 no.2, pp.337-351, 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. Firstly, the video is preprocessed. Then, a double-layer cascade structure is used to detect the face in each video image. In addition, two deep convolutional neural networks are used to extract the temporal-domain and spatial-domain facial features in the video: a spatial convolutional neural network extracts spatial information features from each static expression frame, and a temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. The experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method are as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
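
The multiplicative fusion of the two streams is the distinctive step and can be sketched as element-wise multiplication of the two feature vectors. The tiny stand-in CNNs, the flow-stack depth, and the feature dimension below are illustrative assumptions, not the paper's networks.

```python
# Hedged sketch of two-stream (spatial + optical-flow) multiplication fusion.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        def stream(in_ch):                     # a tiny CNN stand-in per stream
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim),
            )
        self.spatial = stream(3)               # static RGB expression frame
        self.temporal = stream(10)             # e.g., 5 stacked 2-ch flow fields

    def forward(self, frame, flow):
        fused = self.spatial(frame) * self.temporal(flow)  # multiplication fusion
        return fused   # fed to an SVM (or a linear layer) for classification
```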

Two-stage Deep Learning Model with LSTM-based Autoencoder and CNN for Crop Classification Using Multi-temporal Remote Sensing Images

  • Kwak, Geun-Ho;Park, No-Wook
    • Korean Journal of Remote Sensing, v.37 no.4, pp.719-731, 2021
  • This study proposes a two-stage hybrid classification model for crop classification using multi-temporal remote sensing images; the model combines feature embedding by using an autoencoder (AE) with a convolutional neural network (CNN) classifier to fully utilize features including informative temporal and spatial signatures. Long short-term memory (LSTM)-based AE (LAE) is fine-tuned using class label information to extract latent features that contain less noise and useful temporal signatures. The CNN classifier is then applied to effectively account for the spatial characteristics of the extracted latent features. A crop classification experiment with multi-temporal unmanned aerial vehicle images is conducted to illustrate the potential application of the proposed hybrid model. The classification performance of the proposed model is compared with various combinations of conventional deep learning models (CNN, LSTM, and convolutional LSTM) and different inputs (original multi-temporal images and features from stacked AE). From the crop classification experiment, the best classification accuracy was achieved by the proposed model that utilized the latent features by fine-tuned LAE as input for the CNN classifier. The latent features that contain useful temporal signatures and are less noisy could increase the class separability between crops with similar spectral signatures, thereby leading to superior classification accuracy. The experimental results demonstrate the importance of effective feature extraction and the potential of the proposed classification model for crop classification using multi-temporal remote sensing images.
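
The feature-embedding stage, an LSTM autoencoder over each pixel's multi-temporal spectral sequence, can be sketched as follows. Band count, latent size, and the repeat-latent decoding scheme are illustrative assumptions; the paper's fine-tuning with class labels is omitted here.

```python
# Hedged sketch of an LSTM autoencoder for multi-temporal feature embedding.
import torch
import torch.nn as nn

class LstmAE(nn.Module):
    def __init__(self, n_bands=4, latent=16):
        super().__init__()
        self.encoder = nn.LSTM(n_bands, latent, batch_first=True)
        self.decoder = nn.LSTM(latent, n_bands, batch_first=True)

    def forward(self, seq):                    # seq: (B, T, n_bands)
        _, (h, _) = self.encoder(seq)          # compress the time series
        z = h[-1].unsqueeze(1).repeat(1, seq.size(1), 1)   # repeat over time
        recon, _ = self.decoder(z)             # reconstruct the sequence
        return recon, h[-1]                    # reconstruction and latent feature

# Training minimizes an MSE reconstruction loss; the latent features are then
# arranged spatially as input patches for the downstream CNN classifier.
```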

An Analysis on the Properties of Features against Various Distortions in Deep Neural Networks

  • Kang, Jung Heum;Jeong, Hye Won;Choi, Chang Kyun;Ali, Muhammad Salman;Bae, Sung-Ho;Kim, Hui Yong
    • Journal of Broadcast Engineering, v.26 no.7, pp.868-876, 2021
  • Deep neural network models achieve remarkable performance in the fields of object detection and instance segmentation. To train these models, features are first extracted from the input image using a backbone network, and the extracted features can be reused by various tasks. Research on serving various tasks with these learned features has been active, and standardization discussions about encoding, decoding, and transmission methods are proceeding in parallel. In this scenario, it is necessary to analyze the response characteristics of features against the various distortions that may occur in the data transmission or data compression process. In this paper, experiments were conducted in which various distortions were injected into the features of an object recognition task, and the mAP (mean Average Precision) between the values predicted by the neural network and the target values was analyzed as the intensity of the distortions increased. The experiments showed that features are more robust to distortion than images, which indicates that transmitting features can reduce the loss of information under the various distortions that arise during data transmission and compression.
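
The evaluation protocol, perturbing an intermediate feature map at increasing intensity and re-running the rest of the network, can be sketched generically as below. The additive Gaussian distortion and the sigma schedule are assumptions; the paper studies several distortion types, and `head` and `metric` here are hypothetical stand-ins for the task head and the mAP computation.

```python
# Hedged sketch of a feature-distortion robustness sweep.
import torch

def inject_noise(feature, sigma):
    """Additive Gaussian distortion scaled to the feature's own magnitude."""
    return feature + sigma * feature.std() * torch.randn_like(feature)

def sweep_distortion(backbone_out, head, targets, metric,
                     sigmas=(0.0, 0.1, 0.2, 0.5)):
    """Evaluate the task head on increasingly distorted features."""
    scores = {}
    with torch.no_grad():
        for sigma in sigmas:
            preds = head(inject_noise(backbone_out, sigma))
            scores[sigma] = metric(preds, targets)   # e.g., mAP per intensity
    return scores
```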