• Title/Summary/Keyword: Feature Vector Fusion


Language Identification by Fusion of Gabor, MDLC, and Co-Occurrence Features (Gabor, MDLC, Co-Occurrence 특징의 융합에 의한 언어 인식)

  • Jang, Ick-Hoon;Kim, Ji-Hong
    • Journal of Korea Multimedia Society, v.17 no.3, pp.277-286, 2014
  • In this paper, we propose a texture-feature-based language identification method that fuses Gabor, MDLC (multi-lag directional local correlation), and co-occurrence features. In the proposed method, Gabor magnitude images of a test image are first obtained by the Gabor transform followed by a magnitude operator, and moments of the Gabor magnitude images are computed and vectorized. MDLC images are then obtained by the MDLC operator, and their moments are likewise computed and vectorized. Next, the GLCM (gray-level co-occurrence matrix) is calculated from the test image, co-occurrence features are computed from the GLCM, and these features are also vectorized. The three vectors of Gabor, MDLC, and co-occurrence features are fused into a single feature vector. In classification, the WPCA (whitened principal component analysis) classifier, commonly adopted in face identification, searches for the training feature vector most similar to the test feature vector. We evaluate the performance of our method by examining averaged identification rates on a test document-image DB obtained by scanning documents in 15 languages. Experimental results show that the proposed method yields excellent language identification with a rather low feature dimension on the test DB.
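
A minimal sketch of this style of feature-vector fusion, assuming scikit-image and scikit-learn: Gabor magnitude moments and GLCM co-occurrence features are concatenated and matched by nearest neighbor in a whitened-PCA space. The MDLC operator is paper-specific and omitted here; all data, dimensions, and parameters are illustrative stand-ins, not the paper's settings.

```python
import numpy as np
from skimage.filters import gabor
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA

def gabor_moments(img, freqs=(0.1, 0.2), thetas=(0, np.pi / 4, np.pi / 2)):
    imgf = img.astype(float)
    feats = []
    for f in freqs:
        for t in thetas:
            real, imag = gabor(imgf, frequency=f, theta=t)
            mag = np.hypot(real, imag)        # Gabor magnitude image
            feats += [mag.mean(), mag.std()]  # first/second moments
    return feats

def cooccurrence_feats(img):
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")]

def fused_vector(img):
    # Fusion by simple concatenation (the paper's MDLC term is omitted here)
    return np.array(gabor_moments(img) + cooccurrence_feats(img))

rng = np.random.default_rng(0)
train = rng.integers(0, 256, (20, 64, 64), dtype=np.uint8)  # stand-in image DB
labels = np.arange(20) % 5                                  # stand-in languages
X = np.stack([fused_vector(im) for im in train])
wpca = PCA(n_components=10, whiten=True).fit(X)
Xw = wpca.transform(X)

def identify(img):
    q = wpca.transform(fused_vector(img)[None])
    sims = (Xw @ q.T).ravel() / (np.linalg.norm(Xw, axis=1) * np.linalg.norm(q) + 1e-9)
    return labels[np.argmax(sims)]  # label of the most similar training vector

print(identify(train[3]))
```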

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems, v.17 no.2, pp.337-351, 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. Firstly, the video is preprocessed. Then, a double-layer cascade structure is used to detect faces in the video images. In addition, two deep convolutional neural networks are used to extract the temporal and spatial facial features from the video. The spatial convolutional neural network extracts spatial information features from each static expression frame, while the temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple expression frames. The spatiotemporal features learned by the two deep convolutional neural networks are fused by multiplication. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method reach 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
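
A toy sketch of the two-stream idea, assuming PyTorch and scikit-learn: one small CNN embeds static frames, another embeds stacked optical-flow fields, the embeddings are fused by elementwise multiplication, and an SVM classifies the result. Network shapes, data, and labels are placeholders, not the paper's architecture.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class SmallCNN(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 64))
    def forward(self, x):
        return self.net(x)

spatial = SmallCNN(in_ch=3)   # static RGB expression frames
temporal = SmallCNN(in_ch=2)  # stacked optical-flow (dx, dy) fields

frames = torch.randn(8, 3, 64, 64)  # stand-in video frames
flow = torch.randn(8, 2, 64, 64)    # stand-in flow fields
with torch.no_grad():
    fused = spatial(frames) * temporal(flow)  # multiplicative feature fusion

y = np.array([0, 1, 2, 0, 1, 2, 0, 1])       # stand-in expression labels
clf = SVC(kernel="linear").fit(fused.numpy(), y)
print(clf.predict(fused.numpy()[:2]))
```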

Vocal Effort Detection Based on Spectral Information Entropy Feature and Model Fusion

  • Chao, Hao;Lu, Bao-Yun;Liu, Yong-Li;Zhi, Hui-Lai
    • Journal of Information Processing Systems, v.14 no.1, pp.218-227, 2018
  • Vocal effort detection is important for both robust speech recognition and speaker recognition. In this paper, the spectral information entropy feature, which contains more salient information regarding the vocal effort level, is first proposed. Then, a model fusion method based on complementary models is presented to recognize the vocal effort level. Experiments are conducted on an isolated-word test set, and the results show that spectral information entropy performs best among the three kinds of features considered. Meanwhile, the recognition accuracy over all vocal effort levels reaches 81.6%. Thus, the potential of the proposed method is demonstrated.
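
A minimal sketch of a spectral information entropy feature, assuming only NumPy: per frame, the power spectrum is normalized into a probability distribution and its Shannon entropy is taken. Frame length, hop, and FFT size are illustrative choices, not the paper's.

```python
import numpy as np

def spectral_entropy(signal, frame_len=400, hop=160, n_fft=512):
    ents = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hamming(frame_len)
        psd = np.abs(np.fft.rfft(frame, n_fft)) ** 2
        p = psd / (psd.sum() + 1e-12)            # spectrum as a distribution
        ents.append(-np.sum(p * np.log2(p + 1e-12)))
    return np.array(ents)

x = np.random.randn(16000)         # stand-in: one second of audio at 16 kHz
print(spectral_entropy(x)[:5])     # per-frame entropy features
```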

Feature Extraction Based on DBN-SVM for Tone Recognition

  • Chao, Hao;Song, Cheng;Lu, Bao-Yun;Liu, Yong-Li
    • Journal of Information Processing Systems, v.15 no.1, pp.91-99, 2019
  • An innovative tone-modeling framework based on deep neural networks for tone recognition is proposed in this paper. In the framework, both prosodic features and articulatory features are first extracted as the raw input data. Then, a five-layer deep belief network is used to obtain high-level tone features. Finally, a support vector machine is trained to recognize tones. The 863 corpus was used in the experiments, and the results show that the proposed method significantly improves the recognition accuracy for all tone patterns. Meanwhile, the average tone recognition rate reached 83.03%, which is 8.61% higher than that of the original method.
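
A rough sketch of a DBN-SVM pipeline, approximating the deep belief network with stacked BernoulliRBM layers from scikit-learn; layer sizes, data, and labels are guesses for illustration, not the paper's five-layer configuration.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import BernoulliRBM
from sklearn.svm import SVC

X = np.random.rand(100, 30)        # stand-in prosodic + articulatory features
y = np.random.randint(0, 4, 100)   # stand-in labels for four Mandarin tones

dbn_svm = Pipeline([
    ("scale", MinMaxScaler()),     # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, n_iter=10, random_state=0)),
    ("svm", SVC(kernel="rbf")),    # SVM on the learned high-level features
])
dbn_svm.fit(X, y)
print(dbn_svm.score(X, y))
```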

Feature information fusion using multiple neural networks and target identification application of FLIR image (다중 신경회로망을 이용한 특징정보 융합과 적외선영상에서의 표적식별에의 응용)

  • 선선구;박현욱
    • Journal of the Institute of Electronics Engineers of Korea SP, v.40 no.4, pp.266-274, 2003
  • Distance Fourier descriptors of local target boundaries and feature information fusion using multiple MLPs (multilayer perceptrons) are proposed. They are used to identify non-occluded and partially occluded targets in natural FLIR (forward-looking infrared) images. After segmenting a target, radial Fourier descriptors are defined from the target boundary as global shape features. The target boundary is then partitioned into four local boundaries to extract local shape features. For each local boundary, a distance function is defined from the boundary points and the line between the two extreme points, and distance Fourier descriptors are defined from this distance function as local shape features. The one global feature vector and four local feature vectors are used as input data for multiple MLPs, which determine the final identification result for the target. In the experiments, we show that the proposed method is superior to traditional feature sets in identification performance.
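
A sketch of distance-based Fourier descriptors for one local boundary segment, assuming only NumPy: signed distances from the boundary points to the chord between the segment's endpoints, then low-order FFT magnitudes normalized for invariance. Descriptor count and the toy boundary are illustrative.

```python
import numpy as np

def distance_fourier_descriptors(boundary, n_desc=8):
    # boundary: (N, 2) array of points along one local boundary segment
    p0, p1 = boundary[0], boundary[-1]
    chord = p1 - p0
    normal = np.array([-chord[1], chord[0]]) / (np.linalg.norm(chord) + 1e-9)
    dist = (boundary - p0) @ normal                # signed distance to chord
    spec = np.abs(np.fft.fft(dist))
    return spec[1:n_desc + 1] / (spec[0] + 1e-9)   # normalized magnitudes

t = np.linspace(0, np.pi, 50)
segment = np.stack([np.cos(t), np.sin(t)], axis=1)  # toy local boundary
print(distance_fourier_descriptors(segment))
```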

Efficient Recognition Method for Ballistic Warheads by the Fusion of Feature Vectors Based on Flight Phase (비행 단계별 특성벡터 융합을 통한 효과적인 탄두 식별방법)

  • Choi, In-Oh;Kim, Si-Ho;Jung, Joo-Ho;Kim, Kyung-Tae;Park, Sang-Hong
    • The Journal of Korean Institute of Electromagnetic Engineering and Science, v.30 no.6, pp.487-497, 2019
  • It is very difficult to detect ballistic missiles because of their small radar cross-sections and high maneuverability. In addition, it is very difficult to recognize and intercept warheads because debris and decoys with similar motion parameters exist in each flight phase. Therefore, feature vectors based on the maneuver and the micro-motion in each flight phase are needed, and the two types of features must be fused for efficient recognition of the ballistic warhead regardless of the flight phase. In this paper, we introduce feature vectors appropriate for each flight phase and an effective method to fuse them at the feature-vector level and at the classifier level. According to classification simulations using radar signals predicted from CAD models, the closer the warhead was to its final destination, the more the classification performance improved. This was achieved by classifier-level fusion, regardless of the flight phase, even in a noisy environment.
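
A hedged sketch of the two fusion levels named above, assuming scikit-learn: feature-vector-level fusion concatenates the maneuver and micro-motion vectors before training one classifier, while classifier-level fusion averages the per-class posteriors of two separately trained classifiers. Data, dimensions, and the logistic-regression choice are stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_man = rng.normal(size=(60, 5))   # stand-in maneuver feature vectors
X_mm = rng.normal(size=(60, 7))    # stand-in micro-motion feature vectors
y = rng.integers(0, 2, 60)         # stand-in warhead/decoy labels

# Feature-vector-level fusion: concatenate, then train one classifier
feat_clf = LogisticRegression().fit(np.hstack([X_man, X_mm]), y)

# Classifier-level fusion: train per-feature classifiers, average posteriors
clf_man = LogisticRegression().fit(X_man, y)
clf_mm = LogisticRegression().fit(X_mm, y)
post = (clf_man.predict_proba(X_man) + clf_mm.predict_proba(X_mm)) / 2
print("classifier-level predictions:", post.argmax(axis=1)[:10])
```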

A Novel Multifocus Image Fusion Algorithm Based on Nonsubsampled Contourlet Transform

  • Liu, Cuiyin;Cheng, Peng;Chen, Shu-Qing;Wang, Cuiwei;Xiang, Fenghong
    • KSII Transactions on Internet and Information Systems (TIIS), v.7 no.3, pp.539-557, 2013
  • A novel multifocus image fusion algorithm based on the NSCT is proposed in this paper. In order to retain the image focusing properties and more visual information in the fused image while remaining sensitive to human visual perception, a local multidirection variance (LEOV) fusion rule is proposed for the lowpass subband coefficients. To introduce more visual saliency, a modified local contrast is defined. In addition, according to the distribution of the highpass subband coefficients, a direction vector is proposed to constrain the modified local contrast and construct a new fusion rule for highpass subband coefficient selection. The NSCT is a flexible multiscale, multidirection, and shift-invariant tool for image decomposition, which can be implemented via the à trous algorithm. The proposed NSCT-based fusion algorithm not only prevents artifacts and errors from being introduced into the fused image, but also eliminates the 'block effect' and 'frequency aliasing' phenomena. Experimental results show that the proposed method achieves better fusion results than wavelet-based and contourlet-based fusion methods in contrast and clarity.
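
NSCT code is not available in standard Python libraries, so this sketch illustrates the same style of fusion rules on a shift-invariant stationary wavelet transform (PyWavelets) as a stand-in: lowpass bands fused by local variance as a focus measure, highpass bands by maximum magnitude. The wavelet, window size, and images are illustrative, and the paper's LEOV and direction-vector rules are simplified.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_variance(x, size=5):
    m = uniform_filter(x, size)
    return uniform_filter(x * x, size) - m * m

def fuse_multifocus(a, b, wavelet="db2"):
    (ca, da), = pywt.swt2(a, wavelet, level=1)
    (cb, db), = pywt.swt2(b, wavelet, level=1)
    mask = local_variance(ca) >= local_variance(cb)  # variance as focus measure
    low = np.where(mask, ca, cb)                     # lowpass: pick sharper
    high = tuple(np.where(np.abs(ha) >= np.abs(hb), ha, hb)  # highpass: max
                 for ha, hb in zip(da, db))
    return pywt.iswt2([(low, high)], wavelet)

img_a = np.random.rand(64, 64)  # stand-ins for two multifocus source images
img_b = np.random.rand(64, 64)
print(fuse_multifocus(img_a, img_b).shape)
```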

Using Keystroke Dynamics for Implicit Authentication on Smartphone

  • Do, Son;Hoang, Thang;Luong, Chuyen;Choi, Seungchan;Lee, Dokyeong;Bang, Kihyun;Choi, Deokjai
    • Journal of Korea Multimedia Society, v.17 no.8, pp.968-976, 2014
  • Authentication methods on smartphones should be implicit to users, requiring minimal user interaction. Existing authentication methods (e.g., PINs, passwords, and visual patterns) do not effectively address memorability and privacy issues. Behavioral biometrics such as keystroke dynamics and gait can be acquired easily and implicitly through the sensors integrated into a smartphone. We propose a biometric model involving keystroke dynamics for implicit authentication on smartphones. We first design a feature extraction method for keystroke dynamics. We then build a fusion model of keystroke dynamics and gait to improve on the authentication performance of a single behavioral biometric. We perform the fusion at both the feature-extraction level and the matching-score level. Experiments using a linear Support Vector Machine (SVM) classifier reveal that the best results are achieved with score fusion: a recognition rate of approximately 97.86% in identification mode and an error rate of approximately 1.11% in authentication mode.
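
A minimal sketch of matching-score-level fusion for two behavioral biometrics, assuming scikit-learn: linear SVMs are trained per modality and their per-user probability scores are averaged. Feature dimensions, user counts, and data are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_key = rng.normal(size=(80, 12))   # stand-in keystroke-dynamics vectors
X_gait = rng.normal(size=(80, 20))  # stand-in gait feature vectors
y = rng.integers(0, 4, 80)          # stand-in user identities

svm_key = SVC(kernel="linear", probability=True).fit(X_key, y)
svm_gait = SVC(kernel="linear", probability=True).fit(X_gait, y)

# Matching-score-level fusion: average the per-user probability scores
scores = (svm_key.predict_proba(X_key) + svm_gait.predict_proba(X_gait)) / 2
print("identified users:", scores.argmax(axis=1)[:10])
```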

Multiple Properties-Based Moving Object Detection Algorithm

  • Zhou, Changjian;Xing, Jinge;Liu, Haibo
    • Journal of Information Processing Systems, v.17 no.1, pp.124-135, 2021
  • Object detection is a fundamental yet challenging task in computer vision that plays an important role in object recognition, tracking, and scene analysis and understanding. This paper proposes a multi-property fusion algorithm for moving object detection. First, we build a scale-invariant feature transform (SIFT) vector field and analyze its vectors to divide them into different classes. Second, the distance of each class is calculated by dispersion analysis. Next, the target and its contour are extracted; after image segmentation, inversion, and morphological processing, the moving objects are detected. The experimental results show good stability, accuracy, and efficiency.
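
A loose sketch of the SIFT-vector-field idea, assuming OpenCV (>= 4.4) and scikit-learn: SIFT keypoints are matched between consecutive frames, the displacements are treated as a vector field, and k-means clustering stands in for the paper's vector classification and dispersion analysis. The smoothed-noise frames and cluster count are illustrative.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_vector_field(frame1, frame2):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(frame1, None)
    k2, d2 = sift.detectAndCompute(frame2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    pts = np.array([k1[m.queryIdx].pt for m in matches])
    vecs = np.array([np.subtract(k2[m.trainIdx].pt, k1[m.queryIdx].pt)
                     for m in matches])
    return pts, vecs  # keypoint positions and their displacement vectors

# Smoothed noise as stand-in frames; the second frame is a shifted copy
f1 = cv2.GaussianBlur(np.random.randint(0, 256, (128, 128), np.uint8), (0, 0), 3)
f2 = np.roll(f1, 3, axis=1)
pts, vecs = sift_vector_field(f1, f2)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vecs)  # vector classes
print(len(pts), "matches,", np.bincount(labels))
```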

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems, v.17 no.3, pp.556-570, 2021
  • Existing video expression recognition methods mainly focus on the spatial feature extraction of video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolution neural network method is proposed to effectively improve the performance of facial expression recognition in video. Firstly, OpenFace 2.0 is used to detect face images in the video, and two deep convolution neural networks are used to extract spatiotemporal expression features. The spatial convolution neural network extracts the spatial information features of each static expression image, while the temporal convolution neural network extracts the dynamic information features from the optical flow of multiple expression images. Then, the spatiotemporal features learned by the two networks are fused by multiplication. Finally, the fused features are input into a support vector machine to perform the facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, which is better than that of other compared methods.
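
The temporal stream of such a network consumes optical flow rather than raw frames. A minimal sketch, assuming OpenCV, of turning consecutive grayscale frames into the stacked (dx, dy) flow tensors a temporal CNN would take as input; the Farneback parameters shown are common defaults, not the paper's settings.

```python
import cv2
import numpy as np

def flow_stack(frames):
    # frames: list of HxW uint8 grayscale expression images
    flows = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow.transpose(2, 0, 1))  # (2, H, W): dx, dy channels
    return np.stack(flows)                     # (T-1, 2, H, W) network input

frames = [np.random.randint(0, 256, (64, 64), np.uint8) for _ in range(5)]
print(flow_stack(frames).shape)                # stand-in frames -> (4, 2, 64, 64)
```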