• Title/Summary/Keyword: Discriminative feature


Feature Vector Processing for Speech Emotion Recognition in Noisy Environments (잡음 환경에서의 음성 감정 인식을 위한 특징 벡터 처리)

  • Park, Jeong-Sik;Oh, Yung-Hwan
    • Phonetics and Speech Sciences
    • /
    • v.2 no.1
    • /
    • pp.77-85
    • /
    • 2010
  • This paper proposes an efficient feature vector processing technique to guard the Speech Emotion Recognition (SER) system against a variety of noises. In the proposed approach, emotional feature vectors are extracted from speech processed by comb filtering, and a robust model is then constructed based on feature vector classification. We modify conventional comb filtering using the speech presence probability to minimize the drawbacks of incorrect pitch estimation under background noise. The modified comb filtering correctly enhances the harmonics, an important cue for SER. The feature vector classification technique categorizes feature vectors as either discriminative or non-discriminative based on a log-likelihood criterion, selecting the discriminative vectors while preserving the correct emotional characteristics. Robust emotion models can thus be constructed from the discriminative vectors alone. In SER experiments on an emotional speech corpus contaminated by various noises, our approach outperformed the baseline system.

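The log-likelihood criterion described in the abstract can be sketched as follows; the diagonal-Gaussian class models, the fixed margin, and the function names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gaussian_loglik(x, mean, var):
    """Log-likelihood of x under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def select_discriminative(frames, class_params, margin=1.0):
    """Keep frames whose best-vs-second-best log-likelihood gap
    exceeds `margin` (a stand-in for the paper's criterion)."""
    kept = []
    for x in frames:
        ll = sorted(gaussian_loglik(x, m, v) for m, v in class_params)
        if ll[-1] - ll[-2] > margin:
            kept.append(x)
    return kept
```

A frame lying midway between two emotion models has a near-zero likelihood gap and is treated as non-discriminative.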

Discriminative Manifold Learning Network using Adversarial Examples for Image Classification

  • Zhang, Yuan;Shi, Biming
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.5
    • /
    • pp.2099-2106
    • /
    • 2018
  • This study presents a novel approach to discriminative feature vectors based on manifold learning, using a nonlinear dimension reduction (DR) technique to improve the loss function, combined with adversarial examples that regularize the objective function for image classification. Traditional convolutional neural networks (CNNs) with various new regularization approaches have been used successfully for image classification and achieve good results, but at a high cost in computation time and memory. Distinct from a traditional CNN, we discriminate the feature vectors for objects without empirically tuned parameters: these discriminative features are intended to retain, in the lower-dimensional space, the relationships of the corresponding high-dimensional manifold after the image feature vectors are projected from high to low dimension, and we optimize constraints that preserve local manifold structure, pulling mapped features of the same class together and pushing different classes apart. Using adversarial examples, the improved loss function with an additional regularization term is intended to boost the robustness and generalization of the neural network. Experimental results indicate that the approach based on discriminative features from manifold learning is not only valid but also more efficient for image classification tasks. Furthermore, the proposed approach achieves competitive classification performance on three benchmark datasets: MNIST, CIFAR-10, and SVHN.
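One common way to generate adversarial examples for such regularization is the fast-gradient-sign method, shown here for a linear logistic model where the input gradient is analytic; the `fgsm_example` name and the linear model are assumptions for illustration, not the paper's network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(x, y, w, eps=0.1):
    """Fast-gradient-sign perturbation of input x for a linear
    logistic model p(y=1|x) = sigmoid(w @ x).  The gradient of the
    cross-entropy loss w.r.t. x is (p - y) * w."""
    grad = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad)
```

The perturbed inputs are then fed back into training so the extra loss term penalizes sensitivity to small worst-case input shifts.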

Reinforced Feature of Dynamic Search Area for the Discriminative Model Prediction Tracker based on Multi-domain Dataset (다중 도메인 데이터 기반 구별적 모델 예측 트레커를 위한 동적 탐색 영역 특징 강화 기법)

  • Lee, Jun Ha;Won, Hong-In;Kim, Byeong Hak
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.16 no.6
    • /
    • pp.323-330
    • /
    • 2021
  • Visual object tracking is a challenging area of study in computer vision due to many difficult problems, including fast variation of the target shape, occlusion, and arbitrary ground-truth object designation. In this paper, we focus on reinforcing features of the dynamic search area to outperform conventional discriminative model prediction trackers in conditions where accuracy deteriorates due to low feature discrimination. We propose a reinforced input feature method that acts like a spotlight effect on the dynamic search area of the target tracker. The method can improve the performance of deep-learning-based discriminative model prediction trackers, as well as other trackers that infer the center of the target in visual object tracking. The proposed method improves tracking performance over the baseline trackers, achieving a relative gain of 38%, from 0.433 to 0.601 F-score, in the visual object tracking evaluation.
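A minimal sketch of a spotlight-style re-weighting of the search-area feature map; the Gaussian mask, function names, and default `sigma` are assumptions rather than the paper's exact design:

```python
import numpy as np

def spotlight_mask(h, w, cy, cx, sigma):
    """2-D Gaussian weighting centred on the previous target
    position (cy, cx); values in (0, 1], peaking at 1 at the centre."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def reinforce(feature_map, cy, cx, sigma=4.0):
    """Re-weight a (C, H, W) search-area feature map so responses
    near the expected target location are emphasised."""
    c, h, w = feature_map.shape
    return feature_map * spotlight_mask(h, w, cy, cx, sigma)
```

Because the mask leaves the centre untouched and attenuates the periphery, distractors far from the last known target position contribute less to the model prediction.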

Construction of Composite Feature Vector Based on Discriminant Analysis for Face Recognition (얼굴인식을 위한 판별분석에 기반한 복합특징 벡터 구성 방법)

  • Choi, Sang-Il
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.7
    • /
    • pp.834-842
    • /
    • 2015
  • We propose a method to construct a composite feature vector based on discriminant analysis for face recognition. We first extract holistic and local features, using a discriminant feature extraction method, from whole face images and from local images consisting of the discriminant pixels. To exploit the advantages of both holistic and local features, we evaluate the amount of discriminative information in each feature and construct a composite feature vector from only those features that carry a large amount of discriminative information. Experimental results on the FERET, CMU-PIE, and Yale B databases show that the proposed composite feature vector improves face recognition performance.
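The evaluate-then-concatenate step can be illustrated with a Fisher-ratio measure of discriminative information; the ratio, the `keep` parameter, and the function names are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def fisher_ratio(feature, labels):
    """Between-class over within-class variance of one feature."""
    classes = np.unique(labels)
    overall = feature.mean()
    between = sum((feature[labels == c].mean() - overall) ** 2
                  for c in classes)
    within = sum(feature[labels == c].var() for c in classes)
    return between / (within + 1e-12)

def composite_vector(holistic, local, labels, keep=2):
    """Concatenate the `keep` most discriminative columns from each
    of the holistic and local feature blocks."""
    parts = []
    for block in (holistic, local):
        scores = [fisher_ratio(block[:, j], labels)
                  for j in range(block.shape[1])]
        best = np.argsort(scores)[::-1][:keep]
        parts.append(block[:, np.sort(best)])
    return np.hstack(parts)
```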

Minimum Classification Error Training to Improve Discriminability of PCMM-Based Feature Compensation (PCMM 기반 특징 보상 기법에서 변별력 향상을 위한 Minimum Classification Error 훈련의 적용)

  • Kim, Wooil;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.1
    • /
    • pp.58-68
    • /
    • 2005
  • In this paper, we propose a scheme to improve the discriminative property of feature compensation methods for robust speech recognition under noisy environments. The noisy speech model estimation used in existing feature compensation methods does not guarantee posterior probabilities that discriminate reliably among the Gaussian components. Estimating the posterior probabilities is a crucial step in determining the discriminative factor of the Gaussian models, which in turn determines the intelligibility of the restored speech signals. The proposed scheme employs minimum classification error (MCE) training to estimate the parameters of the noisy speech model. To apply MCE training, we propose to identify the 'competing components' that are expected to affect the discriminative ability. The proposed method is applied to feature compensation based on the parallel combined mixture model (PCMM). Performance is examined on the Aurora 2.0 database and on speech recorded inside a car under real driving conditions. The experimental results show improved recognition performance in both simulated environments and real-life conditions, verifying the effectiveness of the proposed scheme for robust speech recognition systems.
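A minimal sketch of the smoothed misclassification measure at the heart of MCE training, using only the strongest competing class; the single-competitor form and the `gamma` default are simplifying assumptions:

```python
import numpy as np

def mce_loss(scores, correct, gamma=1.0):
    """Sigmoid-smoothed misclassification measure: the gap between
    the strongest competing class score and the correct class score,
    squashed into (0, 1).  Small when the correct class wins big."""
    competitors = np.delete(scores, correct)
    d = competitors.max() - scores[correct]
    return 1.0 / (1.0 + np.exp(-gamma * d))
```

Because the sigmoid is differentiable, this loss can be minimized with gradient methods over the model parameters, which is what makes MCE training practical.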

Discriminative Feature Vector Selection for Emotion Classification Based on Speech (음성신호기반의 감정분석을 위한 특징벡터 선택)

  • Choi, Ha-Na;Byun, Sung-Woo;Lee, Seok-Pil
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.9
    • /
    • pp.1363-1368
    • /
    • 2015
  • Recently, computers have become smaller owing to advances in computing technology, and many wearable devices have appeared. As a result, a computer's ability to recognize human emotion has become important, and research on analyzing emotional states is increasing. The human voice carries much information about emotion. This paper proposes discriminative feature vector selection for emotion classification based on speech. We extract feature vectors such as pitch, MFCC, LPC, and LPCC from voice signals divided into four emotion classes (happy, normal, sad, angry) and compare the separability of the extracted feature vectors using the Bhattacharyya distance. The more effective feature vectors are then recommended for emotion classification.
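The Bhattacharyya comparison can be sketched for univariate Gaussian-modeled features, assuming each feature per emotion class is summarized by a mean and variance:

```python
import numpy as np

def bhattacharyya_gauss(m1, v1, m2, v2):
    """Bhattacharyya distance between two univariate Gaussians
    N(m1, v1) and N(m2, v2), where v denotes variance.  Larger
    distance means the two class distributions are more separable."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))
```

Ranking features by the pairwise distances between their per-emotion distributions is one way to pick out those with the best separability.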

Combined Features with Global and Local Features for Gas Classification

  • Choi, Sang-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.9
    • /
    • pp.11-18
    • /
    • 2016
  • In this paper, we propose a gas classification method using combined features for an electronic nose system that performs well even when some loss occurs in the measured data samples. We first divide the entire measurement of a data sample into three local sections, stabilization, exposure, and purge, and extract local features from each section. Using discriminant analysis, the amount of discriminative information in each local feature is measured. The local features carrying a large amount of discriminative information are then combined with the global features extracted from the entire measurement section of the data sample. Experimental results show that the combined features produced by the proposed method give better classification performance than the other feature types for a variety of volatile organic compound data, especially when there is data loss.
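The section split and feature combination might look like the following sketch; the section boundaries and the particular statistics are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def combined_features(sample, bounds=(100, 400)):
    """Global statistics over the whole measurement plus local
    statistics for the stabilization / exposure / purge sections
    (the boundaries here are illustrative, not the paper's)."""
    a, b = bounds
    sections = [sample[:a], sample[a:b], sample[b:]]
    local = [f(s) for s in sections for f in (np.mean, np.ptp)]
    global_ = [sample.mean(), sample.max(), sample.std()]
    return np.array(global_ + local)
```

If a section is corrupted by measurement loss, the remaining local features and the global features still carry usable information, which is the motivation for combining them.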

Discriminative and Non-User Specific Binary Biometric Representation via Linearly-Separable SubCode Encoding-based Discretization

  • Lim, Meng-Hui;Teoh, Andrew Beng Jin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.2
    • /
    • pp.374-388
    • /
    • 2011
  • Biometric discretization is the process of transforming the continuous biometric features of an identity into a binary bit string. This paper focuses on improving the global discretization method, a discretization method that does not rely on user-specific information during bitstring extraction, which is important in applications that prioritize strong security and strong privacy protection. In particular, we demonstrate how the performance of global discretization can be further improved by embedding a global discriminative feature selection method and a Linearly Separable SubCode-based encoding technique. In addition, we examine a number of discriminative feature selection measures that can reliably be used for such discretization. Lastly, encouraging empirical results vindicate the feasibility of our approach.
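A Linearly Separable SubCode maps quantization index i of n segments to i ones followed by zeros, so Hamming distance in the binary domain mirrors the index distance; a minimal sketch (function names assumed):

```python
def lssc(index, segments):
    """Linearly Separable SubCode for a quantization index: `index`
    ones followed by zeros, total length segments - 1.  The Hamming
    distance between any two codewords equals the difference of
    their indices, preserving segment ordering in binary space."""
    return [1] * index + [0] * (segments - 1 - index)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))
```

This distance-preserving property is what lets the discretized bitstrings keep the discriminability of the underlying continuous features.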

Evaluation of HOG-Family Features for Human Detection using PCA-SVM (PCA-SVM을 이용한 Human Detection을 위한 HOG-Family 특징 비교)

  • Setiawan, Nurul Arif;Lee, Chil-Woo
Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집)
    • /
    • 2008.02a
    • /
    • pp.504-509
    • /
    • 2008
  • The Support Vector Machine (SVM) is a powerful learning machine that has been applied to a variety of tasks with generally acceptable performance. The success of SVM classification in a given domain depends on the features that represent instances of each class. Given representative and discriminative features, SVM learning generalizes well and consequently yields a good classifier. In this paper, we assess the problem of feature choice for human detection tasks and measure the performance of each feature, considering the HOG family of features. As a natural extension of SVM, we combine SVM with Principal Component Analysis (PCA) to reduce the dimension of the features while retaining most of the discriminative information.

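The PCA stage in front of the SVM can be sketched in plain NumPy; this is a generic PCA projection, not the authors' exact pipeline, and the SVM stage is omitted:

```python
import numpy as np

def pca_project(X, n_components):
    """Project the rows of X onto the top principal components,
    a stand-in for the PCA stage in front of the SVM classifier."""
    Xc = X - X.mean(axis=0)
    # right singular vectors of the centred data are the
    # eigenvectors of the sample covariance matrix
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T
```

After projection, the reduced-dimension vectors would be fed to the SVM for training and detection, cutting the cost of both steps while keeping most of the variance.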

An Extended Generative Feature Learning Algorithm for Image Recognition

  • Wang, Bin;Li, Chuanjiang;Zhang, Qian;Huang, Jifeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.8
    • /
    • pp.3984-4005
    • /
    • 2017
  • Image recognition has become an increasingly important topic because of its wide application, and it is highly challenging when facing a large-scale database with large variance. Recognition systems rely on a key component: the low-level feature or a learned mid-level feature. Recognition performance can potentially be improved if the data distribution information is exploited in a more sophisticated way, usually through a function over the hidden variables, model parameters, and observed data; such methods are called generative score space methods. In this paper, we propose a discriminative extension of existing generative score space methods that exploits the class label when deriving score functions for the image recognition task. Specifically, we first extend the regular generative models to class-conditional models over both the observed variable and the class label. Then, we derive the mid-level feature mapping from the extended models. Finally, the derived feature mapping is embedded into a discriminative classifier for image recognition. The advantages of our proposed approach are twofold. First, the resulting methods take simple and intuitive forms, weighted versions of existing methods, benefiting from Bayesian inference of the class label. Second, the probabilistic generative modeling allows us to exploit hidden information and adapts well to the data distribution. To validate the effectiveness of the proposed method, we combine our discriminative extension with three generative models for the image recognition task. The experimental results validate the effectiveness of our proposed approach.
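The idea of a generative score space can be illustrated with the simplest case, the Fisher score of a univariate Gaussian: each observation is mapped to the gradient of its log-likelihood with respect to the model parameters. This toy model is an assumption for illustration only:

```python
import numpy as np

def fisher_score_map(x, mu, sigma):
    """Map observation x to the gradient of log N(x; mu, sigma^2)
    with respect to the parameters (mu, sigma): the simplest
    generative score space.  d/dmu = (x - mu)/sigma^2 and
    d/dsigma = (x - mu)^2/sigma^3 - 1/sigma."""
    d_mu = (x - mu) / sigma ** 2
    d_sigma = (x - mu) ** 2 / sigma ** 3 - 1.0 / sigma
    return np.array([d_mu, d_sigma])
```

The resulting score vectors can then be handed to any discriminative classifier, which is the pattern the paper's class-conditional extension builds on.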