• Title/Summary/Keyword: Kernel Discriminant


Speaker Identification Using an Ensemble of Feature Enhancement Methods (특징 강화 방법의 앙상블을 이용한 화자 식별)

  • Yang, Il-Ho;Kim, Min-Seok;So, Byung-Min;Kim, Myung-Jae;Yu, Ha-Jin
    • Phonetics and Speech Sciences / v.3 no.2 / pp.71-78 / 2011
  • In this paper, we propose an approach which constructs classifier ensembles from various channel compensation and feature enhancement methods. CMN and CMVN are used as channel compensation methods. PCA, kernel PCA, greedy kernel PCA, and kernel multimodal discriminant analysis are used as feature enhancement methods. The proposed ensemble combines 15 classifiers, one for each pairing of three channel compensation methods (including 'without compensation') with five feature enhancement methods (including 'without enhancement'). Experimental results show that the proposed ensemble system gives the highest average speaker identification rate in various environments (channels, noises, and sessions).
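The ensemble above is built by pairing every channel-compensation option with every feature-enhancement option and letting the resulting classifiers vote. A minimal sketch of that construction pattern, assuming scikit-learn, integer speaker labels, and a reduced 2x3 grid; the CMVN stand-in, the SVC back-end classifier, and the dimensionalities are illustrative choices, not the authors' exact configuration:

```python
import numpy as np
from sklearn.base import clone
from sklearn.decomposition import PCA, KernelPCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.svm import SVC

# Channel compensation options ('none' plus a CMVN-like normalization stand-in).
compensations = {
    "none": FunctionTransformer(),   # identity pass-through
    "cmvn": StandardScaler(),        # per-dimension mean/variance normalization
}
# Feature enhancement options (greedy kernel PCA and KMDA are omitted here).
enhancements = {
    "none": FunctionTransformer(),
    "pca": PCA(n_components=20),
    "kpca": KernelPCA(n_components=20, kernel="rbf"),
}

def build_ensemble(X_train, y_train):
    """Fit one classifier per (compensation, enhancement) pairing."""
    members = []
    for c_name, comp in compensations.items():
        for e_name, enh in enhancements.items():
            clf = Pipeline([("comp", clone(comp)),
                            ("enh", clone(enh)),
                            ("clf", SVC())])
            clf.fit(X_train, y_train)
            members.append(((c_name, e_name), clf))
    return members

def predict_majority(members, X):
    """Majority vote over the members' predicted (integer) speaker labels."""
    votes = np.stack([clf.predict(X) for _, clf in members])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```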


Nonlinear Feature Extraction using Class-augmented Kernel PCA (클래스가 부가된 커널 주성분분석을 이용한 비선형 특징추출)

  • Park, Myoung-Soo;Oh, Sang-Rok
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.5 / pp.7-12 / 2011
  • In this paper, we propose a new feature extraction method, named Class-augmented Kernel Principal Component Analysis (CA-KPCA), which can extract nonlinear features for classification. Among the subspace methods widely used for feature extraction, Class-augmented Principal Component Analysis (CA-PCA) is a recent one that can extract features for accurate classification without the computational difficulties of other methods such as Linear Discriminant Analysis (LDA). However, the features extracted by CA-PCA are still restricted to a linear subspace of the original data space, which limits the use of this method for problems requiring nonlinear features. To resolve this limitation, we apply the kernel trick to develop a new version of CA-PCA that extracts nonlinear features, and evaluate its performance in experiments using data sets from the UCI Machine Learning Repository.
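A rough sketch of the class-augmentation idea, assuming the simplest reading of CA-PCA: each training vector is extended with a weighted one-hot class code before kernel PCA is fitted. The zero-filled class block at test time and the weighting scheme are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def fit_ca_kpca(X, y, n_components=5, weight=1.0, kernel="rbf"):
    """Class-augmented kernel PCA sketch: append a weighted one-hot class code to
    each training vector, then run kernel PCA on the augmented vectors."""
    classes = np.unique(y)
    one_hot = (y[:, None] == classes[None, :]).astype(float) * weight
    X_aug = np.hstack([X, one_hot])
    kpca = KernelPCA(n_components=n_components, kernel=kernel).fit(X_aug)
    return kpca, len(classes)

def transform_ca_kpca(kpca, n_classes, X):
    # Test points carry no label, so the class block is zero-filled here;
    # an illustrative choice, not necessarily the paper's exact projection.
    X_aug = np.hstack([X, np.zeros((X.shape[0], n_classes))])
    return kpca.transform(X_aug)
```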

Improvement in Supervector Linear Kernel SVM for Speaker Identification Using Feature Enhancement and Training Length Adjustment (특징 강화 기법과 학습 데이터 길이 조절에 의한 Supervector Linear Kernel SVM 화자식별 개선)

  • So, Byung-Min;Kim, Kyung-Wha;Kim, Min-Seok;Yang, Il-Ho;Kim, Myung-Jae;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea / v.30 no.6 / pp.330-336 / 2011
  • In this paper, we propose a new method to improve the performance of supervector linear kernel SVM (Support Vector Machine) for speaker identification. This method is based on splitting each training datum into several shorter utterances. We use four different databases for evaluating performance and use PCA (Principal Component Analysis), GKPCA (Greedy Kernel PCA) and KMDA (Kernel Multimodal Discriminant Analysis) for feature enhancement. As a result, the proposed method shows improved performance for speaker identification using supervector linear kernel SVM.
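A minimal sketch of the training-length adjustment step, assuming scikit-learn and frame-level feature matrices per utterance. The per-segment "supervector" here is just a stacked mean/standard-deviation vector, a crude stand-in for the GMM supervectors of the paper:

```python
import numpy as np
from sklearn.svm import SVC

def split_utterance(frames, n_pieces):
    """Split one utterance's frame matrix (n_frames, n_dims) into n_pieces segments."""
    return np.array_split(frames, n_pieces, axis=0)

def segment_supervector(segment):
    # Crude stand-in for a GMM mean supervector: the segment's per-dimension
    # mean and standard deviation stacked into one vector (illustrative only).
    return np.concatenate([segment.mean(axis=0), segment.std(axis=0)])

def train_split_svm(utterances, speaker_labels, n_pieces=4):
    """Turn each training utterance into several shorter pieces, build one
    supervector per piece, and train a linear-kernel SVM on all pieces."""
    X, y = [], []
    for frames, spk in zip(utterances, speaker_labels):
        for seg in split_utterance(frames, n_pieces):
            X.append(segment_supervector(seg))
            y.append(spk)
    return SVC(kernel="linear").fit(np.array(X), np.array(y))
```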

Support Vector Bankruptcy Prediction Model with Optimal Choice of RBF Kernel Parameter Values using Grid Search (Support Vector Machine을 이용한 부도예측모형의 개발 -격자탐색을 이용한 커널 함수의 최적 모수 값 선정과 기존 부도예측모형과의 성과 비교-)

  • Min Jae H.;Lee Young-Chan
    • Journal of the Korean Operations Research and Management Science Society / v.30 no.1 / pp.55-74 / 2005
  • Bankruptcy prediction has drawn a lot of research interest in previous literature, and recent studies have shown that machine learning techniques achieve better performance than traditional statistical ones. This paper employs a relatively new machine learning technique, support vector machines (SVMs), for the bankruptcy prediction problem in an attempt to suggest a new model with better explanatory power and stability. To serve this purpose, we use a grid-search technique with 5-fold cross-validation to find the optimal values of the parameters of the SVM kernel function. In addition, to evaluate the prediction accuracy of SVM, we compare its performance with multiple discriminant analysis (MDA), logistic regression analysis (Logit), and three-layer fully connected back-propagation neural networks (BPNs). The experiment results show that SVM outperforms the other methods.
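The grid search described above maps directly onto standard tooling. A minimal scikit-learn sketch of a 5-fold cross-validated search over the RBF kernel parameters; the grid values are hypothetical, since the abstract does not state the ranges actually searched:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical search ranges; the abstract does not give the grid actually used.
param_grid = {
    "svc__C": [0.1, 1, 10, 100, 1000],
    "svc__gamma": [1e-4, 1e-3, 1e-2, 1e-1, 1],
}

def fit_rbf_svm(X, y):
    """5-fold cross-validated grid search over the RBF kernel parameters C and gamma."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    search = GridSearchCV(model, param_grid, cv=5, scoring="accuracy")
    search.fit(X, y)
    return search.best_params_, search.best_estimator_
```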

A METHOD OF COMPUTING THE CONSTANT FIELD OBSTRUCTION TO THE HASSE PRINCIPLE FOR THE BRAUER GROUPS OF GENUS ONE CURVES

  • Han, Ilseop
    • Journal of the Korean Mathematical Society / v.53 no.6 / pp.1431-1443 / 2016
  • Let $k$ be a global field of characteristic unequal to two. Let $C: y^2 = f(x)$ be a nonsingular projective curve over $k$, where $f(x)$ is a quartic polynomial over $k$ with nonzero discriminant, and let $K = k(C)$ be the function field of $C$. For each prime spot $\mathfrak{p}$ on $k$, let $\hat{k}_\mathfrak{p}$ denote the corresponding completion of $k$ and $\hat{k}_\mathfrak{p}(C)$ the function field of $C \times_k \hat{k}_\mathfrak{p}$. Consider the map $$h: Br(K) \rightarrow \prod_{\mathfrak{p}} Br(\hat{k}_\mathfrak{p}(C)),$$ where $\mathfrak{p}$ ranges over all the prime spots of $k$. In this paper, we explicitly describe all the constant classes (coming from $Br(k)$) lying in the kernel of the map $h$, which is an obstruction to the Hasse principle for the Brauer groups of the curve. The kernel of $h$ can be expressed in terms of quaternion algebras with their prime spots. We also provide specific examples over $\mathbb{Q}$, the rationals, for this kernel.

Face recognition invariant to partial occlusions

  • Aisha, Azeem;Muhammad, Sharif;Hussain, Shah Jamal;Mudassar, Raza
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.7 / pp.2496-2511 / 2014
  • Face recognition is considered a complex biometric task in the field of image processing, mainly due to the constraints imposed by variation in the appearance of facial images. These variations in appearance are caused by differences in expressions and/or occlusions (sunglasses, scarf, etc.). This paper discusses incremental Kernel Fisher Discriminant Analysis on sub-classes for dealing with partial occlusions and variant expressions. This framework focuses on the division of classes into fixed-size sub-classes for effective feature extraction. For this purpose, it modifies traditional Linear Discriminant Analysis into an incremental approach in the kernel space. Experiments are performed on the AR, ORL, Yale B, and MIT-CBCL face databases. The results show a significant improvement in face recognition.
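A batch approximation of the sub-class idea above, assuming scikit-learn: each class is relabeled into fixed-size sub-classes, and a kernel Fisher discriminant is approximated by an explicit RBF feature map (Nystroem) followed by ordinary LDA. This stand-in is not the paper's incremental kernel Fisher formulation; the chunked relabeling, the Nystroem map, and all sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import Pipeline

def make_subclass_labels(y, subclass_size=10):
    """Relabel each class's samples in chunks of `subclass_size`, giving
    fixed-size sub-classes (a simple stand-in for the paper's division scheme)."""
    y_sub = np.empty(len(y), dtype=int)
    next_label = 0
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        for start in range(0, len(idx), subclass_size):
            y_sub[idx[start:start + subclass_size]] = next_label
            next_label += 1
    return y_sub

# Kernel discriminant stand-in: explicit RBF feature map followed by LDA on the
# sub-class labels (batch, not incremental). Assumes >= 200 training samples.
kfd_on_subclasses = Pipeline([
    ("kmap", Nystroem(kernel="rbf", n_components=200)),
    ("lda", LinearDiscriminantAnalysis()),
])
# Usage: kfd_on_subclasses.fit(X_train, make_subclass_labels(y_train));
# predicted sub-class labels are then mapped back to their parent classes.
```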

Multi-focus Image Fusion Technique Based on Parzen-windows Estimates (Parzen 윈도우 추정에 기반한 다중 초점 이미지 융합 기법)

  • Atole, Ronnel R.;Park, Daechul
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.8 no.4 / pp.75-88 / 2008
  • This paper presents a spatial-level nonparametric multi-focus image fusion technique based on kernel estimates of the class-conditional probability density functions underlying input image blocks. Image fusion is approached as a classification task whose posterior class probabilities, $P(w_i \mid B_{ikl})$, are calculated with likelihood density functions estimated from the training patterns. For each of the $C$ input images $I_i$, the proposed method defines classes $w_i$ and forms the fused image $Z(k,l)$ from a decision map represented by a set of $P \times Q$ blocks $B_{ikl}$ whose features maximize the discriminant function based on the Bayesian decision principle. Performance of the proposed technique is evaluated in terms of RMSE and Mutual Information (MI) as the output quality measures. The kernel width $\sigma$ was varied, and different kernels and block sizes were applied in the performance evaluation. The proposed scheme was tested with $C = 2$ and $C = 3$ input images, and the results exhibited good performance.
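A heavily simplified sketch of the Parzen-window fusion idea, assuming scikit-learn's Gaussian KernelDensity as the window estimator. The block features (mean and variance) and the decision rule (each image's block scored under that image's own class density, highest log-likelihood wins) are illustrative simplifications of the paper's Bayesian decision rule, not its exact discriminant function:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def block_features(img, P, Q):
    """Slice an image into non-overlapping PxQ blocks; return per-block features
    (here just mean and variance, an illustrative feature set) and block positions."""
    H, W = img.shape
    feats, positions = [], []
    for r in range(0, H - P + 1, P):
        for c in range(0, W - Q + 1, Q):
            b = img[r:r + P, c:c + Q]
            feats.append([b.mean(), b.var()])
            positions.append((r, c))
    return np.array(feats), positions

def fuse(images, P=8, Q=8, bandwidth=0.5):
    """Parzen-window fusion sketch: each input image defines one class; a block is
    copied from the image whose class-conditional density scores it highest."""
    per_image = [block_features(img, P, Q) for img in images]
    positions = per_image[0][1]
    kdes = [KernelDensity(bandwidth=bandwidth).fit(feats) for feats, _ in per_image]
    fused = np.zeros_like(images[0])
    for j, (r, c) in enumerate(positions):
        scores = [kde.score_samples(feats[j:j + 1])[0]
                  for kde, (feats, _) in zip(kdes, per_image)]
        best = int(np.argmax(scores))
        fused[r:r + P, c:c + Q] = images[best][r:r + P, c:c + Q]
    return fused
```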


Subject Independent Classification of Implicit Intention Based on EEG Signals

  • Oh, Sang-Hoon
    • International Journal of Contents / v.12 no.3 / pp.12-16 / 2016
  • Brain-computer interfaces (BCI) have usually focused on classifying the explicitly expressed intentions of humans. In contrast, implicit intentions should be considered to develop more intelligent systems. However, classifying implicit intentions is more difficult than classifying explicit ones, and the difficulty increases severely for subject-independent classification. In this paper, we address the subject-independent classification of implicit intention based on electroencephalography (EEG) signals. Among many machine learning models, we use the support vector machine (SVM) with a radial basis function kernel to classify the EEG signals. Fisher scores are evaluated after extracting the gamma, beta, alpha, and theta band powers of the EEG signals from thirty electrodes. Since a more discriminant feature has a larger Fisher score, the band powers of the EEG signals are presented to the SVM based on their Fisher scores. By training the SVM with 1-out-of-9 validation, the best classification accuracy is approximately 65% with the gamma and theta components.
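The Fisher-score ranking followed by an RBF SVM is straightforward to sketch. Assuming scikit-learn and band-power feature matrices, with the number of retained features an illustrative choice rather than the paper's:

```python
import numpy as np
from sklearn.svm import SVC

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class scatter over within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)

def train_on_top_features(X, y, n_keep=20):
    """Rank band-power features by Fisher score and train an RBF-kernel SVM on the
    most discriminant ones (n_keep is an illustrative choice, not the paper's)."""
    keep = np.argsort(fisher_scores(X, y))[::-1][:n_keep]
    return SVC(kernel="rbf").fit(X[:, keep], y), keep
```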

Neural-Q method based on KFD regression (KFD 회귀를 이용한 뉴럴-큐 기법)

  • 조원희;김영일;박주영
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.05a / pp.85-88 / 2003
  • Q-learning, one approach to reinforcement learning, has recently been applied successfully to the Linear Quadratic Regulation (LQR) problem. In particular, since it can solve the problem through learning from appropriate inputs and outputs alone, without detailed information about the system model parameters, it can be a very practical method in some situations. The Neural-Q method replaces the Q-value of Q-learning with the output of an MLP (multilayer perceptron) neural network, so that optimal control problems for nonlinear systems can be handled. However, the Neural-Q method has drawbacks: the network structure must be fixed by trial and error before training with the backpropagation algorithm, and under backpropagation the network weights tend to converge to local optima. In this paper, we propose a technique that approximates the Q function using KFD regression as the learning tool for Neural-Q, and derive the related equations. Simulation experiments are used to examine the applicability of the proposed Neural-Q method.
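The abstract does not give the KFD regression equations, so the following is only a generic kernel-regression Q-learning sketch: scikit-learn's KernelRidge stands in for KFD regression, and a finite action set replaces the continuous LQR setting. All of those substitutions are assumptions for illustration.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fitted_q_iteration(transitions, n_actions, gamma=0.95, n_iters=20):
    """Batch Q-learning sketch with a kernel regressor as the Q-function approximator.
    `transitions` is a list of (state, action, reward, next_state) tuples with a
    finite action set {0, ..., n_actions-1}; KernelRidge stands in for KFD regression."""
    S = np.array([t[0] for t in transitions], dtype=float)
    A = np.array([t[1] for t in transitions], dtype=float)
    R = np.array([t[2] for t in transitions], dtype=float)
    S2 = np.array([t[3] for t in transitions], dtype=float)

    X = np.hstack([S, A[:, None]])                        # Q is regressed on (state, action)
    q = KernelRidge(kernel="rbf", alpha=1e-2).fit(X, R)   # initialize with immediate rewards
    for _ in range(n_iters):
        # Bellman targets: r + gamma * max_a' Q(s', a')
        q_next = np.column_stack([
            q.predict(np.hstack([S2, np.full((len(S2), 1), a)]))
            for a in range(n_actions)
        ])
        q = KernelRidge(kernel="rbf", alpha=1e-2).fit(X, R + gamma * q_next.max(axis=1))
    return q
```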


Corporate credit rating prediction using support vector machines

  • Lee, Yong-Chan
    • Proceedings of the Korea Intelligent Information System Society Conference / 2005.11a / pp.571-578 / 2005
  • Corporate credit rating analysis has drawn a lot of research interest in previous studies, and recent studies have shown that machine learning techniques achieve better performance than traditional statistical ones. This paper applies support vector machines (SVMs) to the corporate credit rating problem in an attempt to suggest a new model with better explanatory power and stability. To serve this purpose, the researcher uses a grid-search technique with 5-fold cross-validation to find the optimal parameter values of the SVM kernel function. In addition, to evaluate the prediction accuracy of SVM, the researcher compares its performance with those of multiple discriminant analysis (MDA), case-based reasoning (CBR), and three-layer fully connected back-propagation neural networks (BPNs). The experiment results show that SVM outperforms the other methods.
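A sketch of the comparison protocol described above, assuming scikit-learn: a grid-searched RBF SVM is evaluated against LDA (a stand-in for MDA), k-nearest neighbours (a crude stand-in for CBR), and a small MLP (a stand-in for the BPN), all under 5-fold cross-validation. The stand-in models and hyperparameter values are illustrative assumptions, not the paper's exact setup.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_models(X, y):
    """Cross-validated comparison sketch: grid-searched RBF SVM vs. LDA, k-NN, MLP."""
    svm = GridSearchCV(
        make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        {"svc__C": [1, 10, 100], "svc__gamma": [1e-3, 1e-2, 1e-1]},
        cv=5,
    )
    models = {
        "SVM": svm,
        "MDA (LDA stand-in)": LinearDiscriminantAnalysis(),
        "CBR (k-NN stand-in)": KNeighborsClassifier(n_neighbors=5),
        "BPN (MLP stand-in)": make_pipeline(
            StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)),
    }
    return {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```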
