• Title/Summary/Keyword: kernel-based method


New Fuzzy Inference System Using a Kernel-based Method

  • Kim, Jong-Cheol;Won, Sang-Chul;Suga, Yasuo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.2393-2398
    • /
    • 2003
  • In this paper, we propose a new fuzzy inference system for modeling nonlinear systems given input and output data. In the suggested fuzzy inference system, the number of fuzzy rules and the parameter values of the membership functions are automatically decided by the kernel-based method. The kernel-based method performs first a linear transformation and then a kernel mapping: the linear transformation projects the input space into a linearly transformed input space, and the kernel mapping projects that space into a high-dimensional feature space. The structure of the proposed fuzzy inference system is equal to a Takagi-Sugeno fuzzy model whose input variables are weighted linear combinations of the original input variables. In addition, the number of fuzzy rules can be reduced, subject to optimizing a given criterion, by adjusting the linear transformation matrix and the parameter values of the kernel functions with the gradient descent method. Once the structure is selected, the coefficients in the consequent part are determined by the least squares method. A simulation result illustrates the effectiveness of the proposed technique.

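For orientation, below is a minimal sketch of the kind of model the abstract describes: inputs pass through a linear transformation A, Gaussian kernels on the transformed inputs act as rule firing strengths, and the affine consequents of a Takagi-Sugeno model are fitted by least squares. The function names and the fixed rule centers and widths are illustrative assumptions; the paper additionally learns A and the kernel parameters by gradient descent.

```python
import numpy as np

def rule_weights(X, A, centers, widths):
    # Gaussian kernel memberships on the linearly transformed inputs Z = X @ A,
    # normalized so that each sample's rule weights sum to one.
    Z = X @ A
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-d2 / (2.0 * widths ** 2))
    return W / W.sum(axis=1, keepdims=True)

def fit_consequents(X, y, A, centers, widths):
    # Least-squares fit of the affine consequents of a Takagi-Sugeno model.
    W = rule_weights(X, A, centers, widths)                    # (n, rules)
    X1 = np.hstack([X, np.ones((len(X), 1))])                  # (n, d + 1)
    Phi = (W[:, :, None] * X1[:, None, :]).reshape(len(X), -1)
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta.reshape(W.shape[1], X1.shape[1])

def predict(X, A, centers, widths, theta):
    W = rule_weights(X, A, centers, widths)
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return np.einsum('nr,rk,nk->n', W, theta, X1)
```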

Nonparametric Kernel Regression Function Estimation with Bootstrap Method

  • Kim, Dae-Hak
    • Journal of the Korean Statistical Society
    • /
    • v.22 no.2
    • /
    • pp.361-368
    • /
    • 1993
  • Kernel-type estimators have become abundant in recent years. In this paper, we propose a bandwidth selection method for kernel regression with fixed design based on a bootstrap procedure. Mathematical properties of the proposed bootstrap-based bandwidth selection method are discussed, and its performance in the small-sample case is compared with that of the cross-validation method via a simulation study.

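A rough sketch of residual-bootstrap bandwidth selection for kernel regression (Nadaraya-Watson form with a Gaussian kernel). The pilot bandwidth and the resampling scheme below are common defaults, not necessarily the exact procedure analyzed in the paper.

```python
import numpy as np

def nw_estimate(x_eval, x, y, h):
    # Nadaraya-Watson kernel regression estimate with a Gaussian kernel.
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def bootstrap_bandwidth(x, y, candidates, pilot_h, n_boot=200, seed=0):
    # Resample residuals around a pilot fit and pick the candidate bandwidth
    # whose bootstrap-estimated mean squared error is smallest.
    rng = np.random.default_rng(seed)
    pilot = nw_estimate(x, x, y, pilot_h)
    resid = y - pilot
    resid -= resid.mean()
    scores = []
    for h in candidates:
        mse = 0.0
        for _ in range(n_boot):
            y_star = pilot + rng.choice(resid, size=len(y), replace=True)
            mse += np.mean((nw_estimate(x, x, y_star, h) - pilot) ** 2)
        scores.append(mse / n_boot)
    return candidates[int(np.argmin(scores))]
```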

A study on convergence and complexity of reproducing kernel collocation method

  • Hu, Hsin-Yun;Lai, Chiu-Kai;Chen, Jiun-Shyan
    • Interaction and multiscale mechanics
    • /
    • v.2 no.3
    • /
    • pp.295-319
    • /
    • 2009
  • In this work, we discuss a reproducing kernel collocation method (RKCM) for solving $2^{nd}$-order PDEs based on the strong formulation, where reproducing kernel shape functions with compact support are used as the approximation functions. The method, based on strong-form collocation, avoids domain integration and leads to a well-conditioned discrete system of equations. We investigate the convergence and the computational complexity of the proposed method. An important result obtained from the analysis is that the degree of the basis in the reproducing kernel approximation has to be greater than one for the method to converge. Some numerical experiments are provided to validate the error analysis. The complexity of RKCM is also analyzed, and a complexity comparison with the weak formulation using the reproducing kernel approximation is presented.
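
To illustrate the strong-form collocation idea (point-wise enforcement of the PDE, no domain integration, an over-determined least-squares system), here is a one-dimensional sketch. Gaussian trial functions are used as a stand-in for the reproducing kernel shape functions, whose construction is more involved.

```python
import numpy as np

def g(x, c, s):
    # Gaussian trial function centered at c with width s.
    return np.exp(-((x - c) / s) ** 2)

def g_dd(x, c, s):
    # Second derivative of the Gaussian trial function.
    t = (x - c) / s
    return (4.0 * t ** 2 - 2.0) / s ** 2 * np.exp(-t ** 2)

def collocation_solve(f, n_basis=20, n_col=40, s=0.1):
    # Strong-form collocation for -u'' = f on (0, 1) with u(0) = u(1) = 0.
    centers = np.linspace(0.0, 1.0, n_basis)
    xc = np.linspace(0.0, 1.0, n_col)                      # collocation points
    A = np.vstack([
        -g_dd(xc[:, None], centers[None, :], s),           # PDE residual rows
        g(np.array([[0.0], [1.0]]), centers[None, :], s),  # boundary rows
    ])
    b = np.concatenate([f(xc), [0.0, 0.0]])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)           # least-squares collocation
    return lambda x: g(x[:, None], centers[None, :], s) @ coef
```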

An improved kernel principal component analysis based on sparse representation for face recognition

  • Huang, Wei;Wang, Xiaohui;Zhu, Yinghui;Zheng, Gengzhong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.6
    • /
    • pp.2709-2729
    • /
    • 2016
  • Representation-based classification, kernel methods and sparse representation have received much attention in the field of face recognition. In this paper, we propose an improved kernel principal component analysis method based on sparse representation to improve the accuracy and robustness of face recognition. First, the distances between the test sample and all training samples in kernel space are estimated based on collaborative representation. Second, the S training samples with the smallest distances are selected, and Kernel Principal Component Analysis (KPCA) is used to extract the features that are exploited for classification. The proposed method implements the sparse representation under ℓ2 regularization and performs feature extraction twice to improve robustness. We also investigate the relationship between the accuracy and the sparseness coefficient, and between the accuracy and the dimensionality. Comparative experiments are conducted on the ORL, GT and UMIST face databases. The experimental results show that the proposed method is more effective and robust than several state-of-the-art methods, including Sparse Representation based Classification (SRC), Collaborative Representation based Classification (CRC), KCRC and Two Phase Test Sample Sparse Representation (TPTSR).
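
A compact sketch of the two ingredients named in the abstract: ℓ2-regularized (collaborative) coding in kernel space to rank training samples by closeness to the test sample, followed by kernel PCA feature extraction on the retained samples. The RBF kernel, the regularization weight, and the ranking rule are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def rbf(A, B, gamma=1e-3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def rank_by_collaborative_coding(x, X_train, lam=0.01, gamma=1e-3):
    # l2-regularized coding of the test sample over all training samples in
    # kernel space; samples with larger coefficients are ranked as closer.
    K = rbf(X_train, X_train, gamma)
    k = rbf(X_train, x[None, :], gamma).ravel()
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), k)
    return np.argsort(-np.abs(alpha))                  # indices, closest first

def kpca_features(X, n_components=20, gamma=1e-3):
    # Kernel PCA: double-center the kernel matrix and project the (selected)
    # training samples onto its leading eigenvectors.
    K = rbf(X, X, gamma)
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    return Kc @ (vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12)))
```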

A study on bandwidth selection based on ASE for nonparametric density estimators

  • Kim, Tae-Yoon
    • Journal of the Korean Statistical Society
    • /
    • v.29 no.3
    • /
    • pp.307-313
    • /
    • 2000
  • Suppose we have a set of data $X_1, \ldots, X_n$ and employ a kernel density estimator to estimate the marginal density of X. In this article the bandwidth selection problem for the kernel density estimator is examined closely. In particular, the Kullback-Leibler method (a bandwidth selection method based on the average square error (ASE)) is considered.

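For concreteness, a leave-one-out likelihood (Kullback-Leibler flavoured) bandwidth selector for a Gaussian kernel density estimator. This is a common stand-in for the kind of selector the note discusses, not its exact ASE-based criterion.

```python
import numpy as np

def kde(x_eval, data, h):
    # Gaussian kernel density estimate at the points x_eval.
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def likelihood_cv_bandwidth(data, candidates):
    # Leave-one-out log-likelihood cross-validation over candidate bandwidths.
    best_h, best_score = None, -np.inf
    for h in candidates:
        score = 0.0
        for i in range(len(data)):
            rest = np.delete(data, i)
            score += np.log(max(kde(data[i:i + 1], rest, h)[0], 1e-300))
        if score > best_score:
            best_h, best_score = h, score
    return best_h
```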

A Note on Deconvolution Estimators when Measurement Errors are Normal

  • Lee, Sung-Ho
    • Communications for Statistical Applications and Methods
    • /
    • v.19 no.4
    • /
    • pp.517-526
    • /
    • 2012
  • In this paper, a support vector method is proposed for use when the sample observations are contaminated by a normally distributed measurement error. The performance of deconvolution density estimators based on the support vector method is explored and compared with kernel density estimators by means of a simulation study. An interesting result was that, for the estimation of a kurtotic density, the support vector deconvolution estimator with a Gaussian kernel showed better performance than the classical deconvolution kernel estimator.
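
The classical deconvolution kernel estimator used as the benchmark here can be sketched as follows for a known normal error standard deviation; the kernel with characteristic function $(1-t^2)^3$ on $[-1,1]$ is one standard choice and is an assumption of this sketch, not necessarily the paper's.

```python
import numpy as np

def deconv_kernel(u, h, sigma, n_grid=201):
    # Deconvolution kernel for N(0, sigma^2) measurement error, built from a
    # kernel whose characteristic function is (1 - t^2)^3 on [-1, 1]:
    # L(u) = (1 / 2 pi) * integral of cos(t u) (1 - t^2)^3 exp(sigma^2 t^2 / (2 h^2)) dt.
    t = np.linspace(-1.0, 1.0, n_grid)
    integrand = (np.cos(np.outer(u, t)) * (1.0 - t ** 2) ** 3
                 * np.exp(sigma ** 2 * t ** 2 / (2.0 * h ** 2)))
    return integrand.sum(axis=1) * (t[1] - t[0]) / (2.0 * np.pi)

def deconv_density(x_eval, y, h, sigma):
    # f_hat(x) = (1 / (n h)) * sum_j L((x - Y_j) / h) on the contaminated data Y_j.
    u = (x_eval[:, None] - y[None, :]) / h
    L = deconv_kernel(u.ravel(), h, sigma).reshape(u.shape)
    return L.sum(axis=1) / (len(y) * h)
```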

A Support Vector Method for the Deconvolution Problem

  • Lee, Sung-Ho
    • Communications for Statistical Applications and Methods
    • /
    • v.17 no.3
    • /
    • pp.451-457
    • /
    • 2010
  • This paper considers the problem of nonparametric deconvolution density estimation when sample observations are contaminated by double exponentially distributed errors. Three different deconvolution density estimators are introduced: a weighted kernel density estimator, a kernel density estimator based on the support vector regression method in an RKHS, and a classical kernel density estimator. The performance of these deconvolution density estimators is compared by means of a simulation study.
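
For double exponentially (Laplace) distributed errors, the classical deconvolution kernel has a closed form when a Gaussian kernel is used, which is roughly what the classical estimator in this comparison looks like; the weighted estimator and the support-vector-regression variant in an RKHS are not reproduced here. The error scale b is assumed known.

```python
import numpy as np

def laplace_deconv_kde(x_eval, y, h, b):
    # Classical deconvolution kernel estimator for Laplace(0, b) errors with a
    # Gaussian kernel; the deconvolution kernel has the closed form
    # L(u) = phi(u) * (1 + (b / h)^2 * (1 - u^2)).
    u = (x_eval[:, None] - y[None, :]) / h
    phi = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    L = phi * (1.0 + (b / h) ** 2 * (1.0 - u ** 2))
    return L.sum(axis=1) / (len(y) * h)
```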

Music/Voice Separation Based on Kernel Back-Fitting Using Weighted β-Order MMSE Estimation

  • Kim, Hyoung-Gook;Kim, Jin Young
    • ETRI Journal
    • /
    • v.38 no.3
    • /
    • pp.510-517
    • /
    • 2016
  • Recent developments in the separation of mixed signals into music and voice components have attracted the attention of many researchers. Recently, iterative kernel back-fitting, also known as kernel additive modeling, was proposed and achieves good results for music/voice separation. To obtain minimum mean square error (MMSE) estimates of the short-time Fourier transforms of the sources, generalized spatial Wiener filtering (GW) is typically used. In this paper, we propose an advanced music/voice separation method that utilizes a generalized weighted $\beta$-order MMSE estimation (WbE) based on iterative kernel back-fitting (KBF). In the proposed method, WbE is used for the separation step of the mixed music signal, while KBF permits kernel spectrogram model fitting at each iteration. Experimental results show that the proposed method achieves better separation performance than GW and existing Bayesian estimators.
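
A minimal kernel back-fitting (kernel additive modelling) loop, sketched with plain Wiener-style gains in place of the paper's weighted β-order MMSE estimator. The STFT size and the median-filter footprints for the "music" and "voice" layers are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import median_filter

def kam_music_voice(x, fs, n_iter=5, eps=1e-10):
    # Kernel back-fitting sketch: the music layer is modelled as temporally
    # smooth (long time-direction median kernel), the voice layer as varying
    # in time but smooth in frequency (frequency-direction kernel).
    f, t, X = stft(x, fs, nperseg=1024)
    P = np.abs(X) ** 2
    S = [0.5 * P, 0.5 * P]                     # initial source power estimates
    footprints = [(1, 17), (7, 1)]             # (freq, time) median-filter sizes
    for _ in range(n_iter):
        S = [median_filter(s, size=k) for s, k in zip(S, footprints)]
        total = S[0] + S[1] + eps
        S = [P * s / total for s in S]         # Wiener-style re-estimation
    sources = []
    for s in S:
        mask = s / (S[0] + S[1] + eps)
        _, y = istft(X * mask, fs, nperseg=1024)
        sources.append(y)
    return sources                             # [music_estimate, voice_estimate]
```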

Speaker Verification Using SVM Kernel with GMM-Supervector Based on the Mahalanobis Distance (Mahalanobis 거리측정 방법 기반의 GMM-Supervector SVM 커널을 이용한 화자인증 방법)

  • Kim, Hyoung-Gook;Shin, Dong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.3
    • /
    • pp.216-221
    • /
    • 2010
  • In this paper, we propose a speaker verification method using a Support Vector Machine (SVM) kernel with a Gaussian Mixture Model (GMM) supervector based on the Mahalanobis distance. The proposed GMM-supervector SVM kernel method combines a GMM with an SVM. The GMM supervectors are generated from the GMM parameters of the speaker's and other speakers' utterances. The speaker verification threshold of the GMM supervectors is decided by an SVM kernel based on the Mahalanobis distance to improve speaker verification accuracy. The experimental results for text-independent speaker verification using 20 speakers demonstrate the performance of the proposed method compared to GMM, SVM, a GMM-supervector SVM kernel based on the Kullback-Leibler (KL) divergence, and a GMM-supervector SVM kernel based on the Bhattacharyya distance.
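
A sketch of the pipeline, assuming a diagonal-covariance universal background model (UBM) from scikit-learn and a Gaussian kernel on a Mahalanobis-style distance between supervectors. The relevance factor, the kernel scaling and the exact distance are assumptions of this sketch, not the paper's specification.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def supervector(frames, ubm, relevance=16.0):
    # MAP-adapt the UBM means towards one utterance and stack them.
    post = ubm.predict_proba(frames)                  # (T, M) responsibilities
    n = post.sum(axis=0) + 1e-8
    fx = post.T @ frames                              # first-order statistics
    alpha = (n / (n + relevance))[:, None]
    means = alpha * (fx / n[:, None]) + (1.0 - alpha) * ubm.means_
    return means.ravel()

def mahalanobis_kernel(SV_a, SV_b, ubm):
    # Gaussian kernel on a Mahalanobis-style distance between supervectors,
    # scaled by the UBM's diagonal variances (covariance_type='diag').
    inv_var = 1.0 / ubm.covariances_.reshape(-1)
    diff = SV_a[:, None, :] - SV_b[None, :, :]
    d2 = (diff ** 2 * inv_var).sum(axis=2)
    return np.exp(-0.5 * d2 / SV_a.shape[1])

# Usage sketch:
#   ubm = GaussianMixture(64, covariance_type='diag').fit(background_frames)
#   SV = np.vstack([supervector(utt, ubm) for utt in training_utterances])
#   clf = SVC(kernel='precomputed').fit(mahalanobis_kernel(SV, SV, ubm), labels)
```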

AN ELIGIBLE KERNEL BASED PRIMAL-DUAL INTERIOR-POINT METHOD FOR LINEAR OPTIMIZATION

  • Cho, Gyeong-Mi
    • Honam Mathematical Journal
    • /
    • v.35 no.2
    • /
    • pp.235-249
    • /
    • 2013
  • It is well known that each kernel function defines a primal-dual interior-point method (IPM). Most polynomial-time interior-point algorithms for linear optimization (LO) are based on the logarithmic kernel function ([9]). In this paper we define a new eligible kernel function and propose a new search direction and proximity function based on this function for LO problems. We show that the new algorithm has $\mathcal{O}((\log p)^{\frac{5}{2}}\sqrt{n}\,\log n\,\log\frac{n}{\epsilon})$ and $\mathcal{O}(q^{\frac{3}{2}}(\log p)^{3}\sqrt{n}\,\log\frac{n}{\epsilon})$ iteration complexity for large- and small-update methods, respectively. These are currently the best known complexity results for such methods.
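
For reference, the general shape of the kernel-function framework the paper works in: a kernel function ψ induces both a proximity measure and the search direction. The classical logarithmic kernel shown below is the baseline cited in the abstract; the paper's new eligible kernel is not reproduced here.

```latex
% Scaled variable, barrier built from a kernel function psi, and the
% classical logarithmic kernel as the baseline example.
\[
  v_i = \sqrt{\frac{x_i s_i}{\mu}}, \qquad
  \Psi(v) = \sum_{i=1}^{n} \psi(v_i), \qquad
  \psi_{\log}(t) = \frac{t^{2}-1}{2} - \log t .
\]
% The kernel function determines the search direction through the scaled
% Newton system.
\[
  \bar{A}\,d_x = 0, \qquad
  \bar{A}^{\mathsf{T}}\Delta y + d_s = 0, \qquad
  d_x + d_s = -\nabla \Psi(v).
\]
```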