• Title/Summary/Keyword: Kernel Space

Search results: 237

A Novel Multiple Kernel Sparse Representation based Classification for Face Recognition

  • Zheng, Hao; Ye, Qiaolin; Jin, Zhong
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 8, No. 4, pp. 1463-1480, 2014
  • It is well known that sparse coding is effective for feature extraction in face recognition; in particular, the sparse model can be learned in a kernel space to obtain better performance. Some recent algorithms use a single kernel in the sparse model, which does not make full use of the available kernel information. The key issues are how to select suitable kernel weights and how to combine the selected kernels. In this paper, we propose a novel multiple kernel sparse representation based classification for face recognition (MKSRC), which performs sparse coding and dictionary learning in a multiple kernel space. Initially, several candidate kernels are combined and the sparse coefficients are computed; the kernel weights are then obtained from the sparse coefficients and iterated to convergence, which makes them optimal. The experimental results show that our algorithm outperforms other state-of-the-art algorithms and demonstrate the promising performance of the proposed approach.
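As a rough illustration of the kernelized sparse coding step described in the abstract, the sketch below combines several RBF kernels with fixed weights and solves the resulting sparse coding problem through a Cholesky factorization. This is illustrative only: the kernel-weight update and dictionary learning of MKSRC are omitted, and all function names and parameter values are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def rbf(X, Y, gamma):
    """RBF kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, Y, weights, gammas):
    """Weighted sum of several RBF kernels (a sum of PSD kernels is PSD)."""
    return sum(w * rbf(X, Y, g) for w, g in zip(weights, gammas))

def kernel_sparse_code(D, y, weights, gammas, lam=0.01):
    """Sparse-code y over dictionary atoms D in the combined kernel space.

    Minimizes ||phi(y) - Phi x||^2 + lam*||x||_1, which depends on the data
    only through kernel evaluations; a Cholesky factor turns it into an
    ordinary Lasso problem.
    """
    K_DD = combined_kernel(D, D, weights, gammas) + 1e-8 * np.eye(len(D))
    k_Dy = combined_kernel(D, y[None, :], weights, gammas).ravel()
    L = np.linalg.cholesky(K_DD)          # K_DD = L L^T
    A = L.T                               # A^T A = K_DD
    b = np.linalg.solve(L, k_Dy)          # A^T b = k_Dy
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    lasso.fit(A, b)
    return lasso.coef_
```

Classification would then proceed as in sparse representation classifiers: reconstruct the query from each class's atoms and pick the class with the smallest residual.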

A Virtualized Kernel for Effective Memory Test

  • 박희권; 윤대석; 최종무
    • Journal of KIISE: Computer Systems and Theory, Vol. 34, No. 12, pp. 618-629, 2007
  • In this paper, we propose a virtualized kernel for effective memory testing in 64-bit multi-core computing environments. Here, "effective" means that all physical memory, including the memory region occupied by the kernel itself, can be tested without rebooting the system. To this end, the virtualized kernel provides four techniques. First, it allows the kernel and applications to access physical memory directly, so that various memory test patterns can be written to and read from any desired memory location. Second, it allows two or more kernel images to run at different memory locations. Third, it isolates the memory space used by one kernel from the other kernels. Fourth, it provides context switching between kernels using kernel hibernation. The proposed virtualized kernel was implemented by modifying Linux kernel 2.6.18 on an Intel Xeon system equipped with two dual-core CPUs and 2 GB of memory. Experimental results verified that the designed virtualized kernel can be used effectively for memory testing.

COMPUTATION OF THE MATRIX OF THE TOEPLITZ OPERATOR ON THE HARDY SPACE

  • Chung, Young-Bok
    • Communications of the Korean Mathematical Society, Vol. 34, No. 4, pp. 1135-1143, 2019
  • The matrix representation of the Toeplitz operator on the Hardy space, with respect to a generalized orthonormal basis for the space of square integrable functions associated to a bounded simply connected region in the complex plane, is computed completely in terms of only the Szegő and Garabedian kernels.
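For orientation, in the simplest model case (the Hardy space of the unit disk with the monomial basis $\{z^n\}$), the matrix of the Toeplitz operator $T_\varphi$ reduces to the familiar Toeplitz form

\[
\langle T_\varphi z^k, z^j \rangle = \hat{\varphi}(j-k), \qquad \hat{\varphi}(n) = \frac{1}{2\pi}\int_0^{2\pi} \varphi(e^{i\theta})\, e^{-in\theta}\, d\theta,
\]

i.e. it is constant along diagonals. The paper's computation generalizes this picture to arbitrary bounded simply connected regions, with the Szegő and Garabedian kernels replacing the explicit Fourier coefficients.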

Function Approximation Based on a Network with Kernel Functions of Bounds and Locality: An Approach to Non-Parametric Estimation

  • Kil, Rhee-M.
    • ETRI Journal, Vol. 15, No. 2, pp. 35-51, 1993
  • This paper presents function approximation based on nonparametric estimation. As an estimation model for function approximation, a three-layered network composed of input, hidden, and output layers is considered. The input and output layers have linear activation units, while the hidden layer has nonlinear activation units, or kernel functions, which have the characteristics of bounds and locality. Using this type of network, a many-to-one function is synthesized over the domain of the input space by a number of kernel functions. In this network, we have to estimate the necessary number of kernel functions as well as the parameters associated with them. For this purpose, a new method of parameter estimation is considered in which a linear learning rule is applied between the hidden and output layers while a nonlinear (piecewise-linear) learning rule is applied between the input and hidden layers. The linear learning rule updates the output weights between the hidden and output layers in the sense of the Linear Minimization of Mean Square Error (LMMSE) in the space of kernel functions, while the nonlinear learning rule updates the parameters of the kernel functions based on the gradient of the actual network output with respect to the parameters (especially the shape) of the kernel functions. This approach to parameter adaptation provides near-optimal values of the parameters associated with the kernel functions in the sense of minimizing the mean square error. As a result, the suggested nonparametric estimation provides an efficient way of function approximation from the viewpoint of the number of kernel functions as well as learning speed.
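The linear half of this scheme, fitting the output weights of bounded, local kernel units by least squares, can be sketched as follows. Here the centers and widths are simply fixed on a grid rather than adapted by the paper's nonlinear learning rule, and all values are illustrative assumptions.

```python
import numpy as np

def gaussian_units(x, centers, widths):
    """Activations of bounded, local (Gaussian) kernel units, one per column."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2)
                  / (2 * widths[None, :] ** 2))

# target: a many-to-one function of one variable, e.g. sin
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x)

centers = np.linspace(0, 2 * np.pi, 12)   # fixed here; the paper adapts these
widths = np.full(12, 0.6)

H = gaussian_units(x, centers, widths)
w, *_ = np.linalg.lstsq(H, y, rcond=None)  # LMMSE output weights
y_hat = H @ w
mse = float(np.mean((y - y_hat) ** 2))
```

In the paper's full method, the number of units and their centers/shapes are also estimated, with a gradient-based rule between the input and hidden layers; this sketch shows only the closed-form output-layer step.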


Centroid Neural Network with Bhattacharyya Kernel

  • 이송재; 박동철
    • The Journal of Korean Institute of Communications and Information Sciences, Vol. 32, No. 9C, pp. 861-866, 2007
  • This paper proposes a clustering algorithm that applies the Bhattacharyya kernel to the centroid neural network (CNN) for clustering Gaussian probability distribution function data (Bhattacharyya Kernel based CNN, BK-CNN). The proposed BK-CNN is based on the centroid neural network, an unsupervised algorithm, and projects the data into a feature space using a kernel method. The kernel method is introduced to solve nonlinear problems of the input space linearly; a kernel based on the Bhattacharyya distance is used to measure the distance between probability distributions. When the proposed BK-CNN was applied to image data classification, it achieved an average classification accuracy improvement of 1.7%-4.3% over existing algorithms such as k-means with the Bhattacharyya kernel, the self-organizing map (SOM), and the centroid neural network.
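The distance/kernel computation underlying BK-CNN can be sketched directly from the standard Bhattacharyya distance between two Gaussians. This is a minimal illustration of that computation only, not the full clustering loop; the function name is an assumption.

```python
import numpy as np

def bhattacharyya_kernel(mu1, S1, mu2, S2):
    """Kernel value exp(-D_B) from the Bhattacharyya distance D_B
    between Gaussians N(mu1, S1) and N(mu2, S2)."""
    S = 0.5 * (S1 + S2)
    diff = mu1 - mu2
    # Mahalanobis-type term between the means
    term1 = 0.125 * diff @ np.linalg.solve(S, diff)
    # covariance-mismatch term
    term2 = 0.5 * np.log(np.linalg.det(S)
                         / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return np.exp(-(term1 + term2))   # equals 1 iff the two Gaussians coincide
```

In the clustering algorithm, each datum is itself a Gaussian (a mean and covariance), and these kernel values play the role that dot products play in ordinary centroid updates.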

ORTHONORMAL BASIS FOR THE BERGMAN SPACE

  • Chung, Young-Bok; Na, Heui-Geong
    • Honam Mathematical Journal, Vol. 36, No. 4, pp. 777-786, 2014
  • We construct an orthonormal basis for the Bergman space associated to a simply connected domain. We use the orthonormal basis for the Hardy space consisting of the Szegő kernel and the Riemann mapping function, and rewrite their area integrals in terms of arc length integrals using the complex Green's identity. We also make a note about the matrix of a Toeplitz operator with respect to the orthonormal basis constructed in the paper.
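In the model case of the unit disk the construction is explicit: the normalized monomials

\[
e_n(z) = \sqrt{\frac{n+1}{\pi}}\, z^n, \qquad n \ge 0,
\]

form an orthonormal basis for the Bergman space, and summing $\sum_{n} e_n(z)\overline{e_n(w)}$ recovers the Bergman kernel $K(z,w) = \frac{1}{\pi (1 - z\bar{w})^2}$. The paper's construction generalizes this to arbitrary simply connected domains via the Szegő kernel and the Riemann mapping function.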

BERGMAN KERNEL ESTIMATES FOR GENERALIZED FOCK SPACES

  • Cho, Hong Rae; Park, Soohyun
    • East Asian Mathematical Journal, Vol. 33, No. 1, pp. 37-44, 2017
  • We prove size estimates of the Bergman kernel for the generalized Fock space ${\mathcal{F}}^2_{\varphi}$, where ${\varphi}$ belongs to the class $\mathcal{W}$. The main tool of the proof is the estimate on the canonical solution to the ${\bar{\partial}}$-equation, for which we use Delin's weighted $L^2$-estimate ([3], [6]).

A note on SVM estimators in RKHS for the deconvolution problem

  • Lee, Sungho
    • Communications for Statistical Applications and Methods, Vol. 23, No. 1, pp. 71-83, 2016
  • In this paper we discuss a deconvolution density estimator obtained using support vector machines (SVM) and Tikhonov's regularization method for solving ill-posed problems in a reproducing kernel Hilbert space (RKHS). A remarkable property of the SVM is that it leads to sparse solutions, but the support vector deconvolution density estimator does not preserve sparsity as well as expected. Thus, in Section 3, we propose another support vector deconvolution estimator (method II) which leads to a very sparse solution. The performance of the deconvolution density estimators based on the support vector method is compared with the classical kernel deconvolution density estimator for the important cases of Gaussian and Laplacian measurement error by means of a simulation study. In the case of Gaussian error, the proposed support vector deconvolution estimator shows the same performance as the classical kernel deconvolution density estimator.
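The classical kernel deconvolution estimator used as the benchmark has, for Laplacian error and a Gaussian kernel, a convenient closed form: the deconvoluting kernel is K*(u) = φ(u)·(1 − (b/h)²(u² − 1)). The sketch below implements that estimator; the bandwidth, error scale, and sample size are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def deconv_density_laplace(x, Y, h, b):
    """Classical deconvolution kernel density estimate at points x,
    from observations Y = X + eps with eps ~ Laplace(0, b),
    using a Gaussian kernel with bandwidth h."""
    u = (x[:, None] - Y[None, :]) / h
    phi = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    Kstar = phi * (1.0 - (b / h) ** 2 * (u ** 2 - 1.0))  # deconvoluting kernel
    return Kstar.mean(axis=1) / h

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, 500)            # unobserved signal
Y = X + rng.laplace(0.0, 0.3, 500)       # contaminated observations
grid = np.linspace(-4, 4, 200)
fhat = deconv_density_laplace(grid, Y, h=0.45, b=0.3)
```

Since K* integrates to one, the estimate integrates to one (up to grid truncation), though it can dip slightly negative in the tails, which is one motivation for the sparse SVM-based alternatives the paper studies.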

The Kernel Trick for Content-Based Media Retrieval in Online Social Networks

  • Cha, Guang-Ho
    • Journal of Information Processing Systems, Vol. 17, No. 5, pp. 1020-1033, 2021
  • Nowadays, online and mobile social network services (SNS) are very popular and widely used in our society and daily lives to instantly share, disseminate, and search information. In particular, SNS such as YouTube, Flickr, Facebook, and Amazon allow users to upload billions of images or videos and also provide a large amount of multimedia information to users. Information retrieval in multimedia-rich SNS is a very useful but challenging task. Content-based media retrieval (CBMR) is the process of obtaining the relevant image or video objects for a given query from a collection of information sources. However, CBMR suffers from the curse of dimensionality due to the inherently high-dimensional features of media data. This paper investigates the effectiveness of the kernel trick in CBMR, specifically kernel principal component analysis (KPCA) for dimensionality reduction. KPCA is a nonlinear extension of linear principal component analysis (LPCA) that discovers nonlinear embeddings using the kernel trick. The fundamental idea of KPCA is to map the input data into a high-dimensional feature space through a nonlinear kernel function and then compute the principal components in that mapped space. This paper investigates the potential of KPCA in CBMR for feature extraction and dimensionality reduction. Using the Gaussian kernel in our experiments, we compute the principal components of an image dataset in the transformed space and then use them as new feature dimensions for the image dataset. Moreover, KPCA can be applied to many other domains besides CBMR, wherever LPCA has been used to extract features and a nonlinear extension would be effective. Our results from extensive experiments demonstrate that the potential of KPCA is very encouraging compared with LPCA in CBMR.
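The KPCA computation the abstract describes, building a Gaussian kernel matrix, centering it in feature space, and eigendecomposing, fits in a short sketch. This is the textbook algorithm on training points only (no out-of-sample projection), with illustrative parameter values, not the paper's experimental setup.

```python
import numpy as np

def kpca(X, gamma, n_components):
    """Gaussian-kernel PCA: project training points onto the leading
    kernel principal components."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                       # Gaussian (RBF) kernel matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                # double-center in feature space
    vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]   # keep the largest ones
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.maximum(vals, 0.0))  # projected coordinates
```

The resulting columns serve as the "new feature dimensions" mentioned in the abstract; with a linear kernel this reduces to ordinary (L)PCA.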

Multi-Radial Basis Function SVM Classifier: Design and Analysis

  • Wang, Zheng; Yang, Cheng; Oh, Sung-Kwun; Fu, Zunwei
    • Journal of Electrical Engineering and Technology, Vol. 13, No. 6, pp. 2511-2520, 2018
  • In this study, a Multi-Radial Basis Function Support Vector Machine (Multi-RBF SVM) classifier is introduced based on a composite kernel function. In the proposed classifier, the input space is divided into several local subsets to handle extremely nonlinear classification tasks. Each local subset is expressed as a nonlinear classification subspace and mapped into a feature space by a kernel function. The composite kernel function employs a dual RBF structure. By capturing the nonlinear distribution knowledge of the local subsets, the training data are mapped into a higher-dimensional feature space, and the Multi-SVM classifier is then realized with the composite kernel function through an optimization procedure similar to that of the conventional SVM classifier. The original training data set is partitioned using unsupervised learning methods; in this study, three clustering methods are considered: Affinity Propagation (AP), Hard C-Means (HCM), and the Iterative Self-Organizing Data Analysis Technique Algorithm (ISODATA). Experimental results on benchmark machine learning datasets show that the proposed method improves the classification performance efficiently.
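The dual-RBF composite kernel idea can be sketched with a precomputed kernel in scikit-learn: a weighted sum of two RBF kernels with different widths is itself a valid (positive semidefinite) kernel. This sketch omits the paper's clustering-based partitioning of the input space; the dataset, weights, and widths are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

def dual_rbf(A, B, g1=0.5, g2=5.0, w=0.5):
    """Composite kernel: weighted sum of a wide and a narrow RBF kernel."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return w * np.exp(-g1 * d2) + (1.0 - w) * np.exp(-g2 * d2)

X, y = make_moons(n_samples=200, noise=0.15, random_state=0)
clf = SVC(kernel="precomputed").fit(dual_rbf(X, X), y)   # train on the kernel matrix
train_acc = clf.score(dual_rbf(X, X), y)
```

Combining kernels of different widths lets one SVM capture both coarse and fine structure; the paper goes further by tying each RBF to a cluster-defined local subset of the data.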