• Title/Summary/Keyword: k-t SPARSE


A Novel Multiple Kernel Sparse Representation based Classification for Face Recognition

  • Zheng, Hao;Ye, Qiaolin;Jin, Zhong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 8, No. 4
    • /
    • pp.1463-1480
    • /
    • 2014
  • It is well known that sparse coding is effective for feature extraction in face recognition; in particular, the sparse model can be learned in a kernel space to obtain better performance. Some recent algorithms use a single kernel in the sparse model, but this does not make full use of the kernel information. The key issue is how to select suitable kernel weights and combine the selected kernels. In this paper, we propose a novel multiple kernel sparse representation based classification for face recognition (MKSRC), which performs sparse coding and dictionary learning in a multiple-kernel space. Initially, several candidate kernels are combined and the sparse coefficients are computed; the kernel weights are then obtained from the sparse coefficients, and iterating to convergence makes the kernel weights optimal. The experimental results show that our algorithm outperforms other state-of-the-art algorithms and demonstrate the promising performance of the proposed algorithm.
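The iteration described above, sparse coding against a dictionary whose atoms live in a combined kernel space, can be sketched with ISTA, since the objective depends on the data only through Gram matrices. This is a minimal illustration under assumed kernels and fixed weights, not the authors' MKSRC implementation:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gram matrix of the Gaussian (RBF) kernel
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2):
    # Gram matrix of an inhomogeneous polynomial kernel
    return (X @ Y.T + 1.0) ** degree

def combined_kernel(kernels, weights):
    # Convex combination of base kernels -- the "multiple kernel" part
    return sum(w * K for w, K in zip(weights, kernels))

def kernel_sparse_code(K_DD, K_Dx, lam=0.1, n_iter=300):
    # ISTA for min_a 0.5*||phi(x) - Phi(D) a||^2 + lam*||a||_1,
    # which depends on the data only through the Gram blocks
    # K_DD = Phi(D)^T Phi(D) and K_Dx = Phi(D)^T phi(x).
    a = np.zeros(K_DD.shape[0])
    L = np.linalg.norm(K_DD, 2) + 1e-12       # Lipschitz constant of the gradient
    for _ in range(n_iter):
        a -= (K_DD @ a - K_Dx) / L            # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a
```

In the full MKSRC algorithm the kernel weights would themselves be re-estimated from the sparse coefficients and iterated to convergence; here they are held fixed for brevity.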

A Scalable Parallel Preconditioner on the CRAY-T3E for Large Nonsymmetric Sparse Linear Systems

  • Ma, Sangback
    • The KIPS Transactions: Part A
    • /
    • Vol. 8A, No. 3
    • /
    • pp.227-234
    • /
    • 2001
  • In this paper we propose a block-type parallel preconditioner for solving large sparse nonsymmetric linear systems, which we expect to be scalable. It is a Multi-Color Block SOR preconditioner, combined with a direct sparse matrix solver. For the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with Multi-Color ordering. Since most of the time is spent on the diagonal inversion, which is done on each processor, we expect it to be a good scalable preconditioner. We compared it with four other preconditioners: ILU(0) with wavefront ordering, ILU(0) with Multi-Color ordering, SPAI (SParse Approximate Inverse), and the SSOR preconditioner. Experiments were conducted for the finite difference discretizations of two problems with various mesh sizes varying up to $1024{\times}1024$. A CRAY-T3E with 128 nodes was used, with the MPI library for interprocess communications. The results show that Multi-Color Block SOR is scalable and gives the best performance.
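The parallelism argument rests on the colouring: under a red-black (2-colour) ordering of the 5-point Laplacian stencil, no grid point has a neighbour of its own colour, so each half-sweep can update a whole colour in parallel. A serial sketch of that 2-colour SOR sweep (the paper's Multi-Color Block SOR additionally blocks the unknowns and applies a direct solver to the diagonal blocks, which this toy omits):

```python
import numpy as np

def redblack_sor(b, omega=1.5, n_sweeps=300):
    # Red-black SOR for the 2-D 5-point Laplacian, 4u - (neighbours) = b,
    # on an n x n grid with zero Dirichlet boundary (stored as a zero pad).
    # Points of one colour have no same-colour neighbours, so each
    # half-sweep is embarrassingly parallel -- the Multi-Color property.
    n = b.shape[0]
    u = np.zeros((n + 2, n + 2))
    for _ in range(n_sweeps):
        for colour in (0, 1):
            for i in range(1, n + 1):
                for j in range(1, n + 1):
                    if (i + j) % 2 != colour:
                        continue
                    gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                                 + b[i-1, j-1])          # Gauss-Seidel value
                    u[i, j] += omega * (gs - u[i, j])    # SOR relaxation
    return u[1:-1, 1:-1]
```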


A New Load Balancing Algorithm Combining Data Feature Extraction by Sparse Autoencoder with ProGReGA-KF

  • 김차영;박정민;김혜영
    • Journal of Korea Game Society
    • /
    • Vol. 17, No. 5
    • /
    • pp.103-112
    • /
    • 2017
  • In massively multiplayer online games (MMOGs), the spread of IoT keeps adding load to the servers, and all of the data is turning into big data. In this paper, we combine the Sparse Autoencoder, one of the most widely used deep learning techniques, with the well-known load balancing algorithm ProGReGA-KF. We compared the proposed algorithm with the existing ProGReGA-KF in terms of movement stability, and showed through simulation that the proposed algorithm is more stable and more scalable in a big-data environment.
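As a rough illustration of the feature-extraction half, here is a minimal single-layer autoencoder with an L1 sparsity penalty on the hidden code, trained by plain gradient descent. It is a sketch under assumed hyperparameters, not the network or the ProGReGA-KF coupling used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_sparse_autoencoder(X, n_hidden=8, lam=1e-4, lr=0.5, epochs=2000):
    # One hidden layer: sigmoid code H, linear decoder, L1 penalty on H.
    #   loss = (1/2m) * ||H @ W2 - X||_F^2 + (lam/m) * sum|H|
    m, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, d))
    for _ in range(epochs):
        H = 1.0 / (1.0 + np.exp(-X @ W1))      # sparse hidden code
        E = H @ W2 - X                          # reconstruction error
        dH = E @ W2.T / m + lam * np.sign(H) / m
        dZ = dH * H * (1.0 - H)                 # back through the sigmoid
        W2 -= lr * (H.T @ E / m)
        W1 -= lr * (X.T @ dZ)
    return W1, W2
```

The learned hidden code `H` would then be the compact data representation handed to the load balancer.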

A Robust Preconditioner on the CRAY-T3E for Large Nonsymmetric Sparse Linear Systems

  • Ma, Sangback;Cho, Jaeyoung
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • Vol. 5, No. 1
    • /
    • pp.85-100
    • /
    • 2001
  • In this paper we propose a block-type parallel preconditioner for solving large sparse nonsymmetric linear systems, which we expect to be scalable. It is a Multi-Color Block SOR preconditioner, combined with a direct sparse matrix solver. For the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with Multi-Color ordering. Since most of the time is spent on the diagonal inversion, which is done on each processor, we expect it to be a good scalable preconditioner. Finally, due to the blocking effect, it will be effective for ill-conditioned problems. We compared it with four other preconditioners: ILU(0) with wavefront ordering, ILU(0) with Multi-Color ordering, SPAI (SParse Approximate Inverse), and the SSOR preconditioner. Experiments were conducted for the finite difference discretizations of two problems with various mesh sizes varying up to 1024 x 1024, and for an ill-conditioned matrix from the shell problem in the Harwell-Boeing collection. A CRAY-T3E with 128 nodes was used, with the MPI library for interprocess communications. The results show that Multi-Color Block SOR and ILU(0) with Multi-Color ordering give the best performances for the finite difference matrices, and for the shell problem only Multi-Color Block SOR converges.


GOODNESS-OF-FIT TEST USING LOCAL MAXIMUM LIKELIHOOD POLYNOMIAL ESTIMATOR FOR SPARSE MULTINOMIAL DATA

  • Baek, Jang-Sun
    • Journal of the Korean Statistical Society
    • /
    • Vol. 33, No. 3
    • /
    • pp.313-321
    • /
    • 2004
  • We consider the problem of testing cell probabilities in sparse multinomial data. Aerts et al. (2000) presented $T=\sum_{i=1}^{k}[p_i^*-E(p_i^*)]^2$ as a test statistic with the local least squares polynomial estimator $p_i^*$, and derived its asymptotic distribution. The local least squares estimator may produce negative estimates for cell probabilities. The local maximum likelihood polynomial estimator $\hat{p}_i$, however, guarantees positive estimates for cell probabilities and has the same asymptotic performance as the local least squares estimator (Baek and Park, 2003). When the cell probabilities differ considerably in size, giving the squared deviation between the estimator and the hypothetical probability the same weight at every cell, as their test statistic does, is not a proper measure of the total goodness-of-fit. We consider a Pearson-type goodness-of-fit test statistic, $T_1=\sum_{i=1}^{k}[\hat{p}_i-E(\hat{p}_i)]^2/p_i$, instead, and show that it follows an asymptotic normal distribution. We also investigate the asymptotic normality of $T_2=\sum_{i=1}^{k}[\hat{p}_i-E(\hat{p}_i)]^2/p_i$ when the minimum expected cell frequency is very small.
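The weighting argument is the same one behind the classical Pearson statistic: dividing each cell's squared deviation by $p_i$ stops large cells from drowning out small ones. A toy version using raw sample proportions (the paper uses a local maximum likelihood polynomial estimator instead, so this only illustrates the weighting):

```python
import numpy as np

def pearson_type_statistic(counts, p0):
    # Pearson-type goodness-of-fit statistic: n * sum_i (p_hat_i - p0_i)^2 / p0_i.
    # The 1/p0_i weight makes deviations in small-probability cells count
    # proportionally, which is the point raised in the abstract.
    n = counts.sum()
    p_hat = counts / n
    return n * np.sum((p_hat - p0) ** 2 / p0)
```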

Sparse Signal Recovery via Tree Search Matching Pursuit

  • Lee, Jaeseok;Choi, Jun Won;Shim, Byonghyo
    • Journal of Communications and Networks
    • /
    • Vol. 18, No. 5
    • /
    • pp.699-712
    • /
    • 2016
  • Recently, greedy algorithms have received much attention as a cost-effective means to reconstruct sparse signals from compressed measurements. Much of the previous work has focused on the investigation of a single candidate to identify the support (the index set of nonzero elements) of a sparse signal. A well-known drawback of the greedy approach is that the chosen candidate is often not the optimal solution, due to the myopic decision made in each iteration. In this paper, we propose a tree search based sparse signal recovery algorithm referred to as tree search matching pursuit (TSMP). The two key ingredients of the proposed TSMP algorithm for controlling the computational complexity are pre-selection, which restricts the columns of the sensing matrix to be investigated, and tree pruning, which eliminates unpromising paths from the search tree. In numerical simulations of Internet of Things (IoT) environments, it is shown that TSMP outperforms conventional schemes by a large margin.
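A single-path sketch of greedy recovery with the pre-selection idea: at each step only the columns most correlated with the residual are candidates. Full TSMP would branch over several candidates and prune the resulting tree; following only the best path, as below, reduces it to OMP with a restricted candidate set:

```python
import numpy as np

def omp_preselect(A, y, k, n_candidates=None):
    # Greedy sparse recovery with pre-selection. A is the m x n sensing
    # matrix, y the measurements, k the target sparsity. n_candidates
    # restricts attention to the columns most correlated with the residual.
    m, n = A.shape
    support, residual = [], y.copy()
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[support] = -np.inf                  # never re-pick a column
        if n_candidates:
            cand = np.argsort(corr)[-n_candidates:]  # pre-selection
        else:
            cand = np.arange(n)
        support.append(cand[np.argmax(corr[cand])])
        # re-fit the signal on the current support (least squares)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x, sorted(support)
```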

TeT: Distributed Tera-Scale Tensor Generator

  • 전병수;이정우;강유
    • Journal of KIISE
    • /
    • Vol. 43, No. 8
    • /
    • pp.910-918
    • /
    • 2016
  • Many kinds of data can be represented as tensors, i.e., multi-dimensional arrays; an example is social network data of the form (user, user, time). In analysing such multi-dimensional data, a tensor generator has many applications, including simulation, modelling and understanding of multi-dimensional data, and sampling/extrapolation. Existing tensor generators, however, cannot produce tensors that are sparse and follow a power law, as real-world tensors do. They also limit the size of the tensors they can handle, and further analysis on a distributed system incurs the extra cost of uploading the tensor to that system. In this paper we propose TeT, a distributed tera-scale tensor generator, to solve these problems. TeT can generate sparse random tensors, as well as sparse, power-law Recursive-MATrix (R-MAT) and Kronecker tensors, with no size limit. A tensor generated by TeT can also undergo further tensor analysis on the same distributed system. Thanks to its efficient design, TeT shows near-linear machine scalability.
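The Kronecker construction mentioned above grows a small seed tensor by repeated Kronecker products, which is what produces self-similar, power-law structure. A dense toy version follows; a tera-scale generator like TeT would sample nonzero positions from the product distribution rather than materialise the tensor:

```python
import numpy as np

def kron3(A, B):
    # Kronecker product of two 3-way tensors:
    # out[i*b1+p, j*b2+q, k*b3+r] = A[i, j, k] * B[p, q, r]
    a1, a2, a3 = A.shape
    b1, b2, b3 = B.shape
    out = np.einsum('ijk,pqr->ipjqkr', A, B)
    return out.reshape(a1 * b1, a2 * b2, a3 * b3)

def kronecker_tensor(seed, power):
    # power-th Kronecker power of a small seed tensor
    T = seed
    for _ in range(power - 1):
        T = kron3(T, seed)
    return T
```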

QUANTITATIVE WEIGHTED BOUNDS FOR THE VECTOR-VALUED SINGULAR INTEGRAL OPERATORS WITH NONSMOOTH KERNELS

  • Hu, Guoen
    • Bulletin of the Korean Mathematical Society
    • /
    • Vol. 55, No. 6
    • /
    • pp.1791-1809
    • /
    • 2018
  • Let $T$ be the singular integral operator with nonsmooth kernel which was introduced by Duong and McIntosh, and let $T_q$ $(q\in(1,\infty))$ be the vector-valued operator defined by $T_qf(x)=\big(\sum_{k=1}^{\infty}|Tf_k(x)|^q\big)^{1/q}$. In this paper, by proving a certain weak type endpoint estimate of $L\log L$ type for the grand maximal operator of $T$, the author establishes some quantitative weighted bounds for $T_q$ and the corresponding vector-valued maximal singular integral operator.

A Nonparametric Goodness-of-Fit Test for Sparse Multinomial Data

  • Baek, Jang-Sun
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 14, No. 2
    • /
    • pp.303-311
    • /
    • 2003
  • We consider the problem of testing cell probabilities in sparse multinomial data. Aerts et al. (2000) presented $T_1=\sum_{i=1}^{k}(\hat{p}_i-p_i)^2$ as a test statistic with the local polynomial estimator $\hat{p}_i$, and showed its asymptotic distribution. When the cell probabilities differ considerably in size, giving the squared deviation between the estimator and the hypothetical probability the same weight at every cell, as their test statistic does, is not a proper measure of the total goodness-of-fit. We consider a Pearson-type goodness-of-fit test statistic, $T=\sum_{i=1}^{k}(\hat{p}_i-p_i)^2/p_i$, instead, and show that it follows an asymptotic normal distribution.


A PRECONDITIONER FOR THE NORMAL EQUATIONS

  • Salkuyeh, Davod Khojasteh
    • Journal of Applied Mathematics & Informatics
    • /
    • Vol. 28, No. 3-4
    • /
    • pp.687-696
    • /
    • 2010
  • In this paper, an algorithm for computing the sparse approximate inverse factor of the matrix $A^{T}A$, where $A$ is an $m{\times}n$ matrix with $m{\geq}n$ and rank(A) = n, is proposed. The computation of the inverse factor is done without forming the matrix $A^{T}A$ explicitly. The computed sparse approximate inverse factor is applied as a preconditioner for solving the normal equations in conjunction with the CGNR algorithm. Some numerical experiments on test matrices are presented to show the efficiency of the method. A comparison with some available methods is also included.
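CGNR itself only needs products with $A$ and $A^T$, so the normal-equations matrix is never formed, the same property the preconditioner construction above preserves. A sketch with a Jacobi (diagonal) preconditioner standing in for the paper's sparse approximate inverse factor:

```python
import numpy as np

def cgnr(A, b, M_inv=None, tol=1e-10, maxiter=500):
    # Preconditioned CGNR: conjugate gradients applied to A^T A x = A^T b,
    # using only matrix-vector products with A and A^T. M_inv approximates
    # (A^T A)^{-1}; the default is a Jacobi (diagonal) stand-in.
    m, n = A.shape
    if M_inv is None:
        d = (A * A).sum(axis=0)               # diagonal of A^T A
        M_inv = lambda v: v / d
    x = np.zeros(n)
    r = b - A @ x
    s = A.T @ r                                # normal-equation residual
    z = M_inv(s)
    p = z.copy()
    for _ in range(maxiter):
        w = A @ p
        alpha = (s @ z) / (w @ w)
        x += alpha * p
        r -= alpha * w
        s_new = A.T @ r
        if np.linalg.norm(s_new) < tol:
            break
        z_new = M_inv(s_new)
        beta = (s_new @ z_new) / (s @ z)
        p = z_new + beta * p
        s, z = s_new, z_new
    return x
```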