• Title/Summary/Keyword: k-t SPARSE

A Novel Multiple Kernel Sparse Representation based Classification for Face Recognition

  • Zheng, Hao;Ye, Qiaolin;Jin, Zhong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.4
    • /
    • pp.1463-1480
    • /
    • 2014
  • It is well known that sparse coding is effective for feature extraction in face recognition; in particular, the sparse model can be learned in a kernel space to obtain better performance. Some recent algorithms use a single kernel in the sparse model, but this does not make full use of the kernel information. The key issues are how to select suitable kernel weights and how to combine the selected kernels. In this paper, we propose a novel multiple kernel sparse representation based classification for face recognition (MKSRC), which performs sparse coding and dictionary learning in the multiple kernel space. First, several candidate kernels are combined and the sparse coefficients are computed; the kernel weights are then obtained from the sparse coefficients and iterated to convergence, which makes them optimal. The experimental results show that our algorithm outperforms other state-of-the-art algorithms, demonstrating the promising performance of the proposed approach.
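
A rough illustration of the alternation the abstract describes, as a minimal NumPy sketch; the greedy kernel-space coder, the exponential weight update, and all function names are assumptions for illustration, not the authors' exact MKSRC:

```python
import numpy as np

# Sketch: sparse coding under a weighted combination of precomputed kernels,
# alternating with a kernel-weight update (illustrative, not the paper's
# exact algorithm).

def kernel_sparse_code(K, k, sparsity=10, ridge=1e-8):
    """OMP-style sparse coding in kernel space: greedily pick dictionary
    atoms, solving K_SS a_S = k_S on the current support S."""
    n = K.shape[0]
    support, a = [], np.zeros(n)
    corr = k.copy()                              # correlations with residual
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(corr)))
        if j in support:
            break
        support.append(j)
        Kss = K[np.ix_(support, support)]
        a_s = np.linalg.solve(Kss + ridge * np.eye(len(support)), k[support])
        corr = k - K[:, support] @ a_s           # update residual correlations
        a[:] = 0.0
        a[support] = a_s
    return a

def mksrc_weights(K_list, k_list, kxx_list, n_iter=5):
    """Alternate sparse coding under the combined kernel with a softmax-style
    re-weighting that favors kernels with small reconstruction error."""
    M = len(K_list)
    w = np.full(M, 1.0 / M)                      # uniform initial weights
    for _ in range(n_iter):
        K = sum(wm * Km for wm, Km in zip(w, K_list))
        k = sum(wm * km for wm, km in zip(w, k_list))
        a = kernel_sparse_code(K, k)
        # Reconstruction error of the sparse code under each single kernel.
        err = np.array([kxx - 2 * a @ km + a @ Km @ a
                        for Km, km, kxx in zip(K_list, k_list, kxx_list)])
        w = np.exp(-err)
        w /= w.sum()
    return w
```

Given the final weights, classification would follow the usual sparse-representation rule: keep the coefficients of each class in turn and pick the class with the smallest reconstruction residual.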

A Scalable Parallel Preconditioner on the CRAY-T3E for Large Nonsymmetric Sparse Linear Systems (대형비대칭 이산행렬의 CRAY-T3E에서의 해법을 위한 확장가능한 병렬준비행렬)

  • Ma, Sang-Baek
    • The KIPS Transactions:PartA
    • /
    • v.8A no.3
    • /
    • pp.227-234
    • /
    • 2001
  • In this paper we propose a block-type parallel preconditioner for solving large sparse nonsymmetric linear systems, which we expect to be scalable. It is a Multi-Color Block SOR preconditioner combined with a direct sparse matrix solver. For the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with Multi-Color ordering. Since most of the time is spent on the diagonal inversion, which is done on each processor, we expect it to be a good scalable preconditioner. We compared it with four other preconditioners: ILU(0) with wavefront ordering, ILU(0) with Multi-Color ordering, SPAI (SParse Approximate Inverse), and the SSOR preconditioner. Experiments were conducted for the finite difference discretizations of two problems with various mesh sizes up to $1024{\times}1024$. A CRAY-T3E with 128 nodes was used, and the MPI library was used for interprocess communication. The results show that Multi-Color Block SOR is scalable and gives the best performance.
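
For intuition about why multi-color orderings parallelize, here is a small red-black (two-color) SOR sketch for the 2D Laplacian; within each half-sweep every update depends only on points of the other color, so the updates can all run concurrently. The code and its parameters are illustrative:

```python
import numpy as np

# Red-black SOR for the 5-point Laplacian on a unit square (zero Dirichlet
# boundary, h = 1). Each colored half-sweep is data-parallel, which is the
# property multi-color orderings exploit across processors.

def red_black_sor(b, omega=1.5, n_sweeps=200):
    n = b.shape[0]
    u = np.zeros((n + 2, n + 2))                 # interior padded by boundary
    idx = np.arange(1, n + 1)
    I, J = np.meshgrid(idx, idx, indexing="ij")
    for _ in range(n_sweeps):
        for parity in (0, 1):                    # red points, then black
            mask = (I + J) % 2 == parity
            nbrs = (u[:-2, 1:-1] + u[2:, 1:-1]   # north + south
                    + u[1:-1, :-2] + u[1:-1, 2:])  # west + east
            gs = (b + nbrs) / 4.0                # Gauss-Seidel value
            interior = u[1:-1, 1:-1]             # view into u
            interior[mask] = (1 - omega) * interior[mask] + omega * gs[mask]
    return u[1:-1, 1:-1]

u = red_black_sor(np.ones((64, 64)))             # sample run on a 64x64 grid
print(float(u.max()))
```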

Combining data representation by Sparse Autoencoder and the well-known load balancing algorithm, ProGReGA-KF (Sparse Autoencoder의 데이터 특징 추출과 ProGReGA-KF를 결합한 새로운 부하 분산 알고리즘)

  • Kim, Chayoung;Park, Jung-min;Kim, Hye-young
    • Journal of Korea Game Society
    • /
    • v.17 no.5
    • /
    • pp.103-112
    • /
    • 2017
  • In recent years, the expansion and advance of the Internet of Things (IoT) in distributed MMOG (massively multiplayer online game) architectures have resulted in massive growth of data in terms of server workloads. We propose combining a Sparse Autoencoder with ProGReGA, a well-known load-balancing platform for MMOGs. In the Sparse Autoencoder step, a feature-enhancing representation of the data is extracted. In the load-balancing step, the graceful degradation of ProGReGA can exploit the most relevant and least redundant features of that representation. We find that the proposed algorithm is more stable.
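
A compact sketch of the feature-extraction step, assuming a one-hidden-layer sparse autoencoder with a KL sparsity penalty; the architecture, hyperparameters, and names are illustrative rather than the model used in the paper:

```python
import numpy as np

# One-hidden-layer sparse autoencoder trained by plain gradient descent.
# The KL-divergence penalty pushes each hidden unit's mean activation
# toward a small target rho, producing a sparse code H.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sparse_autoencoder(X, n_hidden=32, rho=0.05, beta=3.0,
                             lr=0.1, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                 # sparse hidden code
        Xhat = H @ W2 + b2                       # linear reconstruction
        rho_hat = H.mean(axis=0)                 # per-unit mean activation
        # Gradients of (1/2n)||Xhat - X||^2 + beta * sum_j KL(rho || rho_hat_j)
        dXhat = (Xhat - X) / n
        dH = dXhat @ W2.T
        dH += beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat)) / n
        dZ1 = dH * H * (1.0 - H)                 # sigmoid derivative
        W2 -= lr * (H.T @ dXhat); b2 -= lr * dXhat.sum(axis=0)
        W1 -= lr * (X.T @ dZ1);   b1 -= lr * dZ1.sum(axis=0)
    return W1, b1

X = np.random.default_rng(1).random((200, 16))   # stand-in workload features
W1, b1 = train_sparse_autoencoder(X)
codes = sigmoid(X @ W1 + b1)                     # representation for balancing
```

The `codes` matrix would then feed the load balancer in place of the raw measurements.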

A Robust Preconditioner on the CRAY-T3E for Large Nonsymmetric Sparse Linear Systems

  • Ma, Sangback;Cho, Jaeyoung
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.5 no.1
    • /
    • pp.85-100
    • /
    • 2001
  • In this paper we propose a block-type parallel preconditioner for solving large sparse nonsymmetric linear systems, which we expect to be scalable. It is a Multi-Color Block SOR preconditioner combined with a direct sparse matrix solver. For the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with Multi-Color ordering. Since most of the time is spent on the diagonal inversion, which is done on each processor, we expect it to be a good scalable preconditioner. Finally, due to the blocking effect, it will be effective for ill-conditioned problems. We compared it with four other preconditioners: ILU(0) with wavefront ordering, ILU(0) with Multi-Color ordering, SPAI (SParse Approximate Inverse), and the SSOR preconditioner. Experiments were conducted for the finite difference discretizations of two problems with various mesh sizes up to $1024{\times}1024$, and for an ill-conditioned matrix from the shell problem in the Harwell-Boeing collection. A CRAY-T3E with 128 nodes was used, and the MPI library was used for interprocess communication. The results show that Multi-Color Block SOR and ILU(0) with Multi-Color ordering give the best performance for the finite difference matrices, and that for the shell problem only Multi-Color Block SOR converges.
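
In the same spirit as the paper's comparison, a small SciPy sketch contrasting unpreconditioned and ILU-preconditioned GMRES on a nonsymmetric sparse system; SciPy's `spilu` is a drop-tolerance ILU used here as a stand-in for ILU(0), and the test matrix is illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Nonsymmetric test matrix: 2D Laplacian plus a one-sided convection term.
n = 64
main = 4.0 * np.ones(n * n)
off = -np.ones(n * n - 1)
off[np.arange(1, n * n) % n == 0] = 0.0          # break row wrap-around
lap = sp.diags([main, off, off, -np.ones(n * n - n), -np.ones(n * n - n)],
               [0, -1, 1, -n, n])
coff = np.ones(n * n - 1)
coff[np.arange(1, n * n) % n == 0] = 0.0
conv = 0.3 * sp.diags([coff, -coff], [1, -1])    # makes the matrix nonsymmetric
A = (lap + conv).tocsc()
b = np.ones(n * n)

ilu = spla.spilu(A, drop_tol=0.0, fill_factor=1.0)  # fill limited to ~nnz(A)
M = spla.LinearOperator(A.shape, ilu.solve)

for name, prec in (("none", None), ("ILU", M)):
    iters = [0]
    x, info = spla.gmres(A, b, M=prec,
                         callback=lambda r: iters.__setitem__(0, iters[0] + 1))
    print(f"{name:>4}: converged={info == 0}, iterations={iters[0]}")
```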

GOODNESS-OF-FIT TEST USING LOCAL MAXIMUM LIKELIHOOD POLYNOMIAL ESTIMATOR FOR SPARSE MULTINOMIAL DATA

  • Baek, Jang-Sun
    • Journal of the Korean Statistical Society
    • /
    • v.33 no.3
    • /
    • pp.313-321
    • /
    • 2004
  • We consider the problem of testing cell probabilities in sparse multinomial data. Aerts et al. (2000) presented $T=\sum_{i=1}^{k}[p_i^*-E(p_i^*)]^2$ as a test statistic with the local least squares polynomial estimator $p_i^*$, and derived its asymptotic distribution. The local least squares estimator may produce negative estimates for cell probabilities. The local maximum likelihood polynomial estimator $\hat{p}_i$, however, guarantees positive estimates for cell probabilities and has the same asymptotic performance as the local least squares estimator (Baek and Park, 2003). When there are cell probabilities of considerably different sizes, giving the difference between the estimator and the hypothetical probability the same weight at each cell, as in their test statistic, is not a proper way to measure the total goodness-of-fit. We instead consider a Pearson-type goodness-of-fit test statistic, $T_1=\sum_{i=1}^{k}[p_i^*-E(p_i^*)]^2/p_i$, and show that it follows an asymptotic normal distribution. We also investigate the asymptotic normality of $T_2=\sum_{i=1}^{k}[\hat{p}_i-E(\hat{p}_i)]^2/p_i$, where the minimum expected cell frequency is very small.
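
To make the form of the statistic concrete, a small simulation sketch; the Gaussian-kernel smoother below stands in for the local polynomial estimators of the abstract, and the bandwidth and all names are assumptions:

```python
import numpy as np

# Pearson-type statistic sum_i (p_hat_i - p_i)^2 / p_i for sparse multinomial
# data, with a kernel-smoothed estimate p_hat standing in for the local
# polynomial estimator.

def smoothed_probs(counts, bandwidth=2.0):
    """Gaussian-kernel smoothing of the raw relative frequencies."""
    k = len(counts)
    x = np.arange(k)
    raw = counts / counts.sum()
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    p_hat = (W @ raw) / W.sum(axis=1)
    return p_hat / p_hat.sum()                   # renormalize to a distribution

def pearson_type_statistic(counts, p0, bandwidth=2.0):
    p_hat = smoothed_probs(counts, bandwidth)
    return float(np.sum((p_hat - p0) ** 2 / p0))

rng = np.random.default_rng(0)
k, n_obs = 100, 150                              # many cells, few observations
p0 = np.full(k, 1.0 / k)                         # hypothetical probabilities
counts = rng.multinomial(n_obs, p0)
print(pearson_type_statistic(counts, p0))
```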

Sparse Signal Recovery via Tree Search Matching Pursuit

  • Lee, Jaeseok;Choi, Jun Won;Shim, Byonghyo
    • Journal of Communications and Networks
    • /
    • v.18 no.5
    • /
    • pp.699-712
    • /
    • 2016
  • Recently, greedy algorithms have received much attention as a cost-effective means of reconstructing sparse signals from compressed measurements. Much of the previous work has focused on the investigation of a single candidate to identify the support (the index set of the nonzero elements) of the sparse signal. A well-known drawback of the greedy approach is that the chosen candidate is often not the optimal solution, due to the myopic decision made in each iteration. In this paper, we propose a tree search based sparse signal recovery algorithm referred to as tree search matching pursuit (TSMP). The two key ingredients of the proposed TSMP algorithm for controlling the computational complexity are pre-selection, which restricts the columns of the sensing matrix to be investigated, and tree pruning, which eliminates unpromising paths from the search tree. In numerical simulations of Internet of Things (IoT) environments, TSMP is shown to outperform conventional schemes by a large margin.
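
A hedged sketch of the two ingredients (pre-selection and pruning) in a beam-search variant of matching pursuit; this illustrates the idea, not the authors' exact TSMP, and all names and parameters are assumptions:

```python
import numpy as np

# Beam search over supports: at each level, each surviving support is
# extended only by the `n_preselect` columns most correlated with its
# residual (pre-selection), and only the `beam` best supports survive
# (tree pruning). With beam=1 this reduces to plain OMP.

def ls_fit(A, y, support):
    """Least-squares coefficients and residual norm on a given support."""
    As = A[:, support]
    x_s, *_ = np.linalg.lstsq(As, y, rcond=None)
    return x_s, np.linalg.norm(y - As @ x_s)

def tree_search_mp(A, y, sparsity, beam=4, n_preselect=8):
    candidates = [((), np.linalg.norm(y))]       # (support, residual norm)
    for _ in range(sparsity):
        expanded = {}
        for support, _ in candidates:
            if support:
                x_s, _ = ls_fit(A, y, list(support))
                r = y - A[:, list(support)] @ x_s
            else:
                r = y
            corr = np.abs(A.T @ r)
            if support:
                corr[list(support)] = -np.inf    # don't re-pick chosen atoms
            for j in np.argsort(corr)[::-1][:n_preselect]:
                s = tuple(sorted(support + (int(j),)))
                if s not in expanded:
                    expanded[s] = ls_fit(A, y, list(s))[1]
        candidates = sorted(expanded.items(), key=lambda t: t[1])[:beam]
    best = list(candidates[0][0])
    x = np.zeros(A.shape[1])
    x[best] = ls_fit(A, y, best)[0]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(100); x_true[[3, 17, 42]] = [1.0, -0.8, 0.6]
x_hat = tree_search_mp(A, A @ x_true, sparsity=3)
print(np.nonzero(x_hat)[0])                      # ideally [3, 17, 42]
```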

TeT: Distributed Tera-Scale Tensor Generator (분산 테라스케일 텐서 생성기)

  • Jeon, ByungSoo;Lee, JungWoo;Kang, U
    • Journal of KIISE
    • /
    • v.43 no.8
    • /
    • pp.910-918
    • /
    • 2016
  • A tensor is a multi-dimensional array that represents data such as (user, user, time) triples in a social network system. A tensor generator is an important tool for multi-dimensional data mining research, with various applications including simulation, multi-dimensional data modeling/understanding, and sampling/extrapolation. However, existing tensor generators cannot generate sparse tensors that, like real-world tensors, obey a power law. In addition, they have limitations such as the tensor sizes they can process and the additional time required to upload a generated tensor to a distributed system for further analysis. In this study, we propose TeT, a distributed tera-scale tensor generator that solves these problems. TeT generates sparse random tensors as well as sparse R-MAT and Kronecker tensors without any limitation on tensor size. In addition, a TeT-generated tensor is immediately ready for further tensor analysis on the same distributed system. The careful design of TeT facilitates nearly linear scalability in the number of machines.
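
For a flavor of how power-law sparse tensors can be generated, a minimal single-machine R-MAT-style sketch (the octant probabilities and names are illustrative; TeT itself is a distributed implementation):

```python
import numpy as np

# Each nonzero picks its (i, j, k) coordinates by recursively descending
# into one of 2^3 = 8 octants with skewed probabilities; the skew yields
# power-law-like mode marginals, as in real-world tensors.

def rmat_tensor(scale, nnz, probs=None, seed=0):
    """Return `nnz` distinct coordinates in a (2^scale)^3 tensor."""
    rng = np.random.default_rng(seed)
    if probs is None:
        probs = np.array([0.50, 0.12, 0.10, 0.08, 0.07, 0.06, 0.05, 0.02])
    coords = set()
    while len(coords) < nnz:
        i = j = k = 0
        for _ in range(scale):                   # one bit per mode per level
            octant = rng.choice(8, p=probs)
            i = 2 * i + ((octant >> 2) & 1)
            j = 2 * j + ((octant >> 1) & 1)
            k = 2 * k + (octant & 1)
        coords.add((i, j, k))
    return sorted(coords)

print(rmat_tensor(scale=4, nnz=10))              # 10 nonzeros in a 16^3 tensor
```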

QUANTITATIVE WEIGHTED BOUNDS FOR THE VECTOR-VALUED SINGULAR INTEGRAL OPERATORS WITH NONSMOOTH KERNELS

  • Hu, Guoen
    • Bulletin of the Korean Mathematical Society
    • /
    • v.55 no.6
    • /
    • pp.1791-1809
    • /
    • 2018
  • Let $T$ be the singular integral operator with nonsmooth kernel which was introduced by Duong and McIntosh, and let $T_q$ $(q\in(1,\infty))$ be the vector-valued operator defined by $T_qf(x)=\big(\sum_{k=1}^{\infty}|Tf_k(x)|^q\big)^{1/q}$. In this paper, by proving a certain weak type endpoint estimate of $L\log L$ type for the grand maximal operator of $T$, the author establishes some quantitative weighted bounds for $T_q$ and the corresponding vector-valued maximal singular integral operator.

A Nonparametric Goodness-of-Fit Test for Sparse Multinomial Data

  • Baek, Jang-Sun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.14 no.2
    • /
    • pp.303-311
    • /
    • 2003
  • We consider the problem of testing cell probabilities in sparse multinomial data. Aerts et al. (2000) presented $T_1=\sum_{i=1}^k(\hat{p}_i-p_i)^2$ as a test statistic with the local polynomial estimator $\hat{p}_i$, and showed its asymptotic distribution. When there are cell probabilities of considerably different sizes, giving the difference between the estimator and the hypothetical probability the same weight at each cell, as in their test statistic, is not a proper way to measure the total goodness-of-fit. We instead consider a Pearson-type goodness-of-fit test statistic, $T=\sum_{i=1}^k(\hat{p}_i-p_i)^2/p_i$, and show that it follows an asymptotic normal distribution.

A PRECONDITIONER FOR THE NORMAL EQUATIONS

  • Salkuyeh, Davod Khojasteh
    • Journal of applied mathematics & informatics
    • /
    • v.28 no.3_4
    • /
    • pp.687-696
    • /
    • 2010
  • In this paper, an algorithm for computing a sparse approximate inverse factor of the matrix $A^TA$, where $A$ is an $m\times n$ matrix with $m\geq n$ and $\mathrm{rank}(A)=n$, is proposed. The computation of the inverse factor is done without forming the matrix $A^TA$ explicitly. The computed sparse approximate inverse factor is applied as a preconditioner for solving the normal equations in conjunction with the CGNR algorithm. Some numerical experiments on test matrices are presented to show the efficiency of the method. A comparison with some available methods is also included.
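
A minimal sketch of a preconditioned normal-equations solve, with a simple diagonal scaling standing in for the paper's sparse approximate inverse factor; the test matrix, the preconditioner choice, and all names are assumptions:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# CG on the normal equations A^T A x = A^T b, applying A and A^T through a
# LinearOperator so that A^T A is never formed explicitly (the point
# stressed in the abstract).

rng = np.random.default_rng(0)
m, n = 500, 200
A = (sp.random(m, n, density=0.02, random_state=0)
     + sp.vstack([sp.identity(n), sp.csr_matrix((m - n, n))])).tocsr()
b = rng.standard_normal(m)

normal_op = spla.LinearOperator((n, n), matvec=lambda x: A.T @ (A @ x))
rhs = A.T @ b

# Jacobi stand-in preconditioner: diag(A^T A)_j = ||a_j||^2, the squared
# column norms, computable column-by-column without forming A^T A.
d = np.asarray(A.multiply(A).sum(axis=0)).ravel()
M = spla.LinearOperator((n, n), matvec=lambda x: x / d)

x, info = spla.cg(normal_op, rhs, M=M)
print("converged:", info == 0,
      "normal-equation residual:", np.linalg.norm(normal_op @ x - rhs))
```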