• Title/Summary/Keyword: sparse covariance matrices


A comparison study of Bayesian variable selection methods for sparse covariance matrices

  • Kim, Bongsu; Lee, Kyoungjae
    • The Korean Journal of Applied Statistics, v.35 no.2, pp.285-298, 2022
  • Continuous shrinkage priors, as well as spike-and-slab priors, have been widely employed for Bayesian inference about sparse regression coefficient vectors or covariance matrices. Continuous shrinkage priors provide computational advantages over spike-and-slab priors since their model space is substantially smaller, which is especially true in high-dimensional settings. However, variable selection based on continuous shrinkage priors is not straightforward because they do not produce exactly zero values. Although a few variable selection approaches based on continuous shrinkage priors have been proposed, no substantial comparative investigation of their performance has been conducted. In this paper, we compare two variable selection methods: a credible interval method and the sequential 2-means algorithm (Li and Pati, 2017). Various simulation scenarios are used to demonstrate the practical performance of the methods. We conclude the paper by presenting some observations and conjectures based on the simulation findings.
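To make the comparison concrete, here is a minimal NumPy sketch of the two selection rules described above; it is an illustration under assumed interfaces (posterior draws stored as an (n_draws, p) array), not the authors' implementation. The credible-interval rule keeps a coefficient whose marginal interval excludes zero, while the sequential 2-means rule of Li and Pati (2017) is reduced here to a per-draw 2-means split of the coefficient magnitudes followed by a modal signal count; the full algorithm also uses a tolerance parameter that is omitted. All function names are hypothetical.

```python
import numpy as np


def select_credible_interval(samples, level=0.95):
    """Keep coefficients whose marginal credible interval excludes zero.

    samples : (n_draws, p) array of posterior draws of the coefficients.
    Returns a boolean selection mask of length p.
    """
    alpha = 1.0 - level
    lo = np.quantile(samples, alpha / 2, axis=0)
    hi = np.quantile(samples, 1 - alpha / 2, axis=0)
    return (lo > 0) | (hi < 0)  # interval does not contain zero


def _two_means_1d(x, n_iter=50):
    """Plain 2-means clustering of a 1-D array; returns centers and labels."""
    centers = np.array([x.min(), x.max()], dtype=float)
    labels = np.zeros(x.shape[0], dtype=int)
    for _ in range(n_iter):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    return centers, labels


def select_sequential_2means(samples):
    """Simplified sketch of the sequential 2-means idea (Li and Pati, 2017):
    per draw, split |coefficients| into a noise and a signal cluster, record
    the signal count, take the modal count H over draws, and keep the H
    coefficients with the largest posterior median magnitude."""
    counts = []
    for draw in np.abs(samples):
        centers, labels = _two_means_1d(draw)
        counts.append(int(np.sum(labels == centers.argmax())))
    h = int(np.bincount(counts).argmax())  # modal number of signals
    magnitude = np.median(np.abs(samples), axis=0)
    selected = np.zeros(samples.shape[1], dtype=bool)
    selected[np.argsort(magnitude)[::-1][:h]] = True
    return selected
```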

Off-grid direction-of-arrival estimation for wideband noncircular sources

  • Xiaoyu Zhang; Haihong Tao; Ziye Fang; Jian Xie
    • ETRI Journal, v.45 no.3, pp.492-504, 2023
  • Researchers have recently shown an increased interest in estimating the direction-of-arrival (DOA) of wideband noncircular sources, but existing studies have been restricted to subspace-based methods. An off-grid sparse recovery-based algorithm is proposed in this paper to improve the accuracy of existing algorithms in low signal-to-noise ratio situations. The covariance and pseudo-covariance matrices can be jointly represented subject to block sparsity constraints by taking advantage of the joint sparsity between the signal components and the bias. Furthermore, the estimation problem is transformed into a single measurement vector problem by means of a focusing operation, resulting in a significant reduction in computational complexity. The error threshold of the proposed algorithm and the Cramér-Rao bound for wideband noncircular DOA estimation are derived in detail. The algorithm's effectiveness and feasibility are demonstrated by simulation results.
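As a point of reference for the quantities this abstract pairs, the sketch below forms the sample covariance, the pseudo-covariance, and the stacked (extended) covariance of complex array snapshots; for noncircular sources the pseudo-covariance is nonzero, which is the extra structure the proposed algorithm exploits. This is only an illustration of the input statistics under an assumed snapshot layout, not the off-grid block-sparse recovery algorithm itself, and the function names are hypothetical.

```python
import numpy as np


def sample_covariances(x):
    """Sample covariance and pseudo-covariance of array snapshots.

    x : (n_sensors, n_snapshots) complex array, e.g. one frequency bin of
        the wideband data after a DFT across snapshots.
    Returns (R, C) with R = E[x x^H] and C = E[x x^T]; C vanishes for
    circular sources and is informative for noncircular ones.
    """
    n_snapshots = x.shape[1]
    R = x @ x.conj().T / n_snapshots  # covariance matrix
    C = x @ x.T / n_snapshots         # pseudo-covariance matrix
    return R, C


def extended_covariance(x):
    """Covariance of the stacked vector [x; conj(x)], whose blocks contain
    both R and C, so the two matrices can be processed jointly."""
    z = np.vstack([x, x.conj()])
    return z @ z.conj().T / x.shape[1]
```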

A Hill-Sliding Strategy for Initialization of Gaussian Clusters in the Multidimensional Space

  • Park, J. Kyoungyoon; Chen, Yung H.; Simons, Daryl B.; Miller, Lee D.
    • Korean Journal of Remote Sensing, v.1 no.1, pp.5-27, 1985
  • A hill-sliding technique was devised to extract Gaussian clusters from the multivariate probability density estimates of sample data as the first step of iterative unsupervised classification. The underlying assumption in this approach was that each cluster possesses a unimodal normal distribution. The key idea was that the proposed clustering function could distinguish elements of a cluster under formation from the rest of the data in the feature space. Initial clusters were extracted one by one according to the hill-sliding tactic. A dimensionless cluster compactness parameter was proposed as a universal measure of cluster goodness and was used satisfactorily in test runs with Landsat multispectral scanner (MSS) data. The normalized divergence, defined as the cluster divergence divided by the entropy of the entire sample data, was utilized as a general separability measure between clusters. An overall clustering objective function was set forth in terms of the cluster covariance matrices, from which the cluster compactness measure could be deduced. The minimal improvement of the initial data partitioning obtained by eliminating scattered sparse data points was evaluated with this objective function. The hill-sliding clustering technique developed herein is potentially applicable to the decomposition of any multivariate mixture distribution into a number of unimodal distributions when an appropriate distribution function for the data set is employed.
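The abstract defines the normalized divergence as the cluster divergence divided by the entropy of the entire sample. The sketch below illustrates that ratio using the standard symmetric divergence between two Gaussian clusters and a Gaussian approximation of the pooled sample's entropy; the Gaussian approximation and all function names are assumptions made only to keep the sketch self-contained, and the paper's compactness parameter is not reproduced because the abstract does not give its formula.

```python
import numpy as np


def gaussian_divergence(mu1, cov1, mu2, cov2):
    """Symmetric divergence between two Gaussian clusters, a standard
    separability measure in remote-sensing classification."""
    inv1, inv2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    diff = (mu1 - mu2).reshape(-1, 1)
    trace_term = 0.5 * np.trace((cov1 - cov2) @ (inv2 - inv1))
    mean_term = 0.5 * (diff.T @ (inv1 + inv2) @ diff).item()
    return trace_term + mean_term


def gaussian_entropy(cov):
    """Differential entropy of a multivariate Gaussian with covariance cov."""
    d = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * (1.0 + np.log(2.0 * np.pi)) + logdet)


def normalized_divergence(mu1, cov1, mu2, cov2, data):
    """Cluster divergence divided by the entropy of the entire sample
    (data: (n_samples, d)), treating the pooled data as roughly Gaussian."""
    pooled_cov = np.cov(data, rowvar=False)
    return gaussian_divergence(mu1, cov1, mu2, cov2) / gaussian_entropy(pooled_cov)
```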