• Title/Summary/Keyword: sparse prior

Search Results: 39

Visual Object Manipulation Based on Exploration Guided by Demonstration (시연에 의해 유도된 탐험을 통한 시각 기반의 물체 조작)

  • Kim, Doo-Jun;Jo, HyunJun;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.1
    • /
    • pp.40-47
    • /
    • 2022
  • A reward function suitable for a task is required to manipulate objects through reinforcement learning. However, designing such a reward function is difficult when ample information about the objects cannot be obtained. In this study, a demonstration-based object manipulation algorithm called stochastic exploration guided by demonstration (SEGD) is proposed to solve this reward-design problem. SEGD is a reinforcement learning algorithm in which a sparse reward explorer (SRE) and an interpolated policy using demonstration (IPD) are added to soft actor-critic (SAC). SRE supports the training of the SAC critic by collecting prior data, and IPD limits the exploration space by keeping SEGD's actions similar to the expert's. With these two components, SEGD can learn from the sparse reward of the task alone, without a hand-designed reward function. To verify SEGD, experiments were conducted on three tasks, in which it achieved success rates above 96.5%.
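The IPD component described above can be sketched as a simple interpolation between the policy's action and the demonstrated action. This is an illustrative reading of the abstract only, not the paper's code; the linear annealing schedule and the function names are assumptions.

```python
import numpy as np

def interpolated_action(a_policy, a_expert, step, anneal_steps=10000):
    """Sketch of the IPD idea: early in training the executed action stays
    close to the demonstration, and the policy's own action gradually takes
    over.  The linear schedule over `anneal_steps` is an assumption."""
    beta = max(0.0, 1.0 - step / anneal_steps)
    return beta * a_expert + (1.0 - beta) * a_policy

a_pi = np.array([0.2, -0.1])   # action proposed by the SAC policy
a_ex = np.array([1.0, 0.5])    # action from the demonstration
a0 = interpolated_action(a_pi, a_ex, step=0)       # pure expert action
aN = interpolated_action(a_pi, a_ex, step=10000)   # pure policy action
```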

Comparing MCMC algorithms for the horseshoe prior (Horseshoe 사전분포에 대한 MCMC 알고리듬 비교 연구)

  • Miru Ma;Mingi Kang;Kyoungjae Lee
    • The Korean Journal of Applied Statistics
    • /
    • v.37 no.1
    • /
    • pp.103-118
    • /
    • 2024
  • The horseshoe prior is notably one of the most popular priors in sparse regression models, where only a small fraction of coefficients are nonzero. The parameter space of the horseshoe prior is much smaller than that of the spike and slab prior, so it enables us to explore the parameter space efficiently even in high dimensions. On the other hand, the horseshoe prior incurs a high computational cost for each iteration of the Gibbs sampler. To overcome this issue, various MCMC algorithms for the horseshoe prior have been proposed to reduce the computational burden. In particular, Johndrow et al. (2020) recently proposed an approximate algorithm that can significantly improve the mixing and speed of the MCMC algorithm. In this paper, we compare (1) the traditional MCMC algorithm, (2) the approximate MCMC algorithm proposed by Johndrow et al. (2020) and (3) its variant in terms of computing time, estimation and variable selection performance. For variable selection, we adopt the sequential clustering-based method suggested by Li and Pati (2017). Practical performances of the MCMC methods are demonstrated via numerical studies.
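For reference, the horseshoe hierarchy itself is simple to sample from directly: each coefficient has a local half-Cauchy scale and all share a global one. The sketch below draws from the prior only (it is not any of the compared MCMC algorithms).

```python
import numpy as np

def draw_horseshoe_prior(p, rng):
    """One draw of a p-dimensional coefficient vector from the horseshoe:
    beta_j | lambda_j, tau ~ N(0, lambda_j^2 * tau^2),
    lambda_j ~ C+(0, 1),  tau ~ C+(0, 1).
    A half-Cauchy draw is the absolute value of a standard Cauchy draw."""
    tau = abs(rng.standard_cauchy())        # global shrinkage scale
    lam = np.abs(rng.standard_cauchy(p))    # local shrinkage scales
    return rng.normal(0.0, lam * tau)

rng = np.random.default_rng(0)
beta = draw_horseshoe_prior(1000, rng)
# heavy tails plus a spike of mass near zero: most draws are tiny,
# a few are very large, which is what makes the prior "sparse"
```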

Bayesian Image Denoising with Mixed Prior Using Hypothesis-Testing Problem (가설-검증 문제를 이용한 혼합 프라이어를 가지는 베이지안 영상 잡음 제거)

  • Eom Il-Kyu;Kim Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.3 s.309
    • /
    • pp.34-42
    • /
    • 2006
  • In general, most of the information in an image is concentrated in only a few wavelet coefficients. This sparsity of the wavelet coefficients can be modeled by a mixture of a Gaussian probability density function and a point mass at zero, and denoising under this prior model is performed using Bayesian estimation. In this paper, we propose a method of parameter estimation for denoising based on a hypothesis-testing problem. The hypothesis test is applied to the variance of the wavelet coefficients, using a $\chi^2$-test. Simulation results show that, with orthogonal wavelets, our method achieves PSNR (peak signal-to-noise ratio) gains of about 0.3 dB over state-of-the-art denoising methods.
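The variance test at the heart of this approach can be illustrated as follows. Under the null hypothesis that a local window of coefficients is pure noise with known variance, the normalized sum of squares follows a chi-square distribution. This is a generic sketch of that test, not the paper's estimator; window size and significance level are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def keep_coefficient(window, sigma, alpha=0.05):
    """Chi-square test on a local window of wavelet coefficients.
    H0: the window is pure noise with variance sigma^2, so
    sum(x_i^2) / sigma^2 ~ chi2 with n degrees of freedom.
    Reject H0 (treat the center coefficient as signal) when the
    statistic exceeds the upper critical value."""
    n = window.size
    stat = np.sum(window ** 2) / sigma ** 2
    return stat > chi2.ppf(1 - alpha, df=n)

rng = np.random.default_rng(1)
signal_win = rng.normal(0.0, 1.0, 9) + 5.0   # window with strong signal
detected = keep_coefficient(signal_win, sigma=1.0)
```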

Double 𝑙1 regularization for moving force identification using response spectrum-based weighted dictionary

  • Yuandong Lei;Bohao Xu;Ling Yu
    • Structural Engineering and Mechanics
    • /
    • v.91 no.2
    • /
    • pp.227-238
    • /
    • 2024
  • Sparse regularization methods have proven effective in addressing the ill-posed equations encountered in moving force identification (MFI). However, the complexity of vehicle loads is often ignored in existing studies aiming to enhance MFI accuracy. To tackle this issue, a double 𝑙1 regularization method based on a response spectrum-based weighted dictionary is proposed for MFI in this study. Firstly, the relationship between vehicle-induced responses and moving vehicle loads (MVL) is established. The structural responses are then expanded in the frequency domain to obtain prior knowledge related to the MVL and to construct a response spectrum-based weighted dictionary for higher-accuracy MFI. Secondly, using this weighted dictionary, a double 𝑙1 regularization framework is presented for successively identifying the static and dynamic components of the MVL with the alternating direction method of multipliers (ADMM). To assess the performance of the proposed method, two different types of MVL, one composed of trigonometric functions and one derived from a quarter bridge-vehicle model, are adopted in numerical simulations. Furthermore, a series of MFI experimental verifications are carried out in the laboratory. The results show that the proposed method achieves higher accuracy and stronger robustness to noise than other traditional regularization methods.
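The workhorse inside such frameworks is an ADMM solve of a single 𝑙1-regularized least-squares problem. The sketch below shows that building block only, on generic data with an unweighted dictionary; it is not the paper's double-𝑙1 method and all names are illustrative.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with scaled ADMM:
    the x-update is a ridge solve (cached Cholesky factor), the
    z-update is soft-thresholding, and u accumulates the dual residual."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    z, u = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z
    return z

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
x_hat = lasso_admm(A, b, lam=0.5)   # sparse recovery of x_true
```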

A comparison study of Bayesian variable selection methods for sparse covariance matrices (희박 공분산 행렬에 대한 베이지안 변수 선택 방법론 비교 연구)

  • Kim, Bongsu;Lee, Kyoungjae
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.2
    • /
    • pp.285-298
    • /
    • 2022
  • Continuous shrinkage priors, as well as spike and slab priors, have been widely employed for Bayesian inference about sparse regression coefficient vectors or covariance matrices. Continuous shrinkage priors provide computational advantages over spike and slab priors since their model space is substantially smaller; this is especially true in high-dimensional settings. However, variable selection based on continuous shrinkage priors is not straightforward because they do not produce exactly zero values. Although a few variable selection approaches based on continuous shrinkage priors have been proposed, no substantial comparative investigation of their performance has been conducted. In this paper, we compare two variable selection methods: a credible interval method and the sequential 2-means algorithm (Li and Pati, 2017). Various simulation scenarios are used to demonstrate the practical performance of the methods. We conclude the paper by presenting some observations and conjectures based on the simulation findings.
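The 2-means idea can be sketched as follows: for each posterior draw, cluster the absolute coefficient values into a "noise" group and a "signal" group, take the most frequent signal count across draws, and select that many variables by posterior median magnitude. This is a simplified rendition of Li and Pati's sequential 2-means, written from the description above; details of the actual algorithm differ.

```python
import numpy as np

def two_means_count(abs_beta, iters=20):
    """Tiny 1-D 2-means on |beta|: returns how many values land in the
    cluster centered on the larger mean (the 'signal' cluster)."""
    c0, c1 = abs_beta.min(), abs_beta.max()
    for _ in range(iters):
        signal = np.abs(abs_beta - c1) < np.abs(abs_beta - c0)
        if signal.any() and (~signal).any():
            c0, c1 = abs_beta[~signal].mean(), abs_beta[signal].mean()
    return int(signal.sum())

def sequential_two_means(samples):
    """samples: (n_draws, p) posterior draws of the coefficient vector.
    Estimate the number of signals as the mode of the per-draw counts,
    then pick the variables with the largest posterior median magnitude."""
    counts = np.array([two_means_count(np.abs(s)) for s in samples])
    k = np.bincount(counts).argmax()
    med = np.median(np.abs(samples), axis=0)
    return np.argsort(med)[::-1][:k]

rng = np.random.default_rng(3)
draws = rng.normal(0.0, 0.05, size=(100, 10))   # shrunk "noise" draws
draws[:, [1, 4]] += [2.0, -3.0]                 # two clear signals
selected = sorted(int(i) for i in sequential_two_means(draws))
```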

Low-Rank Representation-Based Image Super-Resolution Reconstruction with Edge-Preserving

  • Gao, Rui;Cheng, Deqiang;Yao, Jie;Chen, Liangliang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.9
    • /
    • pp.3745-3761
    • /
    • 2020
  • Low-rank representation methods have found many applications in image reconstruction. However, for high-gradient image patches with rich texture details and strong edge information, it is difficult to find sufficiently many similar patches, so existing low-rank representation methods tend to destroy critical image details and fail to preserve edge structure. To improve performance, a new low-rank representation-based image super-resolution reconstruction method is proposed, which combines a gradient-domain guided image filter with structure-constrained low-rank representation so as to enhance image details while revealing the intrinsic structure of the input image. Firstly, we apply the gradient-domain guided filter to each atom of the high-resolution dictionary to acquire high-frequency prior information. Secondly, this prior information is introduced as a structure constraint into the low-rank representation framework, yielding a new model that preserves the edges of the reconstructed image. Thirdly, an approximate optimal solution of the model is obtained via the alternating direction method of multipliers. Experiments show that the proposed algorithm outperforms conventional state-of-the-art algorithms in both quantitative and qualitative terms.
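When ADMM is applied to low-rank models like the one above, the key proximal step is singular value thresholding. The sketch below shows only that step on synthetic data; the structure constraint and guided filtering of the paper are not reproduced, and the threshold value is an assumption.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm.  Soft-thresholds the singular values, zeroing the small ones,
    which yields the closest (in a penalized sense) low-rank matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

rng = np.random.default_rng(4)
L = rng.normal(size=(30, 5)) @ rng.normal(size=(5, 30))   # rank-5 matrix
noisy = L + 0.1 * rng.normal(size=(30, 30))               # full-rank noise
denoised = svt(noisy, tau=2.0)   # noise singular values fall below tau
```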

Dual Dictionary Learning for Cell Segmentation in Bright-field Microscopy Images (명시야 현미경 영상에서의 세포 분할을 위한 이중 사전 학습 기법)

  • Lee, Gyuhyun;Quan, Tran Minh;Jeong, Won-Ki
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.3
    • /
    • pp.21-29
    • /
    • 2016
  • Cell segmentation is an important but time-consuming and laborious task in biological image analysis. An automated, robust, and fast method is required to overcome such burdensome processes. These needs are, however, challenging to meet due to varied cell shapes, intensities, and incomplete boundaries. Precise cell segmentation allows a pathological diagnosis to be made from tissue samples. A vast body of literature exists on cell segmentation in microscopy images [1]. The majority of existing work is based only on input images and predefined feature models - for example, using a deformable model to extract edge boundaries in the image. Only a handful of recent methods employ data-driven approaches such as supervised learning. In this paper, we propose a novel data-driven cell segmentation algorithm for bright-field microscopy images. The proposed method minimizes an energy function defined by two dictionaries - one for input images and the other for their manual segmentation results - and a common sparse code, which yields a pixel-level classification when the learned dictionaries are deployed on new images. In contrast to deformable models, no prior knowledge of object shape is needed. We also employ convolutional sparse coding and the alternating direction method of multipliers (ADMM) for fast dictionary learning and energy minimization. Unlike an existing method [1], our method trains both dictionaries concurrently, and is implemented on the GPU for faster performance.
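The inference step of such a dual-dictionary scheme can be sketched simply: sparse-code an image patch over the image dictionary, then decode the same code with the paired segmentation dictionary. The code below uses patch-wise ISTA rather than the paper's convolutional-ADMM formulation, and all dictionaries here are random placeholders.

```python
import numpy as np

def ista_code(D, x, lam=0.1, iters=100):
    """Sparse-code x over dictionary D with ISTA (proximal gradient):
    a gradient step on ||D a - x||^2 followed by soft-thresholding."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = a - step * (D.T @ (D @ a - x))
        a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return a

rng = np.random.default_rng(5)
D_img = rng.normal(size=(64, 32)); D_img /= np.linalg.norm(D_img, axis=0)
D_seg = rng.normal(size=(64, 32))            # paired label dictionary
a_true = np.zeros(32); a_true[[2, 7]] = [1.0, -0.8]
patch = D_img @ a_true                       # observed image patch
a_hat = ista_code(D_img, patch)              # shared sparse code
seg_patch = D_seg @ a_hat                    # decoded segmentation patch
```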

Bayesian Test of Quasi-Independence in a Sparse Two-Way Contingency Table

  • Kwak, Sang-Gyu;Kim, Dal-Ho
    • Communications for Statistical Applications and Methods
    • /
    • v.19 no.3
    • /
    • pp.495-500
    • /
    • 2012
  • We consider a Bayesian test of independence in a two-way contingency table that has some zero cells. To do this, we take a three-stage hierarchical Bayesian model under each hypothesis. For priors, we use Dirichlet densities to model the marginal cell and individual cell probabilities. Our method does not require complicated computation, such as a Metropolis-Hastings algorithm, to draw samples from the posterior density of each parameter; instead we draw samples using a Gibbs sampler with a grid method. For complicated posterior formulas, we apply Monte Carlo integration and the sampling importance resampling algorithm. We compare the values of the Bayes factor with the results of a chi-square test and the likelihood ratio test.
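The conjugate core of Dirichlet modeling for a sparse table is easy to illustrate: with a Dirichlet prior on the cell probabilities, the posterior is again Dirichlet with the counts added to the prior parameters, and zero cells still receive positive posterior mass. This sketch shows only that conjugate update, not the paper's hierarchical model or Bayes factor computation; the table and prior weight are made up.

```python
import numpy as np

rng = np.random.default_rng(6)
counts = np.array([[12, 0, 3],
                   [0,  8, 1],
                   [5,  2, 0]])   # sparse 3x3 table with zero cells
alpha = 1.0                       # symmetric Dirichlet prior weight
# Conjugacy: cell probabilities | data ~ Dirichlet(alpha + counts)
post = rng.dirichlet(alpha + counts.ravel(), size=2000)
p_mean = post.mean(axis=0).reshape(3, 3)
# zero cells get posterior mean ~ alpha / (alpha*9 + n), not zero
```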

Pullout capacity of small ground anchors: a relevance vector machine approach

  • Samui, Pijush;Sitharam, T.G.
    • Geomechanics and Engineering
    • /
    • v.1 no.3
    • /
    • pp.259-262
    • /
    • 2009
  • This paper examines the potential of the relevance vector machine (RVM) for predicting the pullout capacity of small ground anchors. The RVM is based on a Bayesian formulation of a linear model with an appropriate prior that results in a sparse representation. The results are compared with a widely used artificial neural network (ANN) model. Overall, the RVM showed good performance and proved to be better than the ANN model; it also provides the prediction variance. The plausibility of the RVM technique is shown by its superior performance in forecasting the pullout capacity of small ground anchors using exogenous knowledge.
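The sparsity-inducing prior mentioned above is the automatic relevance determination (ARD) prior: each weight gets its own Gaussian precision, re-estimated by type-II maximum likelihood, and precisions that diverge prune their basis functions. The sketch below follows Tipping's standard RVM update on toy linear data, with a fixed noise variance and a pruning threshold chosen for illustration.

```python
import numpy as np

def ard_regression(Phi, t, noise_var=0.01, iters=50, cap=1e6):
    """Type-II ML updates for a Bayesian linear model with prior
    w_j ~ N(0, 1/alpha_j) per weight (the ARD prior behind the RVM).
    A large alpha_j effectively prunes basis function j."""
    n, m = Phi.shape
    alpha = np.ones(m)
    for _ in range(iters):
        S = np.linalg.inv(np.diag(alpha) + Phi.T @ Phi / noise_var)
        mu = S @ Phi.T @ t / noise_var
        gamma = 1.0 - alpha * np.diag(S)        # well-determinedness
        alpha = gamma / np.maximum(mu ** 2, 1e-12)
        alpha = np.minimum(alpha, cap)
    return mu, alpha

rng = np.random.default_rng(7)
Phi = rng.normal(size=(100, 10))
w = np.zeros(10); w[[0, 3]] = [2.0, -1.5]
t = Phi @ w + 0.1 * rng.normal(size=100)
mu, alpha = ard_regression(Phi, t)
relevant = np.where(alpha < 10)[0]   # surviving "relevance" weights
```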

A method of X-ray source spectrum estimation from transmission measurements based on compressed sensing

  • Liu, Bin;Yang, Hongrun;Lv, Huanwen;Li, Lan;Gao, Xilong;Zhu, Jianping;Jing, Futing
    • Nuclear Engineering and Technology
    • /
    • v.52 no.7
    • /
    • pp.1495-1502
    • /
    • 2020
  • A new method of X-ray source spectrum estimation based on compressed sensing is proposed in this paper. The K-SVD algorithm is applied for sparse representation, and nonnegativity constraints are added by modifying the L1 reconstruction algorithm proposed by Rosset and Zhu. The estimation method is demonstrated on simulated spectra typical of mammography and CT, with the X-ray spectra simulated using the Monte Carlo code Geant4. The proposed method is successfully applied to highly ill-conditioned and underdetermined estimation problems with good noise suppression: results with acceptable accuracy (MSE < 5%) are obtained with 10% Gaussian white noise added to the simulated experimental data. The biggest difference between the proposed method and existing methods is that multiple pieces of prior knowledge about X-ray spectra can be included in one dictionary, which is meaningful for recovering the true X-ray spectrum from the measurements.
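The nonnegativity-constrained recovery step can be illustrated on a toy transmission model: measurements are linear in the spectrum weights through an attenuation matrix, and the weights must be nonnegative. The forward model below (attenuation curve, thicknesses, energy bins) is entirely made up, and plain nonnegative least squares stands in for the paper's K-SVD-plus-modified-L1 pipeline.

```python
import numpy as np
from scipy.optimize import nnls

# Toy forward model: transmission T_i = sum_j exp(-mu_j * d_i) * s_j,
# i.e. T = A s with a nonnegative, sparse spectrum s (assumed setup).
rng = np.random.default_rng(8)
energies = np.linspace(20, 120, 25)            # keV bins (illustrative)
mu = 2.0 * (energies / 20.0) ** -1.5           # toy attenuation curve
d = np.linspace(0.1, 5.0, 40)                  # absorber thicknesses, cm
A = np.exp(-np.outer(d, mu))                   # 40 x 25 system matrix
s_true = np.zeros(25); s_true[[5, 6, 7]] = [0.2, 0.5, 0.3]
T = A @ s_true + 1e-4 * rng.normal(size=40)    # noisy transmissions
s_hat, rnorm = nnls(A, T)                      # nonnegative least squares
```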