• Title/Summary/Keyword: Classical Statistical Method

Search results: 109 items

A note on SVM estimators in RKHS for the deconvolution problem

  • Lee, Sungho
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 23, No. 1
    • /
    • pp.71-83
    • /
    • 2016
  • In this paper we discuss a deconvolution density estimator obtained using support vector machines (SVM) and Tikhonov's regularization method for solving ill-posed problems in a reproducing kernel Hilbert space (RKHS). A remarkable property of the SVM is that it leads to sparse solutions, but the support vector deconvolution density estimator does not preserve sparsity as well as expected. Thus, in Section 3, we propose another support vector deconvolution estimator (method II) that leads to a very sparse solution. The performance of the deconvolution density estimators based on the support vector method is compared with that of the classical kernel deconvolution density estimator for the important cases of Gaussian and Laplacian measurement error by means of a simulation study. In the case of Gaussian error, the proposed support vector deconvolution estimator shows the same performance as the classical kernel deconvolution density estimator.
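The baseline in this comparison, the classical kernel deconvolution density estimator, inverts the error's characteristic function in the Fourier domain. Below is a minimal NumPy sketch for Gaussian measurement error, assuming a kernel with Fourier transform phi_K(t) = (1 - t^2)^3 on [-1, 1] so the inversion integral runs over a compact interval; the bandwidth, error scale, and data are illustrative, and the paper's SVM-based estimators are not reproduced here.

```python
import numpy as np

def deconv_kde_gauss(x_grid, y, sigma, h, n_t=401):
    """Classical deconvolution KDE for errors eps ~ N(0, sigma^2)."""
    t = np.linspace(-1.0, 1.0, n_t)
    dt = t[1] - t[0]
    # phi_K(t) divided by the error characteristic function at t/h
    ratio = (1.0 - t**2) ** 3 / np.exp(-0.5 * (sigma * t / h) ** 2)
    est = np.empty(len(x_grid))
    for i, x in enumerate(x_grid):
        u = (x - np.asarray(y)) / h
        # K_eps(u) = (1/2pi) * integral of cos(t*u) * ratio dt (even integrand)
        Ku = (np.cos(np.outer(u, t)) * ratio).sum(axis=1) * dt / (2.0 * np.pi)
        est[i] = Ku.mean() / h
    return est

# Toy check: X ~ N(0, 1) observed with N(0, 0.4^2) measurement error.
rng = np.random.default_rng(0)
x_true = rng.normal(size=300)
y_obs = x_true + rng.normal(scale=0.4, size=300)
grid = np.linspace(-4.0, 4.0, 81)
f_hat = deconv_kde_gauss(grid, y_obs, sigma=0.4, h=0.5)
```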

A Note on Deconvolution Estimators when Measurement Errors are Normal

  • Lee, Sung-Ho
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 19, No. 4
    • /
    • pp.517-526
    • /
    • 2012
  • In this paper a support vector method is proposed for use when the sample observations are contaminated by a normally distributed measurement error. The performance of deconvolution density estimators based on the support vector method is explored and compared with that of kernel density estimators by means of a simulation study. An interesting result was that, for the estimation of a kurtotic density, the support vector deconvolution estimator with a Gaussian kernel performed better than the classical deconvolution kernel estimator.
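Simulation studies of this kind typically score competing density estimators by integrated squared error on a grid. A small helper under the assumption of a uniform grid (the function name is illustrative, not from the paper):

```python
import numpy as np

def ise(f_hat, f_true, grid):
    """Integrated squared error of a density estimate on a uniform grid,
    the usual figure of merit in deconvolution simulation studies."""
    return float(np.sum((np.asarray(f_hat) - np.asarray(f_true)) ** 2)
                 * (grid[1] - grid[0]))
```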

A Support Vector Method for the Deconvolution Problem

  • Lee, Sung-Ho
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 17, No. 3
    • /
    • pp.451-457
    • /
    • 2010
  • This paper considers the problem of nonparametric deconvolution density estimation when sample observations are contaminated by double exponentially distributed errors. Three different deconvolution density estimators are introduced: a weighted kernel density estimator, a kernel density estimator based on the support vector regression method in an RKHS, and a classical kernel density estimator. The performance of these deconvolution density estimators is compared by means of a simulation study.
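For double exponential (Laplace) errors, the reciprocal of the error characteristic function is the polynomial 1 + b^2 t^2, so the deconvoluting kernel has the closed form K(u) - (b/h)^2 K''(u); this is the idea behind a weighted kernel density estimator. A hedged sketch with a Gaussian base kernel (the kernel choice and names are assumptions, not the paper's exact estimator):

```python
import numpy as np

def deconv_kde_laplace(x_grid, y, b, h):
    """Closed-form deconvolution KDE for Laplace(0, b) measurement error."""
    u = (np.asarray(x_grid)[:, None] - np.asarray(y)[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)   # Gaussian base kernel
    K2 = (u**2 - 1.0) * K                            # second derivative K''(u)
    return (K - (b / h) ** 2 * K2).mean(axis=1) / h

# e.g.: f_hat = deconv_kde_laplace(np.linspace(-4, 4, 81), y_obs, b=0.3, h=0.4)
```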

Confounded Row-Column Designs

  • Choi, Kuey Chung;Gupta, Sudhir
    • Proceedings of the Korean Statistical Society Conference
    • /
    • Korean Statistical Society 2004 Conference Proceedings
    • /
    • pp.313-317
    • /
    • 2004
  • Confounded row-column designs for factorial experiments are studied in this paper. The designs have factorial balance with respect to estimable main effects and interactions. John and Lewis (1983) considered generalized cyclic row-column designs for factorial experiments. A simple method of constructing confounded designs using the classical method of confounding for block designs is described.
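For reference, the classical method of confounding that the construction builds on can be sketched for a 2^3 factorial: a defining contrast assigns treatment combinations to blocks, sacrificing the confounded interaction to block differences. A minimal illustration (the row-column construction itself is not reproduced here):

```python
from itertools import product

# Confound the ABC interaction with blocks in a 2^3 factorial: the
# defining contrast L = a + b + c (mod 2) splits the 8 treatment
# combinations into two blocks of 4.
blocks = {0: [], 1: []}
for a, b, c in product((0, 1), repeat=3):
    blocks[(a + b + c) % 2].append((a, b, c))

print(blocks[0])  # principal block: (0,0,0), (0,1,1), (1,0,1), (1,1,0)
```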


A Study on an Identification of Interactions in a Nonreplicated Two-Way Layout with $L_1$-Estimation

  • Lee, Ki-Hoon
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 7, No. 1
    • /
    • pp.119-128
    • /
    • 2000
  • This paper proposes a method for detecting interactions in a two-way layout with one observation per cell. The identification of interactions in this model is not straightforward because they are confounded with the error terms. $L_1$-estimation is robust with respect to y-direction outliers in a linear model, so we are able to estimate the main effects without being affected by interactions. If an observation is classified as an outlier, we conclude that it contains an interaction. An empirical study comparing the method with a classical method is performed.
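As a rough illustration of the $L_1$ idea, Tukey's median polish fits grand, row, and column effects by medians, so an isolated interaction cell surfaces as a large residual. This is a stand-in sketch, not necessarily the paper's exact $L_1$ estimator:

```python
import numpy as np

def median_polish(table, n_iter=10):
    """L1-flavored fit of grand/row/column effects for a two-way layout;
    large residuals flag cells that may contain interactions."""
    r = np.asarray(table, float).copy()
    grand, row, col = 0.0, np.zeros(r.shape[0]), np.zeros(r.shape[1])
    for _ in range(n_iter):
        m = np.median(r, axis=1); row += m; r -= m[:, None]
        d = np.median(col); grand += d; col -= d
        m = np.median(r, axis=0); col += m; r -= m[None, :]
        d = np.median(row); grand += d; row -= d
    return grand, row, col, r   # r holds the residuals

tbl = np.array([[1., 2., 3.],
                [2., 3., 4.],
                [3., 4., 9.]])  # additive except the (2, 2) cell
g, r_eff, c_eff, resid = median_polish(tbl)  # resid[2, 2] stands out
```

Cells whose residuals are large relative to a robust scale estimate (say, 2.5 times the MAD of the residuals) would then be flagged as containing interactions, analogous to the outlier rule in the abstract.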


A Study of the Small Sample Warranty Data Analysis Using the Bayesian Approach

  • Kim, Jong-Gurl;Sung, Ki-Woo;Song, Jung-Moo
    • Proceedings of the Safety Management and Science Conference
    • /
    • Korea Safety Management and Science Society 2013 Spring Conference
    • /
    • pp.517-531
    • /
    • 2013
  • When estimating the lifetime and shape parameters of a product from warranty data, classical statistical methods such as maximum likelihood estimation have traditionally been used. However, when the sample is small or the data are incomplete, estimates obtained by classical statistical methods are unreliable, and such methods also fail to make full use of accumulated experience and historical data. To resolve these problems, a Bayesian approach that places a prior distribution on the parameters is needed, yet research applying Bayesian techniques to warranty data analysis is still scarce. In this study, we compare the efficiency of parameter estimation using warranty data whose lifetimes follow a Weibull distribution. To this end, estimators were derived and compared using a Bayesian method in which the Weibull parameters have lognormal prior distributions and the actuarial method, a classical statistical technique. Through this comparison, we examine the performance of Bayesian warranty data analysis when sufficient observed data cannot be secured.
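A minimal sketch of the Bayesian side of such a comparison: a random-walk Metropolis sampler for the Weibull shape/scale parameters with lognormal priors, handling right-censored warranty times. The standard-normal log-scale priors, step size, and synthetic data are assumptions for illustration, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(log_k, log_lam, t, cens):
    """Log posterior for Weibull(k, lam) lifetimes with lognormal priors
    (standard normal on the log scale -- an assumption of this sketch).
    `cens` marks right-censored warranty times."""
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = t / lam
    loglik = np.where(cens,
                      -z**k,                                        # survival
                      np.log(k / lam) + (k - 1) * np.log(z) - z**k) # density
    logprior = -0.5 * (log_k**2 + log_lam**2)
    return loglik.sum() + logprior

def metropolis(t, cens, n=5000, step=0.1):
    x = np.zeros(2); lp = log_post(*x, t, cens); out = []
    for _ in range(n):
        prop = x + step * rng.normal(size=2)
        lp_new = log_post(*prop, t, cens)
        if np.log(rng.uniform()) < lp_new - lp:
            x, lp = prop, lp_new
        out.append(x.copy())
    return np.exp(np.array(out))   # posterior samples of (k, lam)

# demo: 40 failures plus 20 right-censored times from Weibull(1.5, 2.0)
t = np.concatenate([rng.weibull(1.5, 40) * 2.0, np.full(20, 1.0)])
cens = np.concatenate([np.zeros(40, bool), np.ones(20, bool)])
samples = metropolis(t, cens)
```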


Probability distribution and statistical moments of the maximum wind velocity

  • Schettini, Evelia;Solari, Giovanni
    • Wind and Structures
    • /
    • Vol. 1, No. 4
    • /
    • pp.287-302
    • /
    • 1998
  • This paper formulates a probabilistic model that is able to represent the maximum instantaneous wind velocity. Unlike classical methods, where randomness is confined to the mean maximum component, this model also accounts for the randomness of the maximum value of the turbulent fluctuation. The application of the FOSM method furnishes the first and second statistical moments in closed form. The comparison between the results obtained herein and those supplied by classical methods points out the central role of the turbulence intensity. Its importance is heightened when extending the analysis from the wind velocity to the wind pressure.
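The FOSM (first-order second-moment) step propagates input means and standard deviations through a response function via a first-order Taylor expansion at the mean point. A generic sketch for independent inputs, using a hypothetical gust model v_max = V(1 + g_f * I_u) as the example response (all numbers illustrative, not from the paper):

```python
import numpy as np

def fosm(g, mu, sd, eps=1e-6):
    """First-order second-moment approximation: mean and SD of g(X)
    from the means/SDs of independent inputs, via a finite-difference
    gradient at the mean point."""
    mu, sd = np.asarray(mu, float), np.asarray(sd, float)
    g0 = g(mu)
    grad = np.empty(mu.size)
    for i in range(mu.size):
        d = np.zeros(mu.size); d[i] = eps
        grad[i] = (g(mu + d) - g0) / eps
    return g0, float(np.sqrt(((grad * sd) ** 2).sum()))

# Hypothetical inputs: mean wind speed V and turbulence intensity I_u,
# with g_f the peak factor.
g_f = 3.5
mean_vmax, sd_vmax = fosm(lambda x: x[0] * (1.0 + g_f * x[1]),
                          mu=[25.0, 0.15], sd=[3.0, 0.03])
```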

A Hilbert-Huang Transform Approach Combined with PCA for Predicting a Time Series

  • Park, Min-Jeong
    • The Korean Journal of Applied Statistics
    • /
    • Vol. 24, No. 6
    • /
    • pp.995-1006
    • /
    • 2011
  • A time series can be decomposed into simple components with a multiscale method. Empirical mode decomposition (EMD) is a recently developed multiscale method introduced in Huang et al. (1998). It is natural to apply a classical prediction method, such as a vector autoregressive (VAR) model, to the obtained simple components instead of the original time series; in addition, a prediction procedure combining a classical prediction model with EMD and the Hilbert spectrum was proposed in Kim et al. (2008). In this paper, we suggest adopting principal component analysis (PCA) in the prediction procedure, which enables the efficient selection of input variables among the components obtained by EMD. We discuss the utility of adopting PCA in the prediction procedure based on EMD and the Hilbert spectrum, and analyze the daily worm account data with the proposed PCA-adopted prediction method.
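A sketch of such a pipeline, assuming the third-party PyEMD package (pip install EMD-signal) together with scikit-learn and statsmodels; the component count, lag order, and synthetic series are illustrative assumptions, not the paper's exact procedure or data:

```python
import numpy as np
from PyEMD import EMD                               # pip install EMD-signal
from sklearn.decomposition import PCA
from statsmodels.tsa.ar_model import AutoReg

# Synthetic stand-in series; the paper uses daily worm account data.
rng = np.random.default_rng(2)
s = np.cumsum(rng.normal(size=500))

imfs = EMD().emd(s)                # intrinsic mode functions, (n_imfs, time)
pca = PCA(n_components=3)          # keep the leading components as inputs
scores = pca.fit_transform(imfs.T) # (time, 3)

# Forecast each retained component with a classical AR model, then map
# the forecasts back and sum the reconstructed components.
h = 10
fc = np.column_stack([AutoReg(scores[:, j], lags=5).fit().forecast(h)
                      for j in range(scores.shape[1])])
s_hat = pca.inverse_transform(fc).sum(axis=1)       # h-step prediction
```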

Evaluation of the classification method using ancestry SNP markers for ethnic group

  • Lee, Hyo Jung;Hong, Sun Pyo;Lee, Soong Deok;Rhee, Hwan seok;Lee, Ji Hyun;Jeong, Su Jin;Lee, Jae Won
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 26, No. 1
    • /
    • pp.1-9
    • /
    • 2019
  • Various probabilistic methods have been proposed that use interpopulation allele frequency differences to infer the ethnic group of a DNA specimen. The choice of statistical method is critical because the accuracy of the classification results varies across methods. For ancestry classification, we propose a new evaluation method that estimates a combined ethnicity index, and we compare its performance with various classical classification methods on two real data sets. We selected 13 single nucleotide polymorphisms (SNPs) that are useful for inferring ethnic origin. These SNPs were analyzed by a restriction fragment mass polymorphism assay, followed by classification among ethnic groups. We genotyped 400 individuals from four ethnic groups (100 African-American, 100 Caucasian, 100 Korean, and 100 Mexican-American) at the 13 SNPs, whose allele frequencies differed among the four groups. Additionally, we applied the new method to HapMap SNP genotypes for 1,011 samples from four populations (African, European, East Asian, and Central-South Asian). The proposed method yielded the highest accuracy among the statistical classification methods considered. Our ethnic group classification system based on ancestry-informative SNP markers provides a useful statistical tool for identifying ethnic groups.
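A common classical baseline in this setting assigns a sample to the population that maximizes the genotype likelihood under Hardy-Weinberg equilibrium, multiplying binomial probabilities across SNPs. A minimal sketch (the frequency matrix and names are illustrative; this is not the paper's combined ethnicity index):

```python
import numpy as np

def classify(genotypes, freq):
    """Return the index of the population maximizing the likelihood of
    the genotypes under Hardy-Weinberg equilibrium.
    genotypes : 0/1/2 alternate-allele counts, one per SNP
    freq      : (n_pops, n_snps) alternate-allele frequencies
    """
    p = np.clip(freq, 1e-6, 1 - 1e-6)
    g = np.asarray(genotypes)
    # log P(g | pop) summed over SNPs: Binomial(2, p) per genotype,
    # with the factor of 2 for heterozygotes.
    loglik = (g * np.log(p) + (2 - g) * np.log(1 - p)
              + np.where(g == 1, np.log(2.0), 0.0)).sum(axis=1)
    return int(np.argmax(loglik))

# hypothetical frequencies: rows = populations, cols = SNPs
freq = np.array([[0.10, 0.80, 0.35],
                 [0.60, 0.20, 0.50]])
print(classify([0, 2, 1], freq))   # index of the more likely population
```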

Noise-free Distributions Comparison of Bayesian Wavelet Threshold for Image Denoise

  • Choi, Ilsu;Rhee, Sung-Suk;Ahn, Yunkee
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 8, No. 2
    • /
    • pp.573-579
    • /
    • 2001
  • Wavelet thresholding is a method for the reduction of noise in images. Wavelet coefficients of an image are locally correlated; these correlations also appear in the original pixel representation of the image, and they do not follow from the characterization of the wavelet transform. In this paper, we compare noise-free distributions in the Bayesian approach to improve the classical thresholding algorithm.
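The classical thresholding algorithm used as the point of comparison can be sketched with PyWavelets: estimate the noise level from the finest diagonal subband and soft-threshold all detail coefficients at the universal (VisuShrink) threshold. The wavelet and decomposition level are illustrative assumptions:

```python
import numpy as np
import pywt                                   # pip install PyWavelets

def denoise(img, wavelet="db2", level=2):
    """Classical wavelet threshold denoising (VisuShrink-style baseline).
    The noise SD is estimated from the finest diagonal subband via MAD."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))        # universal threshold
    new = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in det)
        for det in coeffs[1:]
    ]
    return pywt.waverec2(new, wavelet)

# demo on a synthetic noisy image
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
noisy = img + 0.1 * np.random.default_rng(3).normal(size=img.shape)
clean = denoise(noisy)
```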
