• Title/Summary/Keyword: kernel estimation


Smoothing Parameter Selection in Nonparametric Spectral Density Estimation

  • Kang, Kee-Hoon; Park, Byeong-U; Cho, Sin-Sup; Kim, Woo-Chul
    • Communications for Statistical Applications and Methods, v.2 no.2, pp.231-242, 1995
  • In this paper we consider a kernel-type estimator of the spectral density at a point in the analysis of stationary time series data. The kernel estimator entails the choice of a smoothing parameter called the bandwidth. A data-based bandwidth choice is proposed, obtained by solving an equation similar to that of Sheather (1986) for probability density estimation. A Monte Carlo study reveals that the spectral density estimates using the data-based bandwidths show comparatively good performance.
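
As a rough illustration of the kind of estimator discussed above, the following Python sketch smooths the periodogram of a stationary series with a Gaussian kernel to estimate the spectral density at a single frequency. The bandwidth here is a fixed, user-supplied value; the paper's data-based bandwidth rule (the Sheather-type equation) is not reproduced, and the function names and parameter values are illustrative assumptions.

```python
import numpy as np

def kernel_spectral_density(x, freq, bandwidth):
    """Kernel-smoothed periodogram estimate of the spectral density at `freq`
    (cycles per sample, 0 < freq < 0.5).  Illustrative only: the bandwidth is
    supplied by the user, not chosen by the paper's data-based rule."""
    n = len(x)
    x = np.asarray(x, dtype=float) - np.mean(x)
    # Periodogram at the nonnegative Fourier frequencies j/n
    fft_vals = np.fft.rfft(x)
    freqs = np.arange(len(fft_vals)) / n
    periodogram = (np.abs(fft_vals) ** 2) / (2.0 * np.pi * n)
    # Gaussian kernel weights centred at the target frequency
    w = np.exp(-0.5 * ((freqs - freq) / bandwidth) ** 2)
    w /= w.sum()
    return np.sum(w * periodogram)

# Example: AR(1) series, estimate the spectral density near frequency 0.1
rng = np.random.default_rng(0)
e = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.5 * x[t - 1] + e[t]
print(kernel_spectral_density(x, freq=0.1, bandwidth=0.02))
```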


A Nonparametric Approach for Noisy Point Data Preprocessing

  • Xi, Yongjian; Duan, Ye; Zhao, Hongkai
    • International Journal of CAD/CAM, v.9 no.1, pp.31-36, 2010
  • 3D point data acquired from laser scanning or stereo vision can be quite noisy, so a preprocessing step is often needed before a surface reconstruction algorithm can be applied. In this paper, we propose a nonparametric approach for noisy point data preprocessing. In particular, we propose an anisotropic-kernel nonparametric density estimation method for outlier removal, and a hill-climbing line search approach for projecting data points onto the real surface boundary. Our approach is simple, robust and efficient. We demonstrate our method on both real and synthetic point datasets.
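
The outlier-removal half of the pipeline described above can be sketched with an ordinary (isotropic) Gaussian kernel density estimate: points whose estimated density falls below a quantile threshold are discarded. The paper's anisotropic kernel and its hill-climbing projection step are not reproduced; the function names, bandwidth and threshold below are illustrative assumptions.

```python
import numpy as np

def kde_density(points, query, bandwidth):
    """Gaussian kernel density estimate at each query point.
    Isotropic kernel for brevity; the paper uses an anisotropic kernel."""
    d2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-0.5 * d2 / bandwidth ** 2).mean(axis=1)

def remove_outliers(points, bandwidth=0.1, quantile=0.05):
    """Drop points whose estimated density falls in the lowest `quantile`."""
    dens = kde_density(points, points, bandwidth)
    return points[dens > np.quantile(dens, quantile)]

# Example: noisy samples on a circle plus a few far-away outliers
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 500)
circle = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.normal(size=(500, 2))
outliers = rng.uniform(-3, 3, size=(10, 2))
cleaned = remove_outliers(np.vstack([circle, outliers]))
print(cleaned.shape)
```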

A Case Study of an Activity Based Mathematical Education: A Kernel Density Estimation to Solve a Dilemma for a Missile Simulation

  • Kim, G. Daniel
    • Communications of Mathematical Education, v.16, pp.139-147, 2003
  • While the statistical concept of order statistics has a great number of applications in our society, ranging from industry to military analysis, it is not an easy concept for many people to understand. Adding some interesting simulation activities on this concept to the probability or statistics curriculum, however, can greatly enhance learning. A hands-on activity and a graphing-calculator-based activity of a missile simulation were introduced by Kim (2003) in the context of order statistics. This article revisits the two activities in his paper and points out a dilemma that arises from the violation of an assumption on the two deviation parameters associated with the missile simulation. A third activity is introduced to resolve the dilemma in terms of kernel density estimation, which is a nonparametric approach.
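
For readers who want to experiment with the simulation idea, the sketch below draws an order statistic from a hypothetical missile-miss model (with two unequal deviation parameters, the kind of setting the dilemma concerns) and estimates its density with a Gaussian kernel. It does not reproduce the activities of the paper; every parameter value here is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: the radial miss distances of 5 missiles per salvo come
# from a bivariate normal impact model; we study the closest miss
# (the first order statistic of the miss distances).
def simulate_min_miss(n_trials=5000, n_missiles=5, sigma_x=1.0, sigma_y=1.5):
    dx = rng.normal(0, sigma_x, size=(n_trials, n_missiles))
    dy = rng.normal(0, sigma_y, size=(n_trials, n_missiles))
    return np.sqrt(dx ** 2 + dy ** 2).min(axis=1)

def gaussian_kde(samples, grid, bandwidth):
    """Plain Gaussian kernel density estimate evaluated on `grid`."""
    u = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * u ** 2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

samples = simulate_min_miss()
grid = np.linspace(0, samples.max(), 200)
density = gaussian_kde(samples, grid, bandwidth=0.05)
print(grid[np.argmax(density)])  # mode of the estimated miss-distance density
```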


Nonparametric kernel calibration and interval estimation (비모수적 커널교정과 구간추정)

  • 이재창;전명식;김대학
    • The Korean Journal of Applied Statistics, v.6 no.2, pp.227-235, 1993
  • Calibration concerns the estimation of an independent variable whose measurement requires more effort or expense than that of the dependent variable. High accuracy is required because a small change in the inferred independent variable can seriously affect people. The usual statistical analysis assumes a normal error distribution or linearity of the data; for accurate calibration it is desirable to analyze the data without those assumptions. In this paper, we calibrate the data nonparametrically, without those assumptions, and derive a confidence interval estimate for the independent variable. As a method, we use the kernel method, which is popular in modern statistics, and derive a bootstrap confidence interval estimate from the bootstrap confidence band.
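
A minimal sketch of nonparametric calibration in the spirit of the abstract: a Nadaraya-Watson kernel regression is fitted, the calibration estimate is the grid value whose fitted response is closest to the new observation, and a percentile bootstrap gives an interval. This is not the paper's bootstrap-band construction; the kernel, bandwidth and data below are assumptions made for illustration.

```python
import numpy as np

def nw_regression(x, y, grid, bandwidth):
    """Nadaraya-Watson estimate of E[Y|X=x] on `grid` with a Gaussian kernel."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def calibrate(x, y, y0, bandwidth, grid):
    """Return the grid point whose fitted response is closest to y0."""
    fit = nw_regression(x, y, grid, bandwidth)
    return grid[np.argmin(np.abs(fit - y0))]

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 200))
y = np.log1p(x) + 0.05 * rng.normal(size=200)   # nonlinear calibration curve
grid = np.linspace(0, 10, 500)
y0 = 1.5                                        # new response to calibrate

# Percentile bootstrap interval for the calibrated x (illustrative only,
# not the bootstrap-band construction of the paper)
boot = []
for _ in range(500):
    idx = rng.integers(0, len(x), len(x))
    boot.append(calibrate(x[idx], y[idx], y0, bandwidth=0.5, grid=grid))
print(calibrate(x, y, y0, 0.5, grid), np.percentile(boot, [2.5, 97.5]))
```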


A Windowed-Total-Variation Regularization Constraint Model for Blind Image Restoration

  • Liu, Ganghua; Tian, Wei; Luo, Yushun; Zou, Juncheng; Tang, Shu
    • Journal of Information Processing Systems, v.18 no.1, pp.48-58, 2022
  • Blind restoration of motion-blurred images remains a research hotspot, and the key to blind restoration is accurate blur kernel (BK) estimation. To achieve high-quality blind image restoration, this paper presents a novel windowed-total-variation method. The proposed method is based on the spatial scale of edges rather than their amplitude, so it can extract image edges that are useful for accurate BK estimation and then recover high-quality clear images. A large number of experiments demonstrate its superiority.
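
The sketch below computes a simple windowed total-variation map (the sum of gradient magnitudes over a local window), which is the kind of quantity a windowed-TV constraint operates on. It is only a generic building block, not the paper's blind-restoration model; the window radius and test image are assumptions.

```python
import numpy as np

def windowed_tv(img, radius=2):
    """Windowed total variation: the sum of gradient magnitudes over a
    (2*radius+1)^2 neighbourhood of every pixel.  A generic sketch only;
    the paper builds a full blind-deconvolution model around such a map."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.abs(gx) + np.abs(gy)
    # Box-filter the gradient magnitudes by summing shifted copies
    padded = np.pad(mag, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy: radius + dy + img.shape[0],
                          radius + dx: radius + dx + img.shape[1]]
    return out

img = np.zeros((64, 64))
img[:, 32:] = 1.0                      # a single strong vertical edge
print(windowed_tv(img).max())
```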

Nonparametric Estimation of Discontinuous Variance Function in Regression Model

  • Kang, Kee-Hoon; Huh, Jib
    • Proceedings of the Korean Statistical Society Conference, 2002.11a, pp.103-108, 2002
  • We consider the estimation of a discontinuous variance function in a nonparametric heteroscedastic random-design regression model. We first propose estimators of the change point and jump size of the variance function and then construct an estimator of the entire variance function. We examine the rates of convergence of these estimators and give results on their asymptotics. Numerical work reveals that change-point analysis contributes significantly to the effectiveness of variance function estimation.
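
One way to illustrate the change-point idea above is to smooth squared residuals with left- and right-sided kernels and take the location of the largest discrepancy as the change-point estimate, with the discrepancy itself as the jump size. This is a simplified sketch, not the estimators of the paper; the mean function is assumed known here only to keep the example short.

```python
import numpy as np

def onesided_variance(x, r2, grid, h, side):
    """One-sided Nadaraya-Watson smooth of squared residuals r2:
    uses only data to the left (side=-1) or right (side=+1) of each grid point."""
    d = x[None, :] - grid[:, None]
    w = np.exp(-0.5 * (d / h) ** 2) * ((side * d) > 0)
    return (w * r2).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-12)

rng = np.random.default_rng(4)
n = 1000
x = np.sort(rng.uniform(0, 1, n))
sigma = np.where(x < 0.6, 0.2, 0.5)          # variance jumps at x = 0.6
y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=n)
r2 = (y - np.sin(2 * np.pi * x)) ** 2        # squared residuals from the known mean

grid = np.linspace(0.1, 0.9, 200)
left = onesided_variance(x, r2, grid, h=0.05, side=-1)
right = onesided_variance(x, r2, grid, h=0.05, side=+1)
k = np.argmax(np.abs(right - left))
print(grid[k], right[k] - left[k])           # change-point and jump-size estimates
```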


A Selection of High Pedestrian Accident Zones Using Traffic Accident Data and GIS: A Case Study of Seoul (교통사고 데이터와 GIS를 이용한 보행자사고 개선구역 선정 : 서울시를 대상으로)

  • Yang, Jong Hyeon; Kim, Jung Ok; Yu, Kiyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.34 no.3, pp.221-230, 2016
  • To establish objective criteria for high pedestrian accident zones, we combined Getis-Ord Gi* and kernel density estimation to select high pedestrian accident zones for the 54,208 pedestrian accidents recorded in Seoul from 2009 to 2013. By applying Getis-Ord Gi* and considering the spatial patterns in which pedestrian accident hot spots were clustered, this study identified high pedestrian accident zones. It then examined the microscopic distribution of accidents within those zones, identified the critical hot spots through kernel density estimation, and analyzed the inner distribution of the hot spots by identifying the areas with the highest density levels.
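
The kernel-density half of the workflow can be sketched as a grid-based Gaussian KDE over point locations, with the top few percent of grid cells flagged as hot spots. The Getis-Ord Gi* step is omitted, the points are synthetic, and the bandwidth and cutoff are illustrative assumptions, not values from the study.

```python
import numpy as np

def kde_grid(points, xgrid, ygrid, bandwidth):
    """Gaussian KDE of 2D point locations evaluated on a regular grid.
    Only the KDE half of the workflow; Getis-Ord Gi* is omitted."""
    gx, gy = np.meshgrid(xgrid, ygrid)
    cells = np.c_[gx.ravel(), gy.ravel()]
    d2 = ((cells[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    dens = np.exp(-0.5 * d2 / bandwidth ** 2).sum(axis=1)
    return dens.reshape(gy.shape)

# Synthetic accident locations clustered around two hypothetical intersections
rng = np.random.default_rng(5)
pts = np.vstack([rng.normal([2, 2], 0.3, (300, 2)),
                 rng.normal([7, 5], 0.5, (200, 2))])
xg = np.linspace(0, 10, 50)
yg = np.linspace(0, 8, 40)
density = kde_grid(pts, xg, yg, bandwidth=0.4)
hot = density > np.quantile(density, 0.95)   # top 5% of cells as hot spots
print(int(hot.sum()), "hot cells")
```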

Initialization of Fuzzy C-Means Using Kernel Density Estimation (커널 밀도 추정을 이용한 Fuzzy C-Means의 초기화)

  • Heo, Gyeong-Yong; Kim, Kwang-Baek
    • Journal of the Korea Institute of Information and Communication Engineering, v.15 no.8, pp.1659-1664, 2011
  • Fuzzy C-Means (FCM) is one of the most widely used clustering algorithms and has been applied successfully in many domains. However, FCM has some shortcomings, and initial prototype selection is one of them. As FCM is only guaranteed to converge to a local optimum, different initial prototypes result in different clusterings; therefore, the selection of the initial prototypes deserves care. In this paper, a new initialization method for FCM using kernel density estimation (KDE) is proposed to resolve the initialization problem. KDE can be used to estimate the data distribution nonparametrically and is useful for estimating local density. In the proposed method, after KDE, one initial prototype is placed at the densest region and the density of that region is reduced; by iterating this process, the set of initial prototypes is obtained. Experimental results demonstrate that the prototypes obtained in this way give better results than the randomly selected ones commonly used in FCM.
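
A minimal sketch of the initialization idea described above: estimate the density at every data point, place a prototype at the densest point, suppress the density around it, and repeat until the required number of prototypes is found. The suppression rule and bandwidth below are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def kde_init(data, n_clusters, bandwidth):
    """Pick initial FCM prototypes by repeatedly taking the densest data point
    and then suppressing the density around it (a sketch of the KDE idea;
    the parameter choices are illustrative, not the paper's)."""
    d2 = ((data[:, None, :] - data[None, :, :]) ** 2).sum(axis=2)
    density = np.exp(-0.5 * d2 / bandwidth ** 2).sum(axis=1)
    prototypes = []
    for _ in range(n_clusters):
        idx = int(np.argmax(density))
        prototypes.append(data[idx])
        # Reduce density near the chosen prototype so the next pick is elsewhere
        density -= density[idx] * np.exp(-0.5 * d2[idx] / bandwidth ** 2)
        density = np.maximum(density, 0.0)
    return np.array(prototypes)

rng = np.random.default_rng(6)
data = np.vstack([rng.normal(c, 0.2, (100, 2)) for c in ([0, 0], [2, 2], [0, 2])])
print(kde_init(data, n_clusters=3, bandwidth=0.3))
```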

Bandwidth selections based on cross-validation for estimation of a discontinuity point in density (교차타당성을 이용한 확률밀도함수의 불연속점 추정의 띠폭 선택)

  • Huh, Jib
    • Journal of the Korean Data and Information Science Society, v.23 no.4, pp.765-775, 2012
  • Cross-validation is a popular method for selecting the bandwidth in all types of kernel estimation. Maximum likelihood cross-validation, least squares cross-validation and biased cross-validation have been proposed for bandwidth selection in kernel density estimation. For the case in which the probability density function has a discontinuity point, Huh (2012) proposed a bandwidth selection method using maximum likelihood cross-validation. In this paper, two forms of cross-validation with a one-sided kernel function are proposed for selecting the bandwidth used to estimate the location and jump size of the discontinuity point of a density. These methods are motivated by least squares cross-validation and biased cross-validation. Simulated examples compare the finite-sample performance of the two proposed methods with that of Huh (2012).
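
For orientation, the sketch below implements the classical least squares cross-validation (LSCV) criterion that motivates one of the proposed methods, for an ordinary Gaussian kernel density estimate. The paper's versions replace the kernel with one-sided kernels to target the discontinuity point; that modification is not reproduced here, and the data and bandwidth grid are assumptions.

```python
import numpy as np

def lscv(x, h):
    """Least-squares cross-validation score for a Gaussian KDE with bandwidth h:
    integral of the squared estimate minus twice the mean leave-one-out density."""
    n = len(x)
    d = x[:, None] - x[None, :]
    phi = lambda u, s: np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))
    int_fhat_sq = phi(d, h * np.sqrt(2)).sum() / n ** 2
    loo = (phi(d, h).sum() - phi(0.0, h) * n) / (n * (n - 1))
    return int_fhat_sq - 2 * loo

rng = np.random.default_rng(7)
x = rng.exponential(size=400)                   # density with a jump at 0
hs = np.linspace(0.05, 0.6, 40)
print(hs[np.argmin([lscv(x, h) for h in hs])])  # LSCV-selected bandwidth
```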

A Study on Kernel Size Adaptation for Correntropy-based Learning Algorithms (코렌트로피 기반 학습 알고리듬의 커널 사이즈에 관한 연구)

  • Kim, Namyong
    • Journal of the Korea Academia-Industrial cooperation Society, v.22 no.2, pp.714-720, 2021
  • Information theoretic learning (ITL), which is based on kernel density estimation and has been applied successfully to machine learning and signal processing, has the drawback of severe sensitivity to the choice of kernel size. For maximization of the correntropy criterion (MCC), one of the ITL-type criteria, several methods have been studied that adapt the kernel size remaining after one term is removed. In this paper, it is shown that the main cause of the sensitivity in choosing the kernel size derives from that term, and that adaptive adjustment of the kernel size in the remaining terms drives it toward the absolute value of the error, which prevents the weight adjustment from continuing. Thus, it is proposed that choosing an appropriate constant as the kernel size for the remaining terms is more effective. Experimental results show that, compared with the conventional algorithm, the proposed method improves learning performance by about 2 dB of steady-state MSE at the same convergence rate. In an experiment with channel models, the proposed method improves performance by 4 dB, making it more suitable for more complex or adverse conditions.
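
A generic sketch of an MCC-trained adaptive linear filter, in which the LMS-style update is weighted by a Gaussian of the error with a fixed kernel size, illustrates the "constant kernel size" idea discussed above. It is not the paper's algorithm; the filter length, step size, kernel size and noise model are assumptions.

```python
import numpy as np

def mcc_lms(x, d, n_taps=5, mu=0.05, sigma=1.0):
    """Adaptive linear filter trained under the maximum correntropy criterion
    (MCC): the LMS-style update is weighted by a Gaussian of the error with a
    fixed kernel size `sigma`.  A generic sketch, not the paper's algorithm."""
    w = np.zeros(n_taps)
    errors = []
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]    # regressor [x[k], x[k-1], ...]
        e = d[k] - w @ u                     # prediction error
        w += mu * np.exp(-e ** 2 / (2 * sigma ** 2)) * e * u
        errors.append(e)
    return w, np.array(errors)

rng = np.random.default_rng(8)
x = rng.normal(size=5000)
true_w = np.array([0.6, -0.3, 0.2, 0.1, -0.05])
# Desired signal: filtered input plus heavy-tailed (impulsive) noise
d = np.convolve(x, true_w)[:len(x)] + 0.05 * rng.standard_t(df=2, size=len(x))
w, e = mcc_lms(x, d)
print(np.round(w, 2))                        # should approximate true_w
```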