• Title/Summary/Keyword: adaptive kernel-density estimation

Search Result 9

Adaptive Kernel Density Estimation

  • Faraway, Julian;Jhun, Myoungshic
    • Communications for Statistical Applications and Methods
    • /
    • v.2 no.1
    • /
    • pp.99-111
    • /
    • 1995
  • It is shown that adaptive kernel methods can potentially produce density estimates superior to those of the fixed-bandwidth method. In using the adaptive estimates, problems pertaining to the initial choice of the estimate can be solved by iteration, and the approach is recommended for a variety of distributions. Data-based methods for the choice of the parameters are suggested based on a simulation study.
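The iterative scheme this abstract describes can be illustrated with a generic Abramson-style adaptive KDE in Python. This is a minimal sketch under assumed conventions (Gaussian kernel, square-root local bandwidth factors, geometric-mean normalization), not the paper's own implementation:

```python
import numpy as np

def adaptive_kde(data, x_grid, h0, n_iter=2):
    """Abramson-style adaptive KDE: local bandwidths shrink where the
    pilot density is high; iterating refines the pilot estimate."""
    def kde(x, centers, bw):
        # Gaussian kernel estimate with a per-center bandwidth array
        u = (x[:, None] - centers[None, :]) / bw
        return np.mean(np.exp(-0.5 * u**2) / (bw * np.sqrt(2 * np.pi)), axis=1)

    bw = np.full(len(data), float(h0))      # fixed-bandwidth pilot
    for _ in range(n_iter):
        pilot = kde(data, data, bw)         # pilot density at each sample
        g = np.exp(np.mean(np.log(pilot)))  # geometric mean of the pilot
        bw = h0 * np.sqrt(g / pilot)        # Abramson square-root factors
    return kde(x_grid, data, bw)
```

Regions of high pilot density get narrower kernels, sharpening peaks, while sparse tails get wider kernels; this is the sense in which an adaptive estimate can outperform a fixed bandwidth.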


Historical Study on Density Smoothing in Nonparametric Statistics (비모수 통계학에서 밀도 추정의 평활에 관한 역사적 고찰)

  • Lee Seung-Woo
    • Journal for History of Mathematics
    • /
    • v.17 no.2
    • /
    • pp.15-20
    • /
    • 2004
  • We investigate unbiasedness and consistency as statistical properties of density estimators. We present the histogram, kernel density estimation, and locally adaptive smoothing as density-smoothing methods. Early and recent research on nonparametric density estimation is also described and discussed.


A novel reliability analysis method based on Gaussian process classification for structures with discontinuous response

  • Zhang, Yibo;Sun, Zhili;Yan, Yutao;Yu, Zhenliang;Wang, Jian
    • Structural Engineering and Mechanics
    • /
    • v.75 no.6
    • /
    • pp.771-784
    • /
    • 2020
  • Reliability analysis techniques combined with various surrogate models have attracted increasing attention because of their accuracy and great efficiency. However, they primarily focus on structures with continuous response, while very little research has been carried out on reliability analysis for structures with discontinuous response. Furthermore, existing adaptive reliability analysis methods based on importance sampling (IS) still have some intractable defects when dealing with small failure probabilities, and there is no related research on reliability analysis for structures involving both discontinuous response and small failure probability. Therefore, this paper proposes a novel reliability analysis method, AGPC-IS, for such structures, which combines adaptive Gaussian process classification (GPC) with adaptive-kernel-density-estimation-based IS. In AGPC-IS, an efficient adaptive strategy for the design of experiments (DoE), which takes into consideration the classification uncertainty, the sampling uniformity, and the improvement of regional classification accuracy, is developed to improve the accuracy of the Gaussian process classifier. Adaptive kernel density estimation is introduced to construct the quasi-optimal density function for IS. In addition, a novel and more precise stopping criterion is developed from the perspective of the stability of the failure probability estimate. The efficiency, superiority, and practicability of AGPC-IS are verified through three examples.

On Teaching of Computer-Software Field Using Smoothing Methodology (평활 방법론이 적용될 수 있는 컴퓨터-소프트웨어 교육분야 제안)

  • Lee Seung-Woo
    • Journal for History of Mathematics
    • /
    • v.19 no.3
    • /
    • pp.113-122
    • /
    • 2006
  • We investigate the mathematical background, the statistical methodology, and the teaching of the computer-software field using smoothing methodology. We also examine the concepts and methodology of the histogram, the kernel density estimator, the adaptive kernel estimator, and bandwidth selection, grounded in mathematics and statistics.


Comparison Study of Kernel Density Estimation according to Various Bandwidth Selectors (다양한 대역폭 선택법에 따른 커널밀도추정의 비교 연구)

  • Kang, Young-Jin;Noh, Yoojeong
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.32 no.3
    • /
    • pp.173-181
    • /
    • 2019
  • Kernel density estimation (KDE) is widely used to estimate a probability distribution function from experimental data, especially when the data are insufficient. The distribution estimated with KDE depends on the bandwidth selector, which can smooth or overfit the kernel estimator to the experimental data. In this study, various bandwidth selectors, such as Silverman's rule of thumb, a rule using adaptive estimates, and the oversmoothing rule, were compared for accuracy and conservativeness. Statistical simulations were carried out using assumed true models, including unimodal and multimodal distributions, and the accuracy and conservativeness of the estimated distribution functions were compared across various data sets. In addition, it was verified how distributions estimated with KDE under different bandwidth selectors affect reliability analysis results, through simple reliability examples.
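Of the selectors compared in this abstract, Silverman's rule of thumb has a simple closed form. A minimal Python sketch of the standard textbook version (not the authors' code) is:

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb for a Gaussian kernel:
    h = 0.9 * min(sample std, IQR / 1.34) * n^(-1/5)."""
    n = len(x)
    iqr = np.subtract(*np.percentile(x, [75, 25]))  # interquartile range
    scale = min(np.std(x, ddof=1), iqr / 1.34)      # robust spread estimate
    return 0.9 * scale * n ** (-0.2)
```

The min with IQR/1.34 guards against heavy tails or outliers inflating the standard deviation, but the rule still tends to oversmooth multimodal data, which is one reason comparisons across selectors like the one above matter.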

An efficient reliability analysis strategy for low failure probability problems

  • Cao, Runan;Sun, Zhili;Wang, Jian;Guo, Fanyi
    • Structural Engineering and Mechanics
    • /
    • v.78 no.2
    • /
    • pp.209-218
    • /
    • 2021
  • In engineering, there are two major challenges in reliability analysis. First, to ensure the accuracy of simulation results, mechanical products are usually defined implicitly by complex numerical models that require time-consuming evaluations. Second, mechanical products are fortunately designed with a large safety margin, which leads to a low failure probability. This paper proposes an efficient and high-precision adaptive active-learning algorithm based on the Kriging surrogate model to deal with problems having low failure probability and time-consuming numerical models. To handle problems with multiple failure regions, adaptive kernel density estimation is introduced and improved. Meanwhile, a new criterion for selecting points based on the current Kriging model is proposed to improve computational efficiency. The criterion for choosing the best sampling points considers not only the probability that the Kriging model misjudges the sign of the response value at a point but also the distribution information at that point. To prevent the selected training points from being too close to one another, the correlation between training points is limited, avoiding information redundancy and improving the computational efficiency of the algorithm. Finally, the efficiency and accuracy of the proposed method are verified against other algorithms through two academic examples and one engineering application.

Lagged Cross-Correlation of Probability Density Functions and Application to Blind Equalization

  • Kim, Namyong;Kwon, Ki-Hyeon;You, Young-Hwan
    • Journal of Communications and Networks
    • /
    • v.14 no.5
    • /
    • pp.540-545
    • /
    • 2012
  • In this paper, the lagged cross-correlation of two probability density functions constructed by kernel density estimation is proposed, and adaptive filtering algorithms for supervised and unsupervised training that maximize the proposed function are introduced. Simulation results for blind-equalization applications in multipath channels with impulsive and slowly varying direct-current (DC) bias noise show that the Gaussian kernel of the proposed algorithm cuts out the large errors caused by impulsive noise, and that the output affected by the DC bias noise can be effectively controlled by the lag $\tau$ intrinsically embedded in the proposed function.
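The quantity described, a cross-correlation of two KDE-built densities evaluated at a lag, can be sketched with grid-based numerical integration. Gaussian kernels, a shared bandwidth h, and all function and parameter names here are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def pdf_cross_correlation(a, b, grid, h, tau=0.0):
    """C(tau) = integral of f_a(x) * f_b(x + tau) dx, where f_a and f_b
    are Gaussian kernel density estimates built from samples a and b."""
    def kde(samples, x):
        u = (x[:, None] - samples[None, :]) / h
        return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

    fa = kde(a, grid)        # f_a on the grid
    fb = kde(b, grid + tau)  # f_b evaluated at the lag-shifted points
    return np.sum(fa * fb) * (grid[1] - grid[0])  # Riemann-sum integral
```

The correlation is largest when the two densities overlap; shifting one of them by the lag before integrating is what gives a handle on constant offsets such as a DC bias.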

Adaptive Key-point Extraction Algorithm for Segmentation-based Lane Detection Network (세그멘테이션 기반 차선 인식 네트워크를 위한 적응형 키포인트 추출 알고리즘)

  • Sang-Hyeon Lee;Duksu Kim
    • Journal of the Korea Computer Graphics Society
    • /
    • v.29 no.1
    • /
    • pp.1-11
    • /
    • 2023
  • Deep-learning-based image segmentation is one of the most widely employed lane-detection approaches, and it requires a post-process to extract the key points on the lanes. A common approach to key-point extraction uses a fixed, user-defined threshold. However, finding the best threshold is a manual process requiring much effort, and the best value can differ depending on the target data set (or image). We propose a novel key-point extraction algorithm that automatically adapts to the target image without any manual threshold setting. In our adaptive key-point extraction algorithm, we propose a line-level normalization method to clearly distinguish the lane region from the background. Then, we extract a representative key point for each lane at each line (row of an image) using kernel density estimation. To assess the benefits of our approach, we applied our method to two lane-detection data sets, TuSimple and CULane. As a result, our method achieved up to 1.80%p higher accuracy and 17.27% lower distance error between the ground-truth key point and the predicted point than a fixed threshold.
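The per-row KDE step can be sketched generically: smooth a row's lane-probability profile with a Gaussian kernel over column indices and take the mode as the representative key point. This omits the paper's line-level normalization, and the names and bandwidth are illustrative:

```python
import numpy as np

def row_keypoint(prob_row, bandwidth=3.0):
    """Return the representative column for one image row: the mode of a
    probability-weighted 1-D Gaussian KDE over column indices."""
    cols = np.arange(len(prob_row), dtype=float)
    u = (cols[:, None] - cols[None, :]) / bandwidth
    density = np.exp(-0.5 * u**2) @ prob_row  # each column votes with its probability
    return int(np.argmax(density))
```

Unlike a fixed threshold, the mode of the smoothed profile needs no per-image tuning: scaling the whole row's probabilities up or down leaves the argmax unchanged.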

A Study on Kernel Size Adaptation for Correntropy-based Learning Algorithms (코렌트로피 기반 학습 알고리듬의 커널 사이즈에 관한 연구)

  • Kim, Namyong
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.2
    • /
    • pp.714-720
    • /
    • 2021
  • ITL (information-theoretic learning) based on kernel density estimation has been successfully applied to machine learning and signal processing, but it has the drawback of severe sensitivity to the choice of kernel size. For maximization of the correntropy criterion (MCC), one of the ITL-type criteria, several methods have been studied that adapt the kernel size remaining after one term is removed. In this paper, it is shown that the main cause of the sensitivity in choosing the kernel size derives from that term, and that adaptively adjusting the kernel size in the remaining terms makes it approach the absolute value of the error, which prevents the weight adjustment from continuing. It is therefore proposed that choosing an appropriate constant as the kernel size for the remaining terms is more effective. Experimental results show that, compared with the conventional algorithm, the proposed method improves learning performance by about 2 dB of steady-state MSE at the same convergence rate. In an experiment with channel models, the proposed method improves performance by 4 dB, making it more suitable for more complex or inferior conditions.
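The abstract's recommendation, a constant kernel size in the correntropy cost, can be illustrated with a textbook-style MCC stochastic-gradient step for a linear filter. This is a generic sketch, not the paper's exact algorithm:

```python
import numpy as np

def mcc_step(w, x, d, sigma=1.0, mu=0.01):
    """One stochastic-gradient ascent step on the correntropy criterion
    E[exp(-e^2 / (2 sigma^2))] for a linear filter, with constant sigma."""
    e = d - w @ x                       # instantaneous error
    g = np.exp(-e**2 / (2 * sigma**2))  # Gaussian kernel of the error
    return w + mu * g * e * x           # kernel-weighted LMS-like update
```

The factor g shrinks toward zero for large errors, which is what gives MCC its robustness to impulsive noise; keeping sigma constant avoids the stalling behavior the paper attributes to adapting it.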