• Title/Summary/Keyword: kernel distribution


ON A STABILITY OF PEXIDERIZED EXPONENTIAL EQUATION

  • Chung, Jae-Young
    • Bulletin of the Korean Mathematical Society
    • /
    • v.46 no.2
    • /
    • pp.295-301
    • /
    • 2009
  • We prove the Hyers-Ulam stability of a Pexiderized exponential equation for mappings f, g, h : $G{\times}S{\rightarrow}{\mathbb{C}}$, where G is an abelian group and S is a commutative semigroup which is divisible by 2. As an application, we obtain a stability theorem for the Pexiderized exponential equation in Schwartz distributions.

On the Plug-in Bandwidth Selectors in Kernel Density Estimation

  • Park, Byeong-Uk
    • Journal of the Korean Statistical Society
    • /
    • v.18 no.2
    • /
    • pp.107-117
    • /
    • 1989
  • A stronger result than that of Park and Marron (1994) is proved here on the asymptotic distribution of the plug-in bandwidth selector. The new result is that the plug-in bandwidth selector may have the rate of convergence $n^{-4/13}$ under weaker smoothness conditions on the unknown density function than those described in Park and Marron's paper. In addition, a class of plug-in bandwidth selectors is considered and their asymptotic distributions are given. Finally, some ideas for possible improvements to those plug-in bandwidth selectors are provided.

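For context on the bandwidth-selection problem the abstract addresses, here is a minimal kernel density estimate paired with Silverman's rule-of-thumb bandwidth. This is a deliberately simple stand-in, not the plug-in selectors analyzed in the paper (which estimate the density's roughness rather than assuming normality); all names are illustrative.

```python
import numpy as np

def silverman_bandwidth(x):
    """Rule-of-thumb bandwidth h = 0.9 * min(sd, IQR/1.34) * n^(-1/5).

    A simple stand-in for plug-in selectors, which replace the normal
    reference by an estimate of the density's roughness.
    """
    n = len(x)
    sd = np.std(x, ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)

def gaussian_kde(x, grid, h):
    """Kernel density estimate with a Gaussian kernel at the grid points."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
sample = rng.normal(size=500)
h = silverman_bandwidth(sample)
grid = np.linspace(-4, 4, 201)
density = gaussian_kde(sample, grid, h)
```

The estimated density integrates to approximately one over the grid, and the bandwidth shrinks at the $n^{-1/5}$ rate, slower than the $n^{-4/13}$ convergence rate of the selector itself discussed in the abstract.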

Weak Convergence of U-empirical Processes for Two Sample Case with Applications

  • Park, Hyo-Il;Na, Jong-Hwa
    • Journal of the Korean Statistical Society
    • /
    • v.31 no.1
    • /
    • pp.109-120
    • /
    • 2002
  • In this paper, we show the weak convergence of U-empirical processes for the two-sample problem. We use this result to show the asymptotic normality of the generalized Hodges-Lehmann estimates, together with the Bahadur representation for quantiles of U-empirical distributions. We also establish the asymptotic normality of the test statistics in a simple way.

Power Comparison between Methods of Empirical Process and a Kernel Density Estimator for the Test of Distribution Change (분포변화 검정에서 경험확률과정과 커널밀도함수추정량의 검정력 비교)

  • Na, Seong-Ryong;Park, Hyeon-Ah
    • Communications for Statistical Applications and Methods
    • /
    • v.18 no.2
    • /
    • pp.245-255
    • /
    • 2011
  • There are two nonparametric methods, based on empirical distribution functions and on probability density estimators, for testing a change in the distribution of data. In this paper we examine the two methods in detail and summarize the results of previous research. We assume several probability models to conduct a simulation study of change-point analysis and to examine the finite-sample behavior of the two methods. Empirical powers are compared to determine which method is better for each model.
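The empirical-process side of such a comparison can be sketched as a maximum two-sample Kolmogorov-Smirnov distance over candidate change points. This is a simplified illustration under assumed models, not the exact statistics studied in the paper.

```python
import numpy as np

def ecdf_change_statistic(x):
    """Max two-sample KS distance over candidate change points.

    For each split point k, compare the empirical CDFs of x[:k] and
    x[k:]; a large maximum suggests a change in distribution.
    """
    n = len(x)
    grid = np.sort(x)
    best = 0.0
    for k in range(20, n - 20):  # keep both segments non-trivial
        f1 = np.searchsorted(np.sort(x[:k]), grid, side="right") / k
        f2 = np.searchsorted(np.sort(x[k:]), grid, side="right") / (n - k)
        best = max(best, float(np.abs(f1 - f2).max()))
    return best

rng = np.random.default_rng(1)
no_change = rng.normal(size=200)
with_change = np.concatenate([rng.normal(size=100),
                              rng.normal(2.0, 1.0, size=100)])
stat0 = ecdf_change_statistic(no_change)
stat1 = ecdf_change_statistic(with_change)
```

With a mean shift of 2 standard deviations at the midpoint, the statistic for the changed series is clearly larger than for the stationary one, which is the kind of power difference the paper quantifies by simulation.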

Empirical analysis of strategy selection for the technology leading and technology catch-up in the IT industry

  • Byung-Sun Cho;Sang-Sup Cho;Sung-Sik Shin;Gang-hoon Kim
    • ETRI Journal
    • /
    • v.45 no.2
    • /
    • pp.267-276
    • /
    • 2023
  • R&D strategies of companies with low and high technological levels are discussed based on the concept of technology convergence and divergence. However, empirically detecting enterprise technology convergence in the distribution of enterprise technology (total productivity increase) over time and identifying the key change factors are challenging. This study used a novel statistical indicator that captures internal change in the technology distribution with a single number, measuring the technology distribution peak as a change in critical bandwidth for enterprise technology convergence, and presented it as evidence of technology convergence or divergence. Furthermore, this study applied a quantitative method for identifying technology convergence. Technology convergence appeared in the separation of the total productivity distribution of 69 Korean IT companies in 2019-2020 rather than in 2015-2016. Results indicated that when the total technological level was separated into technology leading and technology catch-up, IT companies were found to be pursuing R&D strategies for technology catch-up.

Comparison study on kernel type estimators of discontinuous log-variance (불연속 로그분산함수의 커널추정량들의 비교 연구)

  • Huh, Jib
    • Journal of the Korean Data and Information Science Society
    • /
    • v.25 no.1
    • /
    • pp.87-95
    • /
    • 2014
  • In the regression model, Kang and Huh (2006) studied the estimation of the discontinuous variance function using the Nadaraya-Watson estimator with squared residuals. The local linear estimator of the log-variance function, which may take any real value, was proposed by Huh (2013) based on the kernel-weighted local likelihood of the ${\chi}^2$-distribution. Chen et al. (2009) estimated the continuous variance function using a local linear fit with log-squared residuals. In this paper, we propose estimators of the discontinuous log-variance function itself, or of its derivative, based on Chen et al. (2009)'s estimator. Numerical studies investigate the performance of the estimators on simulated examples.
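A minimal sketch of the basic construction the abstract builds on, a Nadaraya-Watson smoother applied to squared residuals as in Kang and Huh (2006); the paper's local-likelihood log-variance estimators are more refined, and the example data here are synthetic.

```python
import numpy as np

def nw_variance(x, sq_resid, grid, h):
    """Nadaraya-Watson smoother applied to squared residuals.

    Estimates sigma^2(t) as a kernel-weighted average of squared
    residuals around each grid point.
    """
    u = (grid[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * u ** 2)  # Gaussian kernel weights
    return (w * sq_resid[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 400))
sigma = np.where(x < 0.5, 0.5, 1.5)  # variance function jumps at t = 0.5
resid = rng.normal(scale=sigma)
grid = np.array([0.25, 0.75])
est = nw_variance(x, resid ** 2, grid, h=0.08)
```

Evaluated away from the jump, the estimate recovers roughly 0.25 on the left segment and 2.25 on the right; near the discontinuity itself a plain smoother blurs the jump, which is the problem the paper's estimators address.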

Initialization of Fuzzy C-Means Using Kernel Density Estimation (커널 밀도 추정을 이용한 Fuzzy C-Means의 초기화)

  • Heo, Gyeong-Yong;Kim, Kwang-Baek
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.8
    • /
    • pp.1659-1664
    • /
    • 2011
  • Fuzzy C-Means (FCM) is one of the most widely used clustering algorithms and has been applied successfully in many areas. However, FCM has some shortcomings, and initial prototype selection is one of them. As FCM is only guaranteed to converge to a local optimum, different initial prototypes result in different clusterings. Therefore, much care should be given to the selection of initial prototypes. In this paper, a new initialization method for FCM using kernel density estimation (KDE) is proposed to resolve the initialization problem. KDE can be used to estimate a non-parametric data distribution and is useful for estimating local density. In the proposed method, after KDE, one initial point is placed at the densest region and the density of that region is then reduced. By iterating this process, the initial prototypes can be obtained. Experimental results demonstrate that the prototypes obtained in this way give better clustering results than the randomly selected prototypes commonly used in FCM.
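The initialization procedure described in the abstract (estimate density, place a prototype at the densest point, suppress density nearby, repeat) can be sketched as follows; the particular density-reduction rule and bandwidth used here are assumptions for illustration, not necessarily those in the paper.

```python
import numpy as np

def kde_init(data, c, h):
    """Pick c initial prototypes by repeatedly taking the densest point.

    Estimate a Gaussian-kernel density at every data point, place a
    prototype at the densest one, then multiplicatively suppress the
    density near it so the next prototype lands in another dense region.
    """
    d2 = ((data[:, None, :] - data[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * h ** 2)).sum(axis=1)
    prototypes = []
    for _ in range(c):
        i = int(np.argmax(density))
        prototypes.append(data[i])
        # reduce density in the neighbourhood of the chosen prototype
        density = density * (1.0 - np.exp(-d2[i] / (2 * h ** 2)))
    return np.array(prototypes)

rng = np.random.default_rng(3)
blob1 = rng.normal([0, 0], 0.3, size=(100, 2))
blob2 = rng.normal([5, 5], 0.3, size=(100, 2))
protos = kde_init(np.vstack([blob1, blob2]), c=2, h=0.5)
```

On two well-separated blobs the two prototypes land in different blobs, whereas random initialization can place both in the same blob and steer FCM into a poor local optimum.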

An Efficiency Assessment for Reflectance Normalization of RapidEye Employing BRD Components of Wide-Swath satellite

  • Kim, Sang-Il;Han, Kyung-Soo;Yeom, Jong-Min
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.3
    • /
    • pp.303-314
    • /
    • 2011
  • Surface albedo is an important parameter of the surface energy budget, and its accurate quantification is of major interest to the global climate modeling community. Therefore, in this paper, we consider the direct solution of kernel-based bidirectional reflectance distribution function (BRDF) models for retrieval of the normalized reflectance of a high-resolution satellite. BRD effects can be seen in wide-swath satellite data such as SPOT/VGT (VEGETATION), which provide sufficient angular sampling, but high-resolution satellites cannot obtain sufficient angular sampling over a pixel during a short period because of their narrow swath, which makes it difficult to run a semi-empirical BRDF model for their reflectance normalization. The principal purpose of this study is to estimate the normalized reflectance of a high-resolution satellite (RapidEye) through BRDF components from SPOT/VGT. We use a semi-empirical BRDF model to estimate the BRDF components from SPOT/VGT and to normalize the reflectance of RapidEye. This study used SPOT/VGT S1 (daily) data together with imagery from the multispectral RapidEye sensor. The isotropic value, which serves as the normalized reflectance, was closely related to the BRDF parameters and the kernels. We also show a scatter plot of the relationship between the SPOT/VGT and RapidEye isotropic values. A linear regression analysis between the two is performed using the SPOT/VGT parameters (isotropic, geometric, and volumetric scattering values) and the RapidEye kernel values (geometric and volumetric scattering kernels). Because BRDF parameters are difficult to calculate directly from high-resolution satellites, we use the BRDF parameters of SPOT/VGT. We also determine weightings for the geometric value, volumetric scattering value, and error through regression models.
As a result, the weighting obtained through linear regression analysis produced good agreement. For all sites, the SPOT/VGT and RapidEye isotropic values were highly correlated (in terms of RMSE and bias) and generally very consistent.
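The kernel-driven semi-empirical BRDF model referred to above is linear in its parameters, so given enough angular samples the components can be recovered by least squares. In the sketch below the kernel values are synthetic stand-ins, since computing actual geometric and volumetric kernels from viewing geometry is beyond this illustration.

```python
import numpy as np

def fit_brdf(reflectance, k_vol, k_geo):
    """Least-squares fit of the kernel-driven BRDF model.

    Semi-empirical form: R = f_iso + f_vol * K_vol + f_geo * K_geo.
    The isotropic parameter f_iso plays the role of the normalized
    reflectance discussed in the abstract; K_vol and K_geo are assumed
    to be precomputed from the viewing/illumination geometry.
    """
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    params, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
    f_iso, f_vol, f_geo = params
    return f_iso, f_vol, f_geo

# synthetic angular samples generated from known parameters
rng = np.random.default_rng(4)
k_vol = rng.uniform(-0.2, 0.6, 30)
k_geo = rng.uniform(-1.5, 0.0, 30)
true = (0.25, 0.10, 0.05)
refl = true[0] + true[1] * k_vol + true[2] * k_geo + rng.normal(0, 0.002, 30)
f_iso, f_vol, f_geo = fit_brdf(refl, k_vol, k_geo)
```

This also shows why the narrow swath is a problem: with only a handful of near-identical viewing angles the design matrix becomes ill-conditioned, motivating the paper's transfer of BRDF parameters from the wide-swath SPOT/VGT.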

Probabilistic Prediction of Estimated Ultimate Recovery in Shale Reservoir using Kernel Density Function (셰일 저류층에서의 핵밀도 함수를 이용한 확률론적 궁극가채량 예측)

  • Shin, Hyo-Jin;Hwang, Ji-Yu;Lim, Jong-Se
    • Journal of the Korean Institute of Gas
    • /
    • v.21 no.3
    • /
    • pp.61-69
    • /
    • 2017
  • The commercial development of unconventional gas is pursued in North America because technologies that improve productivity have made it feasible. Shale reservoirs have low permeability, and gas production is carried out through cracks generated by hydraulic fracturing. The decline rate is high during the initial production period but very low later on, and there are significant variations in initial production behavior. Therefore, prediction of the production rate using deterministic decline curve analysis (DCA) cannot account for the uncertainty in the production behavior. In this study, the production rate of the Eagle Ford shale is predicted by Arps hyperbolic and modified SEPD models. To minimize the uncertainty in predicting the Estimated Ultimate Recovery (EUR), Monte Carlo simulation is used for multi-well analysis. In addition, a kernel density function is applied to determine the probability distributions of the decline curve factors without any distributional assumption.
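The probabilistic workflow can be sketched by sampling Arps hyperbolic parameters and computing cumulative production for each draw. The parameter distributions below are illustrative assumptions, not fitted values; the paper derives them from well data via kernel density estimation.

```python
import numpy as np

def arps_hyperbolic_cum(qi, di, b, t):
    """Cumulative production of the Arps hyperbolic decline at time t.

    Rate:       q(t)  = qi / (1 + b*di*t)^(1/b)
    Cumulative: Np(t) = qi / ((1-b)*di) * (1 - (1 + b*di*t)^(1 - 1/b))
    valid for 0 < b < 1.
    """
    return qi / ((1.0 - b) * di) * (1.0 - (1.0 + b * di * t) ** (1.0 - 1.0 / b))

rng = np.random.default_rng(5)
n = 10000
# illustrative (assumed) parameter distributions for the Monte Carlo draw
qi = rng.lognormal(np.log(500.0), 0.2, n)   # initial rate
di = rng.uniform(0.5, 1.5, n)               # initial decline
b = rng.uniform(0.3, 0.9, n)                # hyperbolic exponent
eur = arps_hyperbolic_cum(qi, di, b, t=30.0)  # e.g. 30-year horizon
# petroleum convention: P90 is the value exceeded with 90% probability
p90, p50, p10 = np.percentile(eur, [10, 50, 90])
```

Reporting P90/P50/P10 from the simulated EUR distribution replaces the single number a deterministic DCA would give, which is the uncertainty quantification the abstract describes.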

Estimation of P(X > Y) when X and Y are dependent random variables using different bivariate sampling schemes

  • Samawi, Hani M.;Helu, Amal;Rochani, Haresh D.;Yin, Jingjing;Linder, Daniel
    • Communications for Statistical Applications and Methods
    • /
    • v.23 no.5
    • /
    • pp.385-397
    • /
    • 2016
  • Stress-strength models have been intensively investigated in the literature with regard to estimating the reliability ${\theta}$ = P(X > Y) using parametric and nonparametric approaches under different sampling schemes when X and Y are independent random variables. In this paper, we consider the problem of estimating ${\theta}$ when (X, Y) are dependent random variables with a bivariate underlying distribution. The empirical and kernel estimates of ${\theta}$ = P(X > Y) based on bivariate ranked set sampling (BVRSS) are considered when (X, Y) are paired dependent continuous random variables. The estimators obtained are compared to their counterparts under bivariate simple random sampling (BVSRS) via the bias and mean square error (MSE). We demonstrate that the suggested estimators based on BVRSS are more efficient than those based on BVSRS. A simulation study is conducted to gain insight into the performance of the proposed estimators. A real data example is provided to illustrate the process.
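The basic nonparametric estimator behind this comparison is simply the fraction of pairs with $x_i > y_i$. Below is a minimal sketch on simulated dependent data; the paper refines this with kernel smoothing and the BVRSS sampling scheme, neither of which is reproduced here.

```python
import numpy as np

def theta_hat(x, y):
    """Empirical estimate of theta = P(X > Y) from paired observations:
    the fraction of pairs with x_i > y_i."""
    return float(np.mean(x > y))

rng = np.random.default_rng(6)
# dependent pair via a shared component: X = Z + 1, Y = Z + noise,
# so theta = P(1 > noise) = Phi(1), about 0.841
z = rng.normal(size=5000)
x = z + 1.0
y = z + rng.normal(size=5000)
est = theta_hat(x, y)
```

Note that the shared component Z cancels inside the indicator, so a method assuming independence of X and Y would model the wrong distribution even though this simple paired estimator remains valid, which is why the dependent case needs separate treatment.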