• Title/Summary/Keyword: point estimator


Constrained Bayes and Empirical Bayes Estimator Applications in Insurance Pricing

  • Kim, Myung Joon; Kim, Yeong-Hwa
    • Communications for Statistical Applications and Methods / v.20 no.4 / pp.321-327 / 2013
  • Bayesian and empirical Bayes methods have become quite popular in the theory and practice of statistics. Often, however, the objective is to produce an ensemble of parameter estimates as well as the histogram of those estimates. In insurance pricing, for example, accurate point estimates of the risk of each group are necessary, and the dispersion of the estimates should also be modeled properly. The well-known Bayes estimates (the posterior means under quadratic loss) are underdispersed as an estimate of the histogram of the parameters. The adjustment of the Bayes estimates that corrects this problem yields the constrained Bayes estimators, which match the first two empirical moments. In this paper, we propose a way to apply constrained Bayes estimators in insurance pricing, where both location and dispersion must be estimated accurately. The benefit of the constrained Bayes estimates is also discussed by analyzing real insurance accident data.
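The moment-matching adjustment described in this abstract can be sketched numerically. Assuming posterior means and posterior variances are available for each group, a Ghosh-style constrained Bayes estimator rescales the posterior means about their average so the ensemble's spread matches the estimated second moment; the function name and toy numbers below are illustrative, not from the paper:

```python
from statistics import mean

def constrained_bayes(post_means, post_vars):
    """Rescale posterior means about their average so that the ensemble of
    estimates also matches the estimated second moment (a Ghosh-style
    constrained Bayes sketch); inputs are per-group posterior means and
    posterior variances."""
    m_bar = mean(post_means)
    ssd = sum((m - m_bar) ** 2 for m in post_means)
    a = (1.0 + sum(post_vars) / ssd) ** 0.5   # inflation factor, a >= 1
    return [m_bar + a * (m - m_bar) for m in post_means]

# toy example: five group risk estimates with equal posterior variances
cb = constrained_bayes([1.0, 1.2, 0.9, 1.5, 1.1], [0.04] * 5)
```

Note that the adjustment preserves the average of the estimates while strictly widening their spread, which is exactly the underdispersion fix the abstract describes.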

Point and interval estimation for a simple step-stress model with Type-I censored data from geometric distribution

  • Arefi, Ahmad; Razmkhah, Mostafa
    • Communications for Statistical Applications and Methods / v.24 no.1 / pp.29-41 / 2017
  • The problem of estimating the expected time to failure of units is studied in a discrete setup. A simple step-stress accelerated life test is considered with a Type-I censored sample from the geometric distribution, a distribution commonly used to model the lifetime of a device in the discrete case. Maximum likelihood estimators and their associated distributions are derived. Confidence intervals are constructed by exact, approximate and bootstrap approaches, which are compared via a simulation study. Optimal confidence intervals are suggested in view of the expected width and coverage probability criteria. An illustrative example explains the results of the paper, and some conclusions are stated.
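As a minimal illustration of the geometric likelihood involved, here is the closed-form MLE of the geometric parameter under Type-I censoring at a single stress level; the paper's model additionally involves two stress levels with a change point, which this sketch omits, and all names and numbers are illustrative:

```python
def geometric_mle_type1(failure_times, n_censored, tau):
    """MLE of the geometric parameter p from a Type-I censored sample:
    a unit failing at x <= tau contributes p*(1-p)**(x-1) to the
    likelihood; a unit still alive at tau contributes (1-p)**tau.
    Setting the score to zero gives the closed form
        p_hat = d / (sum of failure times + n_censored * tau),
    where d is the number of observed failures."""
    d = len(failure_times)
    return d / (sum(failure_times) + n_censored * tau)

# 4 failures observed before the censoring time tau = 5; 6 units censored
p_hat = geometric_mle_type1([1, 2, 2, 4], n_censored=6, tau=5)
```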

Local linear regression analysis for interval-valued data

  • Jang, Jungteak; Kang, Kee-Hoon
    • Communications for Statistical Applications and Methods / v.27 no.3 / pp.365-376 / 2020
  • Interval-valued data, a type of symbolic data, arise when an observation is given as an interval rather than a single value. Such data also occur frequently when large databases are aggregated into an easily managed form. Various regression methods for interval-valued data have been proposed relatively recently. In this paper, we introduce a nonparametric regression model using a kernel function and a nonlinear regression model for interval-valued data. We also propose applying the local linear regression model, one of the nonparametric methods, to interval-valued data. Simulations based on several distributions of the center point and the range are conducted for each of the methods presented in this paper, and they confirm under various conditions that the proposed local linear estimator performs better than the others.
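A local linear smoother of the kind the abstract refers to can be sketched in a few lines. One common way to handle interval-valued responses, assumed here purely for illustration, is to smooth the interval centers and ranges separately:

```python
import math

def local_linear(x, y, x0, h):
    """Local linear estimate of m(x0): a weighted least squares line fit
    around x0 with Gaussian kernel weights and bandwidth h."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    s0 = sum(w)
    s1 = sum(wi * (xi - x0) for wi, xi in zip(w, x))
    s2 = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, x))
    t0 = sum(wi * yi for wi, yi in zip(w, y))
    t1 = sum(wi * (xi - x0) * yi for wi, xi, yi in zip(w, x, y))
    # intercept of the locally fitted line, i.e. the estimate at x0
    return (s2 * t0 - s1 * t1) / (s0 * s2 - s1 ** 2)

def interval_fit(x_c, x_r, y_c, y_r, x0_c, x0_r, h):
    """Center-and-range style sketch: smooth centers and ranges separately.
    This is one common convention, not necessarily the paper's model."""
    return local_linear(x_c, y_c, x0_c, h), local_linear(x_r, y_r, x0_r, h)
```

A useful property for checking the code: the local linear fit reproduces exactly linear data with zero error, whatever the bandwidth.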

A Modified Target Costing Technique to Improve Product Quality from Cost Consideration

  • Wu, Hsin-Hung
    • International Journal of Quality Innovation / v.6 no.2 / pp.31-45 / 2005
  • The target costing technique, mathematically discussed by Sauers, uses only the $C_p$ index along with the Taguchi loss function and ${\bar{X}}-R$ control charts to set up goal control limits. The new specification limits derived from the Taguchi loss function are linked through the $C_p$ value to ${\bar{X}}-R$ control charts to obtain goal control limits. This study further considers the reflected normal loss function as well as the $C_{pk}$ index, along with its lower confidence bound, in forming goal control limits. By using the lower confidence bound in place of the point estimator of the $C_{pk}$ index, and the reflected normal loss function proposed by Spiring to measure the loss to society, this modified target costing technique becomes more robust and applicable in practice. Finally, an example illustrates how the modified technique works.
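The idea of replacing the $C_{pk}$ point estimator with a lower confidence bound can be sketched as follows. The bound used here is a Bissell-type normal approximation, which is an assumption of this sketch rather than necessarily the formula used in the paper, and the process data are invented:

```python
import math
from statistics import NormalDist, mean, stdev

def cpk_lower_bound(data, lsl, usl, alpha=0.05):
    """Point estimate of C_pk = min(USL - mu, mu - LSL) / (3 s) and an
    approximate 100(1-alpha)% lower confidence bound via a Bissell-type
    normal approximation (an assumption of this sketch)."""
    n = len(data)
    mu, s = mean(data), stdev(data)
    cpk = min(usl - mu, mu - lsl) / (3 * s)
    z = NormalDist().inv_cdf(1 - alpha)
    half = z * math.sqrt(1 / (9 * n * cpk ** 2) + 1 / (2 * (n - 1)))
    return cpk, cpk * (1 - half)

# toy process data with illustrative specification limits [9, 11]
cpk, lower = cpk_lower_bound(
    [10.1, 9.9, 10.0, 10.2, 9.8, 10.05, 9.95, 10.1, 9.9, 10.0], 9.0, 11.0)
```

Using `lower` rather than `cpk` in the goal-control-limit calculation builds in the sampling uncertainty of the capability estimate, which is the robustness argument the abstract makes.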

Smoothing Parameter Selection in Nonparametric Spectral Density Estimation

  • Kang, Kee-Hoon; Park, Byeong-U; Cho, Sin-Sup; Kim, Woo-Chul
    • Communications for Statistical Applications and Methods / v.2 no.2 / pp.231-242 / 1995
  • In this paper we consider a kernel-type estimator of the spectral density at a point in the analysis of stationary time series data. The kernel estimator entails the choice of a smoothing parameter called the bandwidth. A data-based bandwidth choice is proposed; it is obtained by solving an equation similar to that of Sheather (1986) for probability density estimation. A Monte Carlo study reveals that spectral density estimates using the data-based bandwidths show comparatively good performance.
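A kernel-smoothed periodogram of the kind described above can be sketched as follows. The bandwidth `h` is passed in by hand here, whereas the paper selects it from the data; the index-based Gaussian weights are an illustrative choice, not the paper's kernel:

```python
import math

def periodogram(x):
    """Raw periodogram I(w_j) of a mean-centered series at the Fourier
    frequencies w_j = 2*pi*j/n, j = 1..n//2 (direct O(n^2) computation)."""
    n = len(x)
    xb = sum(x) / n
    I = []
    for j in range(1, n // 2 + 1):
        w = 2 * math.pi * j / n
        c = sum((x[t] - xb) * math.cos(w * t) for t in range(n))
        s = sum((x[t] - xb) * math.sin(w * t) for t in range(n))
        I.append((c * c + s * s) / (2 * math.pi * n))
    return I

def smoothed_spectrum(I, j0, h):
    """Kernel-smoothed spectral estimate at list index j0: a Gaussian-
    weighted average of neighboring periodogram ordinates, bandwidth h
    in frequency-index units."""
    w = [math.exp(-0.5 * ((j - j0) / h) ** 2) for j in range(len(I))]
    return sum(wi * Ii for wi, Ii in zip(w, I)) / sum(w)
```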


Calibration by Median Regression

  • Jinsan Yang; Lee, Seung-Ho
    • Journal of the Korean Statistical Society / v.28 no.2 / pp.265-277 / 1999
  • Classical and inverse estimation are two well-known methods in statistical calibration problems. When there are outliers, both methods have large MSEs and cannot estimate the input value correctly. We suggest a median calibration estimator based on the LD-statistic. To investigate its robustness, the influence function of the median calibration estimator is calculated and compared with those of the other methods. When there are outliers in the response variable, the influence function is found to be bounded. In simulation studies, the MSEs of the calibration methods are compared, and the estimated inputs as well as the behavior of the influence functions are examined.
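To illustrate why a median-based fit resists response outliers, the sketch below uses the Theil-Sen median-of-slopes line as a simple robust stand-in (not the paper's LD-statistic estimator) and then inverts it at a new response, which is the classical calibration step:

```python
from statistics import median

def theil_sen(x, y):
    """Robust line fit: slope = median of all pairwise slopes, intercept =
    median residual.  A simple robust stand-in for illustration, not the
    LD-statistic median regression of the paper."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    b = median(slopes)
    a = median(yi - b * xi for xi, yi in zip(x, y))
    return a, b

def calibrate(a, b, y0):
    """Invert the fitted calibration line at a new response y0."""
    return (y0 - a) / b

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 4.0, 6.2, 8.1, 9.9, 30.0]   # last response is a gross outlier
a, b = theil_sen(x, y)
x0 = calibrate(a, b, 10.0)            # stays near the outlier-free answer
```

A least-squares fit to the same data would be dragged toward the outlier and give a badly biased calibrated input, which mirrors the bounded-influence point of the abstract.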


On an Information Theoretic Diagnostic Measure for Detecting Influential Observations in LDA

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society / v.25 no.2 / pp.289-301 / 1996
  • This paper suggests a new diagnostic measure for detecting influential observations in two-group linear discriminant analysis (LDA). It is developed from an information-theoretic point of view using the minimum discrimination information (MDI) methodology. The MDI estimator of the symmetric divergence of Kullback (1967) is taken as a measure of the power of discrimination in LDA. It is shown that the effect of an observation on the power of discrimination is fully explained by the diagnostic measure. The asymptotic distribution of the proposed measure is derived as a function of independent chi-squared and standard normal variables. By means of these distributions, a couple of methods are suggested for detecting influential observations in LDA, and their performance is examined through a simulation study.
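In the univariate case with a pooled variance, the symmetric divergence between two normal groups reduces to a squared standardized mean difference, and leave-one-out deletion gives a crude influence measure. The following is a one-dimensional sketch of that idea only; it is not the paper's MDI measure or its asymptotic distribution:

```python
from statistics import mean, variance

def sym_divergence(g1, g2):
    """Symmetric-divergence-style separation of two univariate normal
    groups with a pooled variance: the squared standardized mean gap."""
    n1, n2 = len(g1), len(g2)
    s2 = ((n1 - 1) * variance(g1) + (n2 - 1) * variance(g2)) / (n1 + n2 - 2)
    return (mean(g1) - mean(g2)) ** 2 / s2

def influence(g1, g2):
    """Change in the divergence when each observation of group 1 is
    deleted in turn -- a crude leave-one-out influence measure."""
    J = sym_divergence(g1, g2)
    return [J - sym_divergence(g1[:i] + g1[i + 1:], g2)
            for i in range(len(g1))]

g1 = [0.0, 0.1, -0.1, 0.05, 5.0]     # last observation deviates strongly
g2 = [2.0, 2.1, 1.9, 2.05, 1.95]
infl = influence(g1, g2)             # largest in magnitude at the outlier
```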


Goodness-of-Fit Test Based on Smoothing Parameter Selection Criteria

  • Kim, Jong-Tae
    • Communications for Statistical Applications and Methods / v.2 no.1 / pp.122-136 / 1995
  • The objective of this research is to investigate the problem of goodness-of-fit testing based on nonparametric density estimation with a data-driven smoothing parameter. The small- and large-sample properties of a new test statistic $\hat{\lambda_a}$ are investigated. The test statistic $\hat{\lambda_a}$ is itself a smoothing parameter, selected to minimize an estimated MISE for a truncated series estimator of the comparison density function. Therefore, the test statistic leads immediately to a point estimate of the density function in the event that $H_0$ is rejected. The limiting distribution of $\hat{\lambda_a}$ is obtained under the null hypothesis, and the test is shown to be consistent against fixed alternatives.


Estimators Which Shrink Towards a Point Other Than the Origin

  • Choi, An-Jae; Kim, Jai-Young
    • Journal of the military operations research society of Korea / v.12 no.1 / pp.106-121 / 1986
  • In this study, estimators which shrink towards a point other than the origin are developed. We discuss the harmonic mean of the observations as a reasonable center and develop estimators which shrink the MLE (${\delta}^{\circ}$) towards the harmonic mean under the loss function $L_1$. The percentage reduction in average loss (PRAL) of the derived estimators relative to the usual estimator (MLE) under the loss function $L_1$ is estimated for p=4 and 8. Computer simulation shows that the risk performance of the derived estimators is remarkably good when the means are greater than three.
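A shrinkage estimator of this general form can be sketched with a positive-part James-Stein rule centered at the harmonic mean. The unit-variance assumption and the toy data are illustrative, and the paper's estimator under the loss $L_1$ may differ in its shrinkage constant:

```python
def harmonic_mean(x):
    """Harmonic mean of positive observations."""
    return len(x) / sum(1.0 / xi for xi in x)

def shrink_toward(x, c, sigma2=1.0):
    """Positive-part James-Stein-type rule shrinking the MLE x toward the
    point c coordinate-wise; needs p >= 3 coordinates.  Unit variance is
    an assumption of this sketch."""
    p = len(x)
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    w = max(0.0, 1.0 - (p - 2) * sigma2 / d2)   # shrinkage weight in [0, 1]
    return [ci + w * (xi - ci) for xi, ci in zip(x, c)]

x = [4.0, 5.0, 6.0, 8.0]
hm = harmonic_mean(x)
est = shrink_toward(x, [hm] * len(x))   # each coordinate pulled toward hm
```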


Robust CUSUM test for time series of counts and its application to analyzing the polio incidence data

  • Kang, Jiwon
    • Journal of the Korean Data and Information Science Society / v.26 no.6 / pp.1565-1572 / 2015
  • In this paper, we analyze the polio incidence data based on Poisson autoregressive models, focusing particularly on change-point detection. Since the data include some strongly deviating observations, we employ the robust cumulative sum (CUSUM) test proposed by Kang and Song (2015) to test for parameter change. Contrary to the result of Kang and Lee (2014), our analysis indicates no significant change when the strongly robust CUSUM test is applied, and the same result is obtained after ridding the polio data of outliers. We additionally compare forecasting performance. All the results demonstrate that the robust CUSUM test performs adequately in the presence of apparent outliers.
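For contrast with the robust version used in the paper, a plain CUSUM statistic for a mean change in a count series can be sketched as follows; the robust test of Kang and Song (2015) additionally downweights outlying observations, which this sketch does not:

```python
import math

def cusum_stat(x):
    """Plain CUSUM statistic for a change in mean:
    T = max_k |sum_{i<=k} (x_i - xbar)| / (s * sqrt(n)),
    with s the sample standard deviation.  Large T points to a change."""
    n = len(x)
    xbar = sum(x) / n
    s = math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (n - 1))
    partial, best = 0.0, 0.0
    for xi in x:
        partial += xi - xbar
        best = max(best, abs(partial))
    return best / (s * math.sqrt(n))
```

A series with a level shift produces a much larger statistic than a stable series, which is the basic comparison underlying the change-point analysis described above.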