Title/Summary/Keyword: statistical approach

A Method to Determine the Final Importance of Customer Attributes Considering Statistical Significance (통계적 유의성을 고려하여 고객 요구속성의 중요도를 산정하는 방법)

  • Kim, Kyung-Mee O.
    • Journal of Korean Society for Quality Management, v.36 no.3, pp.1-12, 2008
  • Obtaining the accurate final importance of each customer attribute (CA) is very important in the house of quality (HOQ), because it is deployed into the quality of the final product or service through quality function deployment (QFD). The final importance is often calculated as the product of the relative importance rate and the competitive priority rate. Traditionally, the sample mean is used to estimate the two rates, but their dispersion is ignored. This paper proposes a new approach that incorporates statistical significance to account for the dispersion of the rates when determining the final importance of CAs. The approach is illustrated with a car-door design for both crisp and fuzzy numbers.
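
As a concrete illustration of the computation the abstract contrasts against, the sketch below computes a mean-based final importance and a simple dispersion-aware variant; the hypothetical ratings and the standard-error shrinkage are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Survey ratings for one customer attribute (hypothetical data).
relative_importance = np.array([4, 5, 4, 3, 5])   # importance ratings
competitive_priority = np.array([3, 4, 4, 4, 3])  # priority ratings

# Traditional estimate: product of the two sample means (dispersion ignored).
final_importance = relative_importance.mean() * competitive_priority.mean()
print(f"mean-based final importance: {final_importance:.2f}")

# Illustrative dispersion-aware variant: shrink each rate by its standard
# error before multiplying, so noisier ratings contribute less.
def shrunk_mean(x):
    return x.mean() - x.std(ddof=1) / np.sqrt(len(x))

adjusted = shrunk_mean(relative_importance) * shrunk_mean(competitive_priority)
print(f"dispersion-adjusted final importance: {adjusted:.2f}")
```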

On Evaluation of Bioequivalence for Highly Variable Drugs (변이가 큰 약물의 생물학적 동등성 평가에 관한 연구)

  • Jeong, Gyu-Jin;Park, Sang-Gue
    • The Korean Journal of Applied Statistics, v.24 no.6, pp.1055-1076, 2011
  • This paper reviews the definition of a highly variable drug (HVD), the present regulatory recommendations, and the approaches proposed in the literature to deal with the bioequivalence issues of HVDs. The concept and the statistical approach of scaled average bioequivalence (SABE) are introduced and discussed alongside the current regulatory methods. Recommendations for the SABE approach are proposed, and further study topics related to HVDs are also presented.
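
For orientation, a minimal sketch of the SABE idea discussed above: on the log scale, the usual bioequivalence limits ±ln(1.25) are widened in proportion to the reference drug's within-subject standard deviation. The summary statistics are invented, the constant ln(1.25)/0.25 follows the commonly cited FDA choice, and the point-estimate comparison stands in for the confidence-bound procedure regulators actually require.

```python
import numpy as np

# Hypothetical summary statistics on the natural-log scale.
mean_diff = 0.12   # estimated log-mean difference, test vs. reference
sigma_wr = 0.35    # within-subject SD of the reference drug (HVD if > ~0.294)

# Average bioequivalence (ABE): fixed limits +/- ln(1.25).
abe_limit = np.log(1.25)

# Scaled average bioequivalence (SABE): limits widen with sigma_wr,
# using the proportionality constant ln(1.25)/0.25.
k = np.log(1.25) / 0.25
sabe_limit = k * sigma_wr

print(f"ABE limits : +/-{abe_limit:.4f} -> pass: {abs(mean_diff) <= abe_limit}")
print(f"SABE limits: +/-{sabe_limit:.4f} -> pass: {abs(mean_diff) <= sabe_limit}")
```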

An Empirical Characteristic Function Approach to Selecting a Transformation to Normality

  • Yeo, In-Kwon;Johnson, Richard A.;Deng, XinWei
    • Communications for Statistical Applications and Methods, v.21 no.3, pp.213-224, 2014
  • In this paper, we study the problem of transforming data to normality. We propose estimating the transformation parameter by minimizing a weighted squared distance between the empirical characteristic function of the transformed data and the characteristic function of the normal distribution. Our approach also allows for other symmetric target characteristic functions. Asymptotics are established for a random sample selected from an unknown distribution. The proofs show that the weight function $t^{-2}$ needs to be modified to have thinner tails. We also propose a method to compute the influence function for M-equations taking the form of U-statistics. The influence function calculations and a small Monte Carlo simulation show that our estimates are less sensitive to a few outliers than the maximum likelihood estimates.
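
A rough sketch of the estimation idea, assuming the Yeo-Johnson family as the transformation and a Gaussian weight in place of the paper's modified $t^{-2}$ weight: choose the parameter minimizing the weighted squared distance between the empirical characteristic function of the standardized transformed data and the standard normal characteristic function $e^{-t^2/2}$.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import yeojohnson

def ecf_distance(lam, x, tgrid):
    # Transform and standardize the data.
    y = yeojohnson(x, lmbda=lam)
    z = (y - y.mean()) / y.std(ddof=1)
    # Empirical CF of z versus the standard normal CF exp(-t^2/2),
    # integrated with a thin-tailed Gaussian weight exp(-t^2).
    ecf = np.exp(1j * np.outer(tgrid, z)).mean(axis=1)
    ncf = np.exp(-tgrid**2 / 2)
    w = np.exp(-tgrid**2)
    return (np.abs(ecf - ncf)**2 * w).sum() * (tgrid[1] - tgrid[0])

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)      # skewed sample
tgrid = np.linspace(-5, 5, 201)
res = minimize_scalar(ecf_distance, bounds=(-2, 2), args=(x, tgrid),
                      method="bounded")
print(f"estimated transformation parameter: {res.x:.3f}")
```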

The restricted maximum likelihood estimation of a censored regression model

  • Lee, Seung-Chun
    • Communications for Statistical Applications and Methods, v.24 no.3, pp.291-301, 2017
  • It is well known that, in small samples, the maximum likelihood (ML) approach for variance components in the general linear model yields estimates that are biased downward; in particular, the ML estimate of residual variance tends to be too small. This underestimation, which has implications for the estimation of marginal effects and the asymptotic standard errors of estimates, appears to be more serious in some limited dependent variable models, as shown by several researchers. An alternative frequentist approach is restricted (or residual) maximum likelihood (REML), which accounts for the loss in degrees of freedom and gives an unbiased estimate of residual variance. Here, the REML estimator is derived for a censored regression model. In small samples, REML is shown to provide proper inference on the regression coefficients.
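
The degrees-of-freedom correction at the heart of REML is easiest to see in the ordinary (uncensored) linear model: ML divides the residual sum of squares by n while REML divides by n - p. The sketch below illustrates only this bias on simulated data, not the censored-regression derivation in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 4                                  # small sample, several predictors
X = rng.normal(size=(n, p))
beta = np.array([1.0, -0.5, 0.25, 2.0])
y = X @ beta + rng.normal(scale=1.0, size=n)  # true residual variance = 1

# OLS fit.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

sigma2_ml = resid @ resid / n          # ML: biased downward in small samples
sigma2_reml = resid @ resid / (n - p)  # REML: accounts for the df loss

print(f"ML estimate  : {sigma2_ml:.3f}")
print(f"REML estimate: {sigma2_reml:.3f}")
```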

Bayesian analysis of financial volatilities addressing long-memory, conditional heteroscedasticity and skewed error distribution

  • Oh, Rosy;Shin, Dong Wan;Oh, Man-Suk
    • Communications for Statistical Applications and Methods, v.24 no.5, pp.507-518, 2017
  • Volatility plays a crucial role in the theory and applications of asset pricing, optimal portfolio allocation, and risk management. This paper proposes a combined model of autoregressive fractionally integrated moving average (ARFIMA), generalized autoregressive conditional heteroscedasticity (GARCH), and a skewed-t error distribution to accommodate important features of volatility data: long memory, heteroscedasticity, and an asymmetric error distribution. A fully Bayesian approach is proposed to estimate the parameters of the model simultaneously, which yields parameter estimates satisfying the necessary constraints in the model. The approach can be easily implemented using the free and user-friendly software JAGS to generate Markov chain Monte Carlo samples from the joint posterior distribution of the parameters. The method is illustrated using a daily volatility index from the Chicago Board Options Exchange (CBOE). JAGS code for the model specification is provided in the Appendix.
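
As a minimal sketch of one ingredient, the GARCH(1,1) conditional-variance recursion h_t = ω + α r²_{t-1} + β h_{t-1} is shown below on a placeholder return series; the paper's full model adds ARFIMA long memory and a skewed-t error and is estimated by MCMC in JAGS, none of which is reproduced here.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}."""
    h = np.empty_like(returns)
    h[0] = returns.var()                     # a common initialization choice
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t-1]**2 + beta * h[t-1]
    return h

rng = np.random.default_rng(2)
r = rng.normal(scale=0.01, size=500)         # placeholder return series
h = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)
print(f"last conditional volatility: {np.sqrt(h[-1]):.5f}")
```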

Reject Inference of Incomplete Data Using a Normal Mixture Model

  • Song, Ju-Won
    • The Korean Journal of Applied Statistics, v.24 no.2, pp.425-433, 2011
  • Reject inference in credit scoring is a statistical approach to adjusting for the nonrandom sample bias due to rejected applicants. Function estimation approaches are based on the assumption that rejected applicants need not be included in the estimation when the missing data mechanism is missing at random. On the other hand, the density estimation approach using mixture models indicates that reject inference should include rejected applicants in the model. When mixture models are chosen for reject inference, the data are often assumed to follow a normal distribution. If the data include missing values, applying the normal mixture model only to fully observed cases may introduce another sample bias. We extend reject inference with a multivariate normal mixture model to handle incomplete characteristic variables. A simulation study shows that including incomplete characteristic variables outperforms the function estimation approaches.
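
To make the density-estimation route concrete, the sketch below fits a two-component normal mixture to accepted and rejected applicants together and scores the rejects by posterior component membership; scikit-learn's GaussianMixture handles complete cases only, so the paper's extension to incomplete characteristic variables is not shown, and all data are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical characteristic variables: goods and bads differ in mean.
goods = rng.normal(loc=[1.0, 1.0], scale=0.8, size=(300, 2))
bads = rng.normal(loc=[-1.0, -1.0], scale=0.8, size=(150, 2))
rejected = rng.normal(loc=[-0.5, -0.5], scale=1.0, size=(100, 2))

# Reject inference via density estimation: include rejects in the fit.
X = np.vstack([goods, bads, rejected])
gm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Posterior probability that each rejected applicant belongs to each component.
post = gm.predict_proba(rejected)
print("mean posterior membership of rejects:", post.mean(axis=0).round(3))
```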

A Case Study of Six Sigma Application on Market Analysis (식스시그마를 응용한 시장분석 사례 연구)

  • Choi, Gyoung-Seok;Yun, Won-Young
    • IE interfaces, v.15 no.4, pp.409-425, 2002
  • This case study provides a market analysis methodology for overseas markets by applying statistical tools and the Six Sigma approach. The study suggests a seven-step procedure to improve a brand's position in the market: interviewing consumers and floor salesmen of stores, surveying, analyzing the correlation between brand position and customer satisfaction, analyzing the relationship between companies and customer satisfaction factors, analyzing the customer satisfaction gap between companies, evaluating the importance of customer satisfaction factors, and suggesting enhancements to brand position. The procedure uses the Define, Measure, and Analyze phases of the full Six Sigma D-M-A-I-C cycle (Define, Measure, Analyze, Improve, Control). Minitab and SAS are used for the statistical analysis.
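
A small sketch of the Analyze-phase computations named in the procedure, with pandas standing in for the Minitab/SAS analyses used in the paper; the survey scores and factor names are invented.

```python
import pandas as pd

# Hypothetical survey results: satisfaction scores per factor and company.
df = pd.DataFrame({
    "company": ["A", "A", "A", "B", "B", "B"],
    "price":   [3.8, 4.0, 3.9, 3.2, 3.4, 3.3],
    "design":  [4.2, 4.1, 4.3, 3.9, 4.0, 3.8],
    "service": [3.5, 3.6, 3.4, 4.1, 4.2, 4.0],
    "overall": [4.0, 4.1, 4.0, 3.7, 3.9, 3.6],
})

# Correlation of each satisfaction factor with overall satisfaction
# (a proxy for the factor's importance).
print(df[["price", "design", "service", "overall"]].corr()["overall"])

# Satisfaction gap between companies on each factor.
print(df.groupby("company")[["price", "design", "service"]].mean().diff())
```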

On the Interval Estimation of the Difference between Independent Proportions with Rare Events

  • Kim, Yongdai;Choi, Daewoo
    • Communications for Statistical Applications and Methods, v.7 no.2, pp.481-487, 2000
  • When we construct an interval estimate of the difference of two independent proportions with rare events, the standard approach based on the normal approximation behaves badly in many cases. The problem becomes more severe when no successes are observed in either group. In this paper, we compare by simulation two alternative methods of constructing a confidence interval for the difference of two independent proportions: one based on the profile likelihood, the other a Bayesian probability interval. We show that the Bayesian interval estimator is easy to implement and performs almost identically to the best frequentist method, the profile likelihood approach.
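
A sketch of the Bayesian probability interval compared in the paper: with independent Beta priors the posteriors of the two proportions are Beta, so an interval for p1 - p2 follows from posterior simulation. The Jeffreys Beta(1/2, 1/2) prior below is a common default and an assumption here, not necessarily the paper's choice.

```python
import numpy as np

rng = np.random.default_rng(4)

def bayes_interval(x1, n1, x2, n2, level=0.95, draws=100_000):
    """Equal-tailed posterior interval for p1 - p2 under Beta(1/2, 1/2) priors."""
    p1 = rng.beta(x1 + 0.5, n1 - x1 + 0.5, size=draws)
    p2 = rng.beta(x2 + 0.5, n2 - x2 + 0.5, size=draws)
    alpha = 1 - level
    return np.quantile(p1 - p2, [alpha / 2, 1 - alpha / 2])

# Rare events: no successes in either group, where the Wald interval collapses to [0, 0].
lo, hi = bayes_interval(x1=0, n1=50, x2=0, n2=60)
print(f"95% Bayesian interval for p1 - p2: [{lo:.4f}, {hi:.4f}]")
```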

Computer graphics approach to two-way ANOVA (컴퓨터 그래픽스에 의한 이원 분산분석)

  • Huh, Moon-Yul (허문열)
    • The Korean Journal of Applied Statistics, v.8 no.1, pp.75-87, 1995
  • The computer graphics approach is a powerful tool when we want to explore the effects of changing part of the data, or of altering the characteristics of the statistical model currently employed. The paper describes methods to implement dynamic graphics for the process of analysis of variance, and methods to graphically represent ANOVA-type data. The paper then describes dynamic graphics software developed by the author for the two-way ANOVA model.
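
The paper's contribution is interactive software; as a static stand-in, the sketch below fits a two-way ANOVA with interaction on invented data using statsmodels, producing the kind of table such software would let the analyst explore dynamically.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(5)
# Balanced two-way layout with invented factor levels.
df = pd.DataFrame({
    "a": np.repeat(["a1", "a2"], 12),
    "b": np.tile(np.repeat(["b1", "b2", "b3"], 4), 2),
})
effect = {"a1": 0.0, "a2": 1.5}
df["y"] = df["a"].map(effect) + rng.normal(size=len(df))

# Two-way ANOVA with interaction.
model = ols("y ~ C(a) * C(b)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```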

Simple Recursive Approach for Detecting Spatial Clusters

  • Kim, Jeongjin;Chung, Younshik;Ma, Sungjoon;Yang, Tae Young
    • Communications for Statistical Applications and Methods, v.12 no.1, pp.207-216, 2005
  • A binary segmentation procedure is a simple recursive approach to detecting clusters and providing inferences for the study space when the shape and the number of the clusters are unknown. The procedure involves a sequence of nested hypothesis tests of a single cluster versus a pair of distinct clusters. The size and the shape of the clusters evolve as the procedure proceeds. The procedure allows for various growth clusters and for arbitrary baseline densities, which govern the form of the hypothesis tests. Real tree data are used to illustrate the procedure.
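
A one-dimensional caricature of binary segmentation, assuming normal data and a BIC-style stopping penalty in place of the paper's nested hypothesis tests: each segment is recursively tested as one cluster versus two, splitting at the best change point while the split improves the fit enough.

```python
import numpy as np

def best_split(x):
    """Return the split index minimizing the pooled within-segment RSS."""
    n = len(x)
    costs = [((x[:k] - x[:k].mean())**2).sum() + ((x[k:] - x[k:].mean())**2).sum()
             for k in range(2, n - 1)]
    k = int(np.argmin(costs)) + 2
    return k, costs[k - 2]

def segment(x, offset=0, out=None):
    """Recursive binary segmentation: one cluster versus two at each step."""
    if out is None:
        out = []
    n = len(x)
    if n < 6:
        return out
    rss0 = ((x - x.mean())**2).sum()       # single-cluster fit
    k, rss1 = best_split(x)                # best two-cluster fit
    # Accept the split only if the improvement exceeds a BIC-style penalty.
    if n * (np.log(rss0 / n) - np.log(rss1 / n)) > 2 * np.log(n):
        out.append(offset + k)
        segment(x[:k], offset, out)
        segment(x[k:], offset + k, out)
    return sorted(out)

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
print("detected change points:", segment(x))
```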