• Title/Abstract/Keyword: Statistic Approach


A Simulation Approach for Testing Non-hierarchical Log-linear Models

  • Park, Hyun-Jip;Hong, Chong-Sun
    • Communications for Statistical Applications and Methods / v.6 no.2 / pp.357-366 / 1999
  • Assume that two different log-linear models have been selected by various model selection methods. When these models are non-hierarchical, it is not easy to choose between them. In this paper, the well-known Cox statistic is applied to compare such non-hierarchical log-linear models. Since an analytic solution to the problem cannot be obtained, we propose an alternative method by extending Pesaran and Pesaran's (1993) simulation approach. Empirical results show that the values of the proposed test statistic and the estimates are very stable.


Testing NRBU Class of Life Distributions Using a Goodness of Fit Approach

  • El-Arishy, S.M.;Diab, L.S.;Alim, N.A. Abdul
    • International Journal of Reliability and Applications / v.7 no.2 / pp.141-153 / 2006
  • In this paper, we present the U-statistic test for testing exponentiality against the new renewal better than used (NRBU) class, based on a goodness-of-fit approach. Selected critical values are tabulated for sample sizes n=5(1)30(10)50. The asymptotic Pitman relative efficiency with respect to the NRBU test given in Mahmoud et al. (2003) is studied. The power estimates of this test for some life distributions commonly used in reliability are also calculated. Some real examples are given to illustrate the use of the proposed test statistic in reliability analysis. The case of right-censored data is also handled.


Dynamic rt-VBR Traffic Characterization using Sub-Sum Constraint Function

  • 김중연;정재일
    • Proceedings of the IEEK Conference / 2000.11a / pp.217-220 / 2000
  • This paper studies real-time VBR traffic characterization. There are two major approaches to characterizing traffic: the statistical approach and the deterministic approach. This paper proposes a new constraint function, which we call the "Sub-Sum Constraint Function" (SSCF). The function is mainly based on the deterministic approach while also making use of statistics: it predicts and calculates the next rate from the information currently available about the stream. SSCF captures the intuition that traffic is bounded by a rate lower than its peak rate and closer to its long-term average rate. The model reduces the order of the constraint function to O(n), much lower than in other work. It can also be mapped onto a token bucket algorithm, which consists of r (token rate) and b (token depth); see the sketch after this entry. We use the concept of effective bandwidth (EB) to assess the utility of our function and to compare it with other techniques such as CBR and average-rate VBR. We simulated 21 multimedia sources to verify the utility of our function.

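The abstract above maps the SSCF onto a token bucket characterized by r (token rate) and b (token depth). As a minimal, hypothetical illustration of that (r, b) envelope only (the SSCF itself is not reproduced here), the following Python sketch checks whether a packet stream conforms to a token bucket; the function name and the (time, bytes) input format are assumptions.

```python
def token_bucket_conforms(arrivals, r, b):
    """Check whether a packet stream conforms to a token bucket (r, b):
    over any interval of length t, traffic may not exceed r*t + b.
    `arrivals` is an iterable of (time_seconds, size_bytes) pairs in time order.
    Hypothetical illustration of the (r, b) envelope only."""
    tokens, last_t = float(b), None
    for t, size in arrivals:
        if last_t is not None:
            tokens = min(b, tokens + r * (t - last_t))  # refill at rate r, capped at depth b
        last_t = t
        if size > tokens:
            return False  # this packet would violate the (r, b) envelope
        tokens -= size
    return True
```

For example, `token_bucket_conforms([(0.0, 1000), (0.1, 1200)], r=20000, b=1500)` returns True because the bucket refills fast enough between the two packets.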

Influence Measures for a Test Statistic on Independence of Two Random Vectors

  • Jung Kang-Mo
    • Communications for Statistical Applications and Methods / v.12 no.3 / pp.635-642 / 2005
  • In statistical diagnostics, a large number of influence measures have been proposed for identifying outliers and influential observations. However, there appear to be few accounts of influence diagnostics for test statistics. We study influence analysis of the likelihood ratio test statistic for whether two sets of variables are uncorrelated with one another. The influence of observations is measured using the case-deletion approach and the influence function. The proposed influence measures are compared through two illustrative examples.
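As a rough, hypothetical sketch of the setting described above (not the paper's actual measures), the Python snippet below computes the likelihood ratio statistic for independence of two random vectors via Wilks' Lambda and a simple case-deletion influence value for each observation. X and Y are assumed to be n-by-p and n-by-q data matrices, the function names are invented, and the plain change in the statistic is used as a stand-in influence measure.

```python
import numpy as np

def lr_independence(X, Y):
    """Likelihood ratio statistic for H0: X and Y are uncorrelated,
    based on Wilks' Lambda = |S| / (|S_XX| * |S_YY|) (no Bartlett correction)."""
    n, p = X.shape
    S = np.cov(np.hstack([X, Y]), rowvar=False)
    lam = np.linalg.det(S) / (np.linalg.det(S[:p, :p]) * np.linalg.det(S[p:, p:]))
    return -n * np.log(lam)

def case_deletion_influence(X, Y):
    """Influence of each observation, taken here as the change in the statistic
    when that observation is deleted (a stand-in for the paper's measures)."""
    full = lr_independence(X, Y)
    idx = np.arange(X.shape[0])
    return np.array([full - lr_independence(X[idx != i], Y[idx != i]) for i in idx])
```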

A Selectively Cumulative Sum(S-CUSUM) Control Chart (선택적 누적합(S-CUSUM) 관리도)

  • Lim, Tae-Jin
    • Journal of Korean Society for Quality Management / v.33 no.3 / pp.126-134 / 2005
  • This paper proposes a selectively cumulative sum (S-CUSUM) control chart for detecting shifts in the process mean. The basic idea of the S-CUSUM chart is to accumulate previous samples selectively in order to increase sensitivity. The S-CUSUM chart employs a threshold limit to determine whether or not to accumulate previous samples. Consecutive samples with control statistics outside the threshold limit are accumulated to calculate a standardized control statistic. If the control statistic falls within the threshold limit, only the next sample is used. During the whole sampling process, the S-CUSUM chart produces an 'out-of-control' signal either when any control statistic falls outside the control limit or when L consecutive control statistics fall outside the threshold limit. The number L is a decision variable called the 'control length'. A Markov chain approach is employed to describe the S-CUSUM sampling process. Formulae for the steady-state probabilities and the average run length (ARL) during an in-control state are derived in closed form. Some properties useful for designing statistical parameters are also derived, and a statistical design procedure for the S-CUSUM chart is proposed. Comparative studies show that the proposed S-CUSUM chart is uniformly superior to the CUSUM chart and the exponentially weighted moving average (EWMA) chart with respect to ARL performance.
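A minimal Python sketch of the selective-accumulation idea described above, assuming normally distributed subgroup means; the parameter names (k for the threshold limit, h for the control limit, L for the control length), the function name, and the exact standardization are assumptions, not the paper's specification.

```python
import numpy as np

def s_cusum_signals(xbars, mu0, sigma, n, k=1.0, h=3.0, L=5):
    """Hypothetical sketch of the S-CUSUM idea.
    xbars : sequence of subgroup means
    mu0, sigma : in-control mean and std. dev. of individual observations
    n : subgroup size
    k : threshold limit, h : control limit (both in standard-error units)
    L : control length (signal after L consecutive statistics beyond the threshold)"""
    se = sigma / np.sqrt(n)                     # standard error of one subgroup mean
    acc_sum, acc_cnt, run = 0.0, 0, 0
    signals = []
    for xbar in xbars:
        acc_sum += xbar - mu0
        acc_cnt += 1
        z = acc_sum / (se * np.sqrt(acc_cnt))   # standardized statistic of accumulated samples
        beyond = abs(z) > k
        run = run + 1 if beyond else 0
        signals.append(abs(z) > h or run >= L)  # out-of-control rule
        if not beyond:
            acc_sum, acc_cnt = 0.0, 0           # within threshold: start fresh at next sample
    return signals
```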

An Application of the Clustering Threshold Gradient Descent Regularization Method for Selecting Genes in Predicting the Survival Time of Lung Carcinomas

  • Lee, Seung-Yeoun;Kim, Young-Chul
    • Genomics & Informatics / v.5 no.3 / pp.95-101 / 2007
  • In this paper, we consider the variable selection methods in the Cox model when a large number of gene expression levels are involved with survival time. Deciding which genes are associated with survival time has been a challenging problem because of the large number of genes and relatively small sample size (n<

Bearing fault detection through multiscale wavelet scalogram-based SPC

  • Jung, Uk;Koh, Bong-Hwan
    • Smart Structures and Systems / v.14 no.3 / pp.377-395 / 2014
  • Vibration-based fault detection and condition monitoring of rotating machinery, using statistical process control (SPC) combined with statistical pattern recognition methodology, has been widely investigated by many researchers. In particular, the discrete wavelet transform (DWT) is considered a powerful tool for feature extraction when detecting faults in rotating machinery. Although DWT significantly reduces the dimensionality of the data, the number of retained wavelet features can still be significantly large. The use of standard multivariate SPC techniques is then not advised, because the sample covariance matrix is likely to be singular, so that the common multivariate statistics cannot be calculated. Even though many feature-based SPC methods have been introduced to tackle this deficiency, most require a parametric distributional assumption that restricts their feasibility to specific process control problems and thus limits their application. This study proposes a nonparametric multivariate control chart method, based on multiscale wavelet scalogram (MWS) features, that overcomes the limitation posed by the parametric assumption in existing SPC methods. The presented approach takes advantage of multi-resolution analysis using DWT and obtains MWS features with significantly low dimensionality. We calculate a Hotelling's $T^2$-type monitoring statistic using MWS, which has sufficient damage-discrimination ability. A bootstrap approach is used to determine the upper control limit of the monitoring statistic without any distributional assumption. Numerical simulations demonstrate the performance of the proposed control charting method under various damage-level scenarios for a bearing system.
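The following Python sketch illustrates the general pipeline described above under loose assumptions: wavelet detail energies stand in for the paper's MWS features (the exact definition may differ), a Hotelling T^2-type statistic is computed from baseline signals of equal length, and a simple bootstrap of the baseline statistics sets the upper control limit. PyWavelets is assumed to be available, and all function names are hypothetical.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def mws_features(signal, wavelet="db4", level=4):
    """Crude stand-in for MWS features: energy of the detail coefficients
    at each DWT decomposition level (the paper's definition may differ)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs[1:]])

def t2_bootstrap_chart(baseline_signals, new_signals, n_boot=2000, alpha=0.01, seed=0):
    """Hotelling T^2-type statistic on the features, with a bootstrap upper control limit."""
    rng = np.random.default_rng(seed)
    X = np.vstack([mws_features(s) for s in baseline_signals])
    mu = X.mean(axis=0)
    S_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    t2 = lambda f: float((f - mu) @ S_inv @ (f - mu))
    base_t2 = np.array([t2(x) for x in X])
    # bootstrap the (1 - alpha) quantile of the baseline T^2 values to set the UCL
    boot_q = [np.quantile(rng.choice(base_t2, size=base_t2.size, replace=True), 1 - alpha)
              for _ in range(n_boot)]
    ucl = float(np.mean(boot_q))
    flags = [t2(mws_features(s)) > ucl for s in new_signals]  # True = out-of-control signal
    return flags, ucl
```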

A PERMUTATION APPROACH TO THE BEHRENS-FISHER PROBLEM

  • Proschan, Michael A.; Dean A.
    • Journal of the Korean Statistical Society / v.33 no.1 / pp.79-97 / 2004
  • We propose a permutation approach to the classic Behrens-Fisher problem of comparing two means in the presence of unequal variances. It is motivated by the observation that a paired test is valid whether or not the variances are equal. Rather than using a single arbitrary pairing of the data, we average over all possible pairings. We do this in both a parametric and nonparametric setting. When the sample sizes are equal, the parametric version is equivalent to referral of the unpaired t-statistic to a t-table with half the usual degrees of freedom. The derivation provides an interesting representation of the unpaired t-statistic in terms of all possible pairwise t-statistics. The nonparametric version uses the same idea of considering all different pairings of data from the two groups, but applies it to a permutation test setting. Each pairing gives rise to a permutation distribution obtained by relabeling treatment and control within pairs. The totality of different mean differences across all possible pairings and relabelings forms the null distribution upon which the p-value is based. The conservatism of this procedure diminishes as the disparity in variances increases, disappearing completely when the ratio of the smaller to larger variance approaches 0. The nonparametric procedure behaves increasingly like a paired t-test as the sample sizes increase.
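As a small illustration of the parametric version stated above for equal sample sizes (referring the unpaired t-statistic to a t-table with half the usual degrees of freedom), here is a hedged Python sketch; the function name is an assumption, and the full pairing-based permutation procedure is not reproduced.

```python
import numpy as np
from scipy import stats

def half_df_ttest(x, y):
    """Equal sample sizes n assumed: the usual pooled unpaired t-statistic is
    referred to a t distribution with n - 1 degrees of freedom instead of 2(n - 1)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    assert y.size == n, "this sketch assumes equal sample sizes"
    pooled_var = (x.var(ddof=1) + y.var(ddof=1)) / 2.0   # pooled variance with equal n
    t = (x.mean() - y.mean()) / np.sqrt(2.0 * pooled_var / n)
    p = 2.0 * stats.t.sf(abs(t), df=n - 1)               # halved degrees of freedom
    return t, p
```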

Competition of Islamic Bank in Indonesia

  • Humairoh, Syafaqatul;Usman, Hardius
    • Journal of Distribution Science / v.14 no.6 / pp.39-44 / 2016
  • Purpose - This paper aims to study the competition that occurs in the Islamic banking industry and to analyze the variables that affect the total revenue of Islamic banking in Indonesia. Research Design, Data and Methodology - This study observed 10 Islamic banks for the period 2010-2013. The annual data are taken from Direktori Perbankan Indonesia, published by Bank Indonesia, and from the annual reports of the observed banks. In analyzing the data, the Panzar-Rosse approach was applied to identify the type of Islamic banking market, and a panel regression model was used to estimate the coefficients required by the Panzar-Rosse approach. Results - The estimated model shows that all the banking cost elements, such as the price of capital, the unit price of labor, and the unit price of funds, have a significant positive correlation with revenue as the dependent variable. The estimated value of the H-statistic for the period 2010-2013 is 0.69, which can be interpreted as indicating that the Islamic banking market in Indonesia exhibits monopolistic competition. The prices of capital and funds have a statistically significant effect on bank revenue. Conclusions - The study revealed that competition in the Indonesian Islamic banking market is monopolistic and that the major contribution to the H-statistic comes mainly from the price of funds.
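As a simplified, hypothetical sketch of how the H-statistic in the Panzar-Rosse approach is obtained (the study uses a panel regression; bank-level controls and fixed effects are omitted here), the snippet below fits a pooled log-log OLS regression of revenue on the three input prices and sums the price elasticities. Variable and function names are assumptions.

```python
import numpy as np

def h_statistic(revenue, p_funds, p_labor, p_capital):
    """Pooled log-log OLS sketch of the reduced-form revenue equation
        ln(revenue) = a + b1*ln(p_funds) + b2*ln(p_labor) + b3*ln(p_capital) + e,
    with H = b1 + b2 + b3.  Standard reading: H <= 0 suggests monopoly,
    0 < H < 1 monopolistic competition, H = 1 perfect competition."""
    revenue = np.asarray(revenue, float)
    X = np.column_stack([np.ones_like(revenue),
                         np.log(p_funds), np.log(p_labor), np.log(p_capital)])
    beta, *_ = np.linalg.lstsq(X, np.log(revenue), rcond=None)
    return float(beta[1] + beta[2] + beta[3])
```

Under this reading, an estimated H of 0.69 falls strictly between 0 and 1, which is why the abstract interprets the market as monopolistically competitive.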

Transformation Approach to Model Online Gaming Traffic

  • Shin, Kwang-Sik;Kim, Jin-Hyuk;Sohn, Kang-Min;Park, Chang-Joon;Choi, Sang-Bang
    • ETRI Journal / v.33 no.2 / pp.219-229 / 2011
  • In this paper, we propose a transformation scheme used to analyze online gaming traffic properties and develop a traffic model. We analyze the packet size and inter-departure time distributions of a popular first-person shooter game (Left 4 Dead) and a massively multiplayer online role-playing game (World of Warcraft) in order to compare them with an existing scheme. Recent online gaming traffic is erratically distributed, so it is very difficult to analyze. Therefore, our research focuses on a transformation scheme that obtains new, smooth patterns from a messy dataset. It extracts relatively heavily weighted density data and then transforms them into a corresponding dataset domain to obtain a simplified graph. We compare the analytical model histogram, the chi-square statistic, and the quantile-quantile plot of the proposed scheme with those of an existing scheme. The results show that the proposed scheme demonstrates a good fit in all parts. The chi-square statistic of our scheme for the Left 4 Dead packet size distribution is less than one ninth of that of the existing scheme when dealing with erratic traffic.
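As a rough illustration of one of the goodness-of-fit comparisons mentioned above, the following Python sketch computes a chi-square statistic between the histogram of observed packet sizes and a histogram produced by a traffic model; the binning choice and the function name are assumptions, not the paper's procedure.

```python
import numpy as np

def chi_square_fit(observed_sizes, model_sizes, bins=30):
    """Chi-square statistic between the histogram of observed packet sizes and a
    histogram generated from a traffic model (bin count is an arbitrary choice)."""
    observed_sizes = np.asarray(observed_sizes, float)
    model_sizes = np.asarray(model_sizes, float)
    edges = np.histogram_bin_edges(np.concatenate([observed_sizes, model_sizes]), bins=bins)
    obs, _ = np.histogram(observed_sizes, bins=edges)
    exp, _ = np.histogram(model_sizes, bins=edges)
    exp = exp * obs.sum() / exp.sum()      # scale expected counts to the observed total
    mask = exp > 0                         # skip bins with no expected mass
    return float(np.sum((obs[mask] - exp[mask]) ** 2 / exp[mask]))
```

A smaller value indicates a closer match between the model-generated and observed packet-size distributions, which is the sense in which the abstract compares the two schemes.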