• Title/Summary/Keyword: two-sample problem


Regression analysis of doubly censored failure time data with frailty

  • Kim Yang-Jin
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2004.11a
    • /
    • pp.243-248
    • /
    • 2004
  • The timings of two successive events of interest may not be directly measurable; instead, each may be right censored or interval censored, a data structure called doubly censored data. In the study of HIV, two such events are infection with HIV and the onset of AIDS. These data have been analyzed by authors under the assumption that the infection time and the induction time are independent. This paper investigates the regression problem when the two events are modeled to allow both a possible relation between them and a subject-specific effect. We derive an estimation procedure based on Goetghebeur and Ryan's (2000) piecewise exponential model, and Gauss-Hermite integration is applied in the EM algorithm. Simulation studies are performed to investigate the small-sample properties, and the method is applied to a set of doubly censored data from an AIDS cohort study.

  • PDF
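The Gauss-Hermite step the abstract mentions can be illustrated as follows: a minimal sketch, not the paper's estimator, of approximating an expectation over a standard-normal frailty by Gauss-Hermite quadrature.

```python
import numpy as np

# E[g(b)] for b ~ N(0,1) via Gauss-Hermite quadrature:
# E[g(b)] = (1/sqrt(pi)) * sum_k w_k * g(sqrt(2) * x_k),
# where (x_k, w_k) are the Gauss-Hermite nodes and weights.
def gauss_hermite_expectation(g, n_points=20):
    x, w = np.polynomial.hermite.hermgauss(n_points)
    return np.sum(w * g(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

# Sanity check: E[b^2] = 1 for a standard normal frailty.
print(gauss_hermite_expectation(lambda b: b**2))
```

In an EM algorithm for a frailty model, the same rule would approximate the integral over the unobserved frailty in the E-step.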

A Study for the Unit Nonresponse Calibration using Two-Phase Sampling Method

  • Yum, Joon Keun;Jung, Young Mee
    • Communications for Statistical Applications and Methods
    • /
    • v.9 no.2
    • /
    • pp.479-489
    • /
    • 2002
  • Applying two-phase sampling to stratification and nonresponse problems is a powerful and effective technique. In this paper we study the calibration estimator and its variance estimator for the population total under two-phase sampling, according to the availability of auxiliary information, for the population and the sample, that is strongly correlated with the variable of interest in a unit-nonresponse situation. Auxiliary information available at both the first and second phases of sampling can be used to improve the weights through the calibration procedure. A weight corresponding to the product of the sampling weight and the response probability is calculated at each phase of sampling.
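The calibration idea above can be sketched in its simplest linear (GREG-type) form: adjust design weights so the weighted total of one auxiliary variable matches its known population total. The names and single-auxiliary setup are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

# Linear calibration of design weights d against one auxiliary variable x
# with known population total t_x: w_i = d_i * (1 + lam * x_i), where lam
# is chosen so that sum(w_i * x_i) == t_x exactly.
def calibrate_weights(d, x, t_x):
    lam = (t_x - np.dot(d, x)) / np.dot(d, x * x)
    return d * (1.0 + lam * x)

d = np.array([10.0, 10.0, 10.0, 10.0])   # design weights
x = np.array([1.0, 2.0, 3.0, 4.0])       # auxiliary values
w = calibrate_weights(d, x, t_x=120.0)
print(np.dot(w, x))  # 120.0, the benchmark total
```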

Active Learning on Sparse Graph for Image Annotation

  • Li, Minxian;Tang, Jinhui;Zhao, Chunxia
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.10
    • /
    • pp.2650-2662
    • /
    • 2012
  • Due to the semantic gap issue, the performance of automatic image annotation is still far from satisfactory. Active learning approaches provide a possible solution to this problem by selecting the most effective samples for users to label for training. One of the key research points in active learning is how to select the most effective samples. In this paper, we propose a novel active learning approach based on a sparse graph. Compared with existing active learning approaches, the proposed method selects samples based on two criteria: uncertainty and representativeness. Representativeness indicates the contribution of a sample's label when propagated to the other samples, a criterion the existing approaches do not take into consideration. Extensive experiments show that bringing the representativeness criterion into the sample selection process can significantly improve active learning effectiveness.
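The two selection criteria can be sketched as below. The entropy-based uncertainty, the degree-style representativeness, and the equal weighting are all assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

# Score each unlabeled sample by (a) prediction uncertainty (entropy of the
# class probabilities) and (b) representativeness (total edge weight of the
# sample in a sparse similarity graph), then pick the top-scoring samples.
def select_samples(probs, graph, n_select=2, alpha=0.5):
    eps = 1e-12
    uncertainty = -np.sum(probs * np.log(probs + eps), axis=1)
    representativeness = graph.sum(axis=1)
    score = alpha * uncertainty + (1 - alpha) * representativeness
    return np.argsort(score)[::-1][:n_select]

probs = np.array([[0.5, 0.5], [0.99, 0.01], [0.6, 0.4]])
graph = np.array([[0.0, 1.0, 1.0], [0.2, 0.0, 0.0], [0.5, 0.0, 0.0]])
print(select_samples(probs, graph))  # samples 0 and 2 score highest
```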

Structure of Data Fusion and Nonlinear Statistical Track Data Fusion in Cooperative Engagement Capability (협동교전능력을 위한 자료융합 구조와 비선형 통계적 트랙 융합 기법)

  • Jung, Hyoyoung;Byun, Jaeuk;Lee, Saewoom;Kim, Gi-Sung;Kim, Kiseon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39C no.1
    • /
    • pp.17-27
    • /
    • 2014
  • As the importance of Cooperative Engagement Capability and network-centric warfare has been dramatically increasing, it is necessary to develop distributed tracking systems, which in turn require tracking filters and data fusion theory for nonlinear systems. Therefore, in this paper, the problem of nonlinear track fusion, which is suitable for distributed networks, is formulated; four algorithms to solve it are introduced; and the performance of the introduced algorithms is analyzed. The main difficulty of nonlinear track fusion is that the cross-covariances among multiple platforms are unknown. To address it, two techniques are introduced: a simplification technique and an approximation technique. The simplification technique, which allows the cross-covariances to be ignored, includes two algorithms, the sample mean algorithm and the Millman formula algorithm; the approximation technique obtains approximate cross-covariances through two approaches, analytical linearization and statistical linearization based on the sigma-point approach. In simulations, BCS fusion is the most efficient scheme because it reduces RMSE by approximating the cross-covariances with low complexity.
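The Millman-style combination the abstract names, with cross-covariances ignored, reduces to inverse-covariance weighting of the two local tracks. A minimal sketch, not the paper's full algorithm:

```python
import numpy as np

# Fuse two local track estimates (x1, P1) and (x2, P2), assuming zero
# cross-covariance: P = (P1^-1 + P2^-1)^-1, x = P (P1^-1 x1 + P2^-1 x2).
def fuse_tracks(x1, P1, x2, P2):
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1i + P2i)
    x = P @ (P1i @ x1 + P2i @ x2)
    return x, P

x1, P1 = np.array([1.0, 0.0]), np.eye(2)
x2, P2 = np.array([3.0, 0.0]), np.eye(2)
x, P = fuse_tracks(x1, P1, x2, P2)
print(x)  # [2. 0.] -- equal covariances give the midpoint
```

With equal covariances the fused state is the sample mean of the two tracks, and the fused covariance is halved, which is exactly why ignoring cross-covariances can be optimistic when the tracks are in fact correlated.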

Quality Control of Two Dimensions Using Digital Image Processing and Neural Networks (디지털 영상처리와 신경망을 이용한 2차원 평면 물체 품질 제어)

  • Kim, Jin-Hwan;Seo, Bo-Hyeok;Park, Seong-Wook
    • Proceedings of the KIEE Conference
    • /
    • 2004.07d
    • /
    • pp.2580-2582
    • /
    • 2004
  • In this paper, a neural network (NN) based approach for the classification of two-dimensional images is presented; the proposed algorithm can be applied in actual industry. The described diagnostic algorithm detects surface failures on tiles. Data for digital image processing can be obtained in several ways: here, the tiles are scanned, and the digital images are preprocessed and classified using neural networks. It is important to reduce the amount of input data with problem-specific preprocessing. An auto-associative neural network is used for feature generation and selection, while a probabilistic neural network is used for classification. The proposed algorithm is evaluated experimentally using one hundred real tile images. The histogram of each preprocessed sample image serves as the input: the auto-associative neural network compresses the input data, and the compressed data are classified by the probabilistic neural network. The reference classes of the sample images are determined by human inspectors, so human subjectivity intervenes; however, digital image processing and neural networks classify better than humans, which makes the approach very useful for quality control improvement.

  • PDF
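The probabilistic-neural-network classification step can be sketched as a Parzen-window classifier: each class's score is a Gaussian-kernel density estimate from its training samples. The bandwidth and the toy data are assumptions for illustration.

```python
import numpy as np

# Probabilistic neural network (Parzen-window) classifier: score each class
# by the average Gaussian kernel between x and that class's training samples,
# then return the class with the highest score.
def pnn_classify(x, train_x, train_y, sigma=0.5):
    scores = {}
    for c in np.unique(train_y):
        diffs = train_x[train_y == c] - x
        scores[c] = np.mean(np.exp(-np.sum(diffs**2, axis=1) / (2 * sigma**2)))
    return max(scores, key=scores.get)

train_x = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
train_y = np.array([0, 0, 1, 1])   # e.g. 0 = good tile, 1 = defective
print(pnn_classify(np.array([0.0, 0.5]), train_x, train_y))  # 0
```

In the paper's pipeline the feature vector fed to this classifier would be the compressed histogram produced by the auto-associative network.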

A Study on the Application of Asynchronous Team Theory for QVC and Security Assessment in a Power System (전력계통의 무효전력 제어 및 안전도 평가를 위한 Asynchronous Team 이론의 적용에 관한 연구)

  • 김두현;김상철
    • Journal of the Korean Society of Safety
    • /
    • v.12 no.3
    • /
    • pp.67-75
    • /
    • 1997
  • This paper presents a study on the application of Asynchronous Team (A-Team) theory for QVC (reactive power control) and security assessment in a power system. The reactive power control problem is that of optimally establishing voltage levels given reactive power sources; it is very important for supplying demand without interruption and requires methods that alleviate bus voltage limit violations more quickly. It can be formulated as a mixed-integer linear programming (MILP) problem without deteriorating solution accuracy to any significant extent. Security assessment estimates the relative robustness of the system; a deterministic approach based on AC load flow calculations is adopted to assess it, especially voltage security, and a distance measure is introduced as a measurement of voltage security. In order to analyze the above two problems, reactive power control and static security assessment, in an integrated fashion, a new organizational structure called an A-Team is adopted. An A-Team is well suited to the development of computer-based, multi-agent systems for the operation of large-scale power systems. To verify the usefulness of the suggested scheme, a modified IEEE 30-bus system is employed as a sample system, and the results of a case study are presented.

  • PDF

Unbiasedness or Statistical Efficiency: Comparison between One-stage Tobit of MLE and Two-step Tobit of OLS

  • Park, Sun-Young
    • International Journal of Human Ecology
    • /
    • v.4 no.2
    • /
    • pp.77-87
    • /
    • 2003
  • This paper constructs statistical and econometric models on the basis of economic theory in order to discuss statistical efficiency and unbiasedness, including the problem of correcting sample selection bias. The comparative analytical tools were the one-stage Tobit estimated by maximum likelihood and Heckman's two-step Tobit estimated by ordinary least squares. Regarding the adequacy of the model for the analysis of demand and choice, the results showed no large difference in explanatory variables between the first-stage selection model and the second-stage linear probability model. Since lambda, the self-selectivity correction factor, is not statistically significant in the Type II Tobit, there is no self-selectivity in the Type II Tobit model, indicating that the Type I Tobit model, a less complicated statistical method, would explain demand and choice better than the Type II model.
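Heckman's two-step procedure compared in the abstract can be sketched on simulated data: a probit selection equation first, then OLS on the selected observations augmented with the inverse Mills ratio (the lambda term whose significance is tested). All variable names and the data-generating process are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                  # selection covariate
x = rng.normal(size=n)                  # outcome covariate
u = rng.normal(size=n)                  # selection error
select = (0.5 + 1.0 * z + u > 0)        # selection rule
# Outcome error is independent of u here, so true lambda coefficient is 0.
y = 1.0 + 2.0 * x + rng.normal(size=n)

# Step 1: probit of selection on z by maximum likelihood.
def neg_loglik(b):
    p = norm.cdf(b[0] + b[1] * z)
    return -np.sum(select * np.log(p + 1e-12) + (~select) * np.log(1 - p + 1e-12))

b = minimize(neg_loglik, x0=[0.0, 0.0]).x
idx = b[0] + b[1] * z
lam = norm.pdf(idx) / norm.cdf(idx)     # inverse Mills ratio

# Step 2: OLS of y on x and lambda, using selected observations only.
X = np.column_stack([np.ones(select.sum()), x[select], lam[select]])
beta = np.linalg.lstsq(X, y[select], rcond=None)[0]
print(beta)  # slope on x near 2; lambda coefficient near 0
```

An insignificant lambda coefficient, as in this simulation, is exactly the situation the abstract describes: the simpler Type I Tobit would then be preferred.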

A Multivariate Calibration Procedure When the Standard Measurement is Also Subject to Error (표준 측정치의 오차를 고려한 다변량 계기 교정 절차)

  • Lee, Seung-Hoon
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.19 no.2
    • /
    • pp.35-41
    • /
    • 1993
  • Statistical calibration is a useful technique for achieving compatibility between two different measurement methods, and it usually consists of two steps: (1) estimation of the relationship between the standard and nonstandard measurements, and (2) prediction of future standard measurements using the estimated relationship and observed nonstandard measurements. A predictive multivariate errors-in-variables model is presented for the multivariate calibration problem in which the standard as well as the nonstandard measurements are subject to error. For the estimation of the relationship between the two measurements, the maximum likelihood (ML) estimation method is considered. It is shown that the direct and the inverse predictors for the future unknown standard measurement are the same under ML estimation. Based upon large-sample approximations, the mean square error of the predictor is derived.

  • PDF
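The two calibration steps can be illustrated in the simplest univariate, error-free setting (the paper's errors-in-variables ML treatment is more general): fit the relationship on standards, then inverse-predict a standard value from a new nonstandard reading.

```python
import numpy as np

# Step 1: estimate the line relating standard x to nonstandard measurement y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # standard measurements
y = 0.5 + 2.0 * x                         # nonstandard readings (noise-free toy)
b, a = np.polyfit(x, y, 1)                # slope, intercept

# Step 2: inverse prediction of the unknown standard value from a new y0.
def inverse_predict(y0):
    return (y0 - a) / b

print(inverse_predict(6.5))  # 3.0
```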

Envelope empirical likelihood ratio for the difference of two location parameters with constraints of symmetry

  • Kim, Kyoung-Mi;Zhou, Mai
Proceedings of the Korean Data and Information Science Society Conference
    • /
    • 2002.06a
    • /
    • pp.51-73
    • /
    • 2002
  • The empirical likelihood ratio method is a new technique in nonparametric inference developed by A. Owen (1988, 2001). In some situations the empirical likelihood is difficult to define. As such a case in point, we discuss how to define a modified empirical likelihood for the location of symmetry, using well-known points of symmetry as side conditions. The side condition of symmetry is defined through a finite subset of the infinite set of constraints. The modified empirical likelihood under symmetry studied in this paper constructs a constrained parameter space $\Theta^+$ of distributions by imposing the known symmetry as side information. We show that the usual asymptotic theory (Wilks' theorem) still holds for the empirical likelihood ratio on the constrained parameter space, and the asymptotic distribution of the empirical NPMLE of the difference of two symmetric points is obtained.

  • PDF
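Owen's basic construction, without the paper's symmetry side conditions, can be sketched for a one-dimensional mean: solve for the Lagrange multiplier and evaluate -2 log R(mu), which Wilks' theorem makes asymptotically chi-square(1).

```python
import numpy as np
from scipy.optimize import brentq

# Empirical likelihood ratio for a mean mu: weights p_i maximize prod(n*p_i)
# subject to sum(p_i * (x_i - mu)) = 0, giving p_i = 1/(n*(1 + lam*d_i))
# with d_i = x_i - mu and lam solving sum(d_i / (1 + lam*d_i)) = 0.
def neg2_log_elr(x, mu):
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf  # mu lies outside the convex hull of the data
    # lam must keep every 1 + lam*d_i positive; bracket the root accordingly.
    lo = -1.0 / d.max() + 1e-10
    hi = -1.0 / d.min() - 1e-10
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log(1.0 + lam * d))
```

At the sample mean the multiplier is zero and the statistic vanishes; moving mu away from the sample mean makes the statistic strictly positive.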

Test procedures for the mean and variance simultaneously under normality

  • Park, Hyo-Il
    • Communications for Statistical Applications and Methods
    • /
    • v.23 no.6
    • /
    • pp.563-574
    • /
    • 2016
  • In this study, we propose several simultaneous tests to detect differences between means and variances for the two-sample problem when the underlying distribution is normal. For this, we apply the likelihood ratio principle and propose a likelihood ratio test. We then consider a union-intersection test after identifying the likelihood statistic, a product of two individual likelihood statistics, to test the individual sub-null hypotheses. Noting that the union-intersection test can be considered a simultaneous test with a combination function, we also propose simultaneous tests with combination functions that combine individual tests for each sub-null hypothesis. We apply the permutation principle to obtain the null distributions. We then provide an example to illustrate the proposed procedure and compare the efficiency of the proposed tests through a simulation study. We discuss some interesting features of the simultaneous tests as concluding remarks. Finally, we show the expression of the likelihood ratio statistic as a product of two individual likelihood ratio statistics.
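The likelihood ratio statistic for testing equal mean and equal variance of two normal samples, with a permutation null as the abstract describes, can be sketched as follows. The MLE (ddof = 0) variances and the permutation count are standard choices, not necessarily the paper's exact implementation.

```python
import numpy as np

# -2 log LR for H0: both samples come from the same normal distribution,
# against different means and/or variances (all variances are MLEs).
def lr_stat(x, y):
    n1, n2 = len(x), len(y)
    z = np.concatenate([x, y])
    v0 = z.var()                     # pooled variance under H0
    v1, v2 = x.var(), y.var()        # separate variances under H1
    return (n1 + n2) * np.log(v0) - n1 * np.log(v1) - n2 * np.log(v2)

# Permutation null: reshuffle group labels and recompute the statistic.
def permutation_pvalue(x, y, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    z = np.concatenate([x, y])
    obs = lr_stat(x, y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(z)
        if lr_stat(z[:len(x)], z[len(x):]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)
```

A mean shift, a variance change, or both inflate the same statistic, which is what makes the test simultaneous for the two sub-null hypotheses.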