• Title/Summary/Keyword: inverse probability weight

Estimating causal effect of multi-valued treatment from observational survival data

  • Kim, Bongseong; Kim, Ji-Hyun
    • Communications for Statistical Applications and Methods / v.27 no.6 / pp.675-688 / 2020
  • In survival analysis of observational data, the inverse probability weighting method and the Cox proportional hazards model are widely used to estimate the causal effects of a multi-valued treatment. In this paper, we examine two kinds of weights used in the inverse probability weighting method. We explain why the stabilized weight is more appropriate when inverse probability weighting is applied with the generalized propensity score. We also emphasize that the marginal hazard ratio and the conditional hazard ratio should be distinguished when the hazard ratio is defined as a treatment effect under the Cox proportional hazards model. A simulation study based on real data is conducted to provide concrete numerical evidence.
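
To make the stabilized-weight idea above concrete, the sketch below computes stabilized inverse probability weights for a multi-valued treatment from a generalized propensity score. It illustrates the general technique only, not the paper's own code: the multinomial-logistic working model, the function name, and the assumption that `X` and `treatment` are NumPy arrays are choices made here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stabilized_ipw_weights(X, treatment):
    """Stabilized inverse probability weights for a multi-valued treatment.

    Unstabilized weight: 1 / Pr(A = a_i | X_i), the generalized propensity score
    evaluated at the treatment actually received. Stabilized weight:
    Pr(A = a_i) / Pr(A = a_i | X_i), which is less variable when some
    estimated propensities are small.
    """
    # Generalized propensity score via a multinomial logistic working model (an assumption here)
    gps_model = LogisticRegression(max_iter=1000).fit(X, treatment)
    gps = gps_model.predict_proba(X)                      # n x K matrix of Pr(A = a | X_i)
    col = np.searchsorted(gps_model.classes_, treatment)  # column of each unit's own treatment
    p_received = gps[np.arange(len(treatment)), col]      # Pr(A = a_i | X_i)

    # Marginal treatment probabilities Pr(A = a), the numerator of the stabilized weight
    p_marginal = np.array([(treatment == a).mean() for a in gps_model.classes_])[col]
    return p_marginal / p_received
```

In a survival setting, weights like these could then be supplied to a weighted Cox fit (for example, lifelines' `CoxPHFitter` accepts a `weights_col` argument) so that the resulting hazard ratio is interpreted marginally rather than conditionally.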

Overview of estimating the average treatment effect using dimension reduction methods (차원축소 방법을 이용한 평균처리효과 추정에 대한 개요)

  • Mijeong Kim
    • The Korean Journal of Applied Statistics / v.36 no.4 / pp.323-335 / 2023
  • In causal analysis of high-dimensional data, it is important to reduce the dimension of the covariates and transform them appropriately in order to control confounders that affect both the treatment and the potential outcomes. The augmented inverse probability weighting (AIPW) method is mainly used to estimate the average treatment effect (ATE). The AIPW estimator is obtained from an estimated propensity score and an estimated outcome model. The ATE estimator can be inconsistent or have large asymptotic variance when the propensity score and outcome model are estimated by parametric methods that include all covariates, especially for high-dimensional data. For this reason, ATE estimation using an appropriate dimension reduction method and a semiparametric model for high-dimensional data is attracting attention. Semiparametric methods or sparse sufficient dimension reduction methods can be used to reduce the dimension when estimating the propensity score and the outcome model. Recently, another method has been proposed that uses neither the propensity score nor outcome regression: after reducing the dimension of the covariates, the ATE can be estimated using matching. Among the studies on ATE estimation methods for high-dimensional data, four recently proposed studies are introduced, and the interpretation of the estimated ATE is discussed.
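
As a point of reference for the AIPW estimator this overview builds on, the sketch below shows the standard doubly robust ATE estimator for a binary treatment with simple parametric working models; it is not one of the dimension-reduction estimators the paper surveys. The function name and the assumption that `X`, `A` (coded 0/1), and `Y` are NumPy arrays are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def aipw_ate(X, A, Y):
    """Augmented IPW (doubly robust) estimate of the ATE for a binary treatment.

    psi_i = m1(X_i) - m0(X_i)
            + A_i * (Y_i - m1(X_i)) / e(X_i)
            - (1 - A_i) * (Y_i - m0(X_i)) / (1 - e(X_i)),
    and the ATE estimate is the mean of psi_i.
    """
    # Estimated propensity score e(X) = Pr(A = 1 | X)
    e = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]

    # Estimated outcome regressions m1(X) = E[Y | A = 1, X] and m0(X) = E[Y | A = 0, X]
    m1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
    m0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

    psi = m1 - m0 + A * (Y - m1) / e - (1 - A) * (Y - m0) / (1 - e)
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(psi))  # point estimate and influence-function SE
```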

Variance Estimation for General Weight-Adjusted Estimator (가중치 보정 추정량에 대한 일반적인 분산 추정법 연구)

  • Kim, Jae-Kwang
    • The Korean Journal of Applied Statistics / v.20 no.2 / pp.281-290 / 2007
  • A linear estimator, a weighted sum of the sample observations, is commonly adopted to estimate finite population parameters such as population totals in survey sampling. The weight for a sampled unit is often constructed by multiplying the base weight, which is the inverse of the first-order inclusion probability, by an adjustment term that takes into account auxiliary information available for the whole population. The linear estimator using the adjusted weight is often more efficient than the one using only the base weight, but its variance estimation is more complicated. We discuss variance estimation for a general class of weight-adjusted estimators. By recognizing that the weight-adjusted estimator can be viewed as a function of estimated nuisance parameters, where the nuisance parameters are used to incorporate the auxiliary information, we derive a linearization of the weight-adjusted estimator using a Taylor expansion. The method proposed here is quite general and can be applied to a wide class of weight-adjusted estimators. Some examples and results from a simulation study are presented.
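
The abstract's argument can be illustrated on the simplest weight-adjusted estimator, the ratio estimator under simple random sampling, where the estimated ratio plays the role of the nuisance parameter and Taylor linearization gives a residual-based variance estimate. The sketch below is a textbook special case under these assumptions, not the paper's general method; the function name and arguments are illustrative.

```python
import numpy as np

def ratio_estimator_linearized(y, x, t_x, N):
    """Ratio (weight-adjusted) estimator of a population total under simple random
    sampling, with a Taylor-linearization variance estimate.

    Base weight: d = N / n, the inverse of the first-order inclusion probability.
    Adjusted estimator: t_hat = t_x * (sum d*y_i) / (sum d*x_i), which uses the
    known auxiliary total t_x. Linearization replaces y_i by the residual
    e_i = y_i - R_hat * x_i when estimating the variance.
    """
    n = len(y)
    d = N / n                              # base weight under SRS
    R_hat = (d * y).sum() / (d * x).sum()  # estimated ratio (the nuisance parameter)
    t_hat = t_x * R_hat                    # weight-adjusted estimator of the total of y

    e = y - R_hat * x                      # linearized residuals
    var_hat = N**2 * (1 - n / N) * e.var(ddof=1) / n
    return t_hat, var_hat
```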

Estimation of Drought Rainfall by Regional Frequency Analysis Using L and LH-Moments (II) - On the method of LH-moments - (L 및 LH-모멘트법과 지역빈도분석에 의한 가뭄우량의 추정 (II)- LH-모멘트법을 중심으로 -)

  • Lee, Soon-Hyuk; Yoon, Seong-Soo; Maeng, Sung-Jin; Ryoo, Kyong-Sik; Joo, Ho-Kil; Park, Jin-Seon
    • Journal of The Korean Society of Agricultural Engineers / v.46 no.5 / pp.27-39 / 2004
  • In the first part of this study, five regions of Korea, excluding Jeju and Ulreung islands, that are homogeneous in topographical and geographical terms were identified by the K-means clustering method. A total of 57 rain gauges were used for the regional frequency analysis with minimum rainfall series for the consecutive durations. The Generalized Extreme Value distribution was confirmed as the optimal one among the applied distributions. Drought rainfalls for the return periods were estimated by at-site and regional frequency analysis using the L-moments method. It was confirmed that the design drought rainfalls estimated by the regional frequency analysis were more appropriate than those by the at-site frequency analysis. In the second part of this study, the LH-moment ratio diagram and the Kolmogorov-Smirnov test on the Gumbel (GUM), Generalized Extreme Value (GEV), Generalized Logistic (GLO) and Generalized Pareto (GPA) distributions were used to determine the optimal probability distribution. Design drought rainfalls were estimated by both at-site and regional frequency analysis using LH-moments and the GEV distribution, which was confirmed as the optimal one among the applied distributions. Design rainfalls were estimated by at-site and regional frequency analysis using LH-moments, with the observed data and data simulated by Monte Carlo techniques. Design drought rainfalls derived by regional frequency analysis using the L1, L2, L3 and L4-moments (LH-moments) method showed higher reliability than those of at-site frequency analysis in terms of RRMSE (Relative Root-Mean-Square Error), RBIAS (Relative Bias) and RR (Relative Reduction) for the estimated design drought rainfalls. Relative efficiencies were calculated to judge the relative merits of the design drought rainfalls derived by regional frequency analysis using L-moments and L1, L2, L3 and L4-moments, applied in the first and second reports of this study, respectively. Consequently, design drought rainfalls derived by regional frequency analysis using L-moments were shown to be more reliable than those using LH-moments. Finally, design drought rainfalls for the classified five homogeneous regions for the various consecutive durations were derived by regional frequency analysis using L-moments, which was confirmed as the more reliable method through this study. Maps of the design drought rainfalls for the classified five homogeneous regions for the various consecutive durations were produced by the inverse distance weighting method in ArcView, one of the GIS tools.
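
The final mapping step in the abstract uses inverse distance weighting; a minimal sketch of that interpolation idea follows. It is not the authors' ArcView workflow: the function name, the default power of 2, and the assumption that coordinates and values are NumPy arrays are illustrative choices.

```python
import numpy as np

def idw_interpolate(station_xy, station_values, grid_xy, power=2.0):
    """Inverse distance weighted (IDW) interpolation of station values onto grid points.

    Each grid estimate is a weighted average of the station values with
    weights proportional to 1 / distance**power; a grid point that coincides
    with a station simply takes that station's value.
    """
    # Pairwise distances between grid points (rows) and stations (columns)
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
    coincident = d < 1e-12
    w = 1.0 / np.where(coincident, np.inf, d) ** power   # zero weight at coincident points
    est = (w * station_values).sum(axis=1) / w.sum(axis=1)

    # Overwrite grid points that sit exactly on a station with the observed value
    rows, cols = np.nonzero(coincident)
    est[rows] = station_values[cols]
    return est
```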

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin; Rho, Sang-Kyu; Yun, Ji-Young Agnes; Park, Jin-Soo
    • Asia pacific journal of information systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents, including Web pages, email messages, news reports, magazine articles, and business papers, do not come with author-assigned keywords and so cannot benefit from them. Although the potential benefit is large, the implementation itself is the obstacle; manually assigning keywords to all documents is a daunting, even impractical, task in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are two main approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select, from a controlled vocabulary, the words that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques; keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems such as Extractor and Kea were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document; as a result, keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords that are not included in it. According to the experimental results of Turney, about 64% to 90% of keywords assigned by authors can be found in the full text of an article. Conversely, this means that 10% to 36% of the author-assigned keywords do not appear in the article and thus cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of keywords assigned by authors are not included in the full text. This is why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. First, the IVSM system is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. Also, IVSM shows performance comparable to that of Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents grows, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
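
The five-step assignment procedure described in the abstract reduces to ranking candidate keyword sets by cosine similarity against the target document's term-frequency vector. The sketch below assumes keyword sets are stored as sparse term-weight dictionaries (step 1) and uses a crude regular-expression tokenizer; these details, like the function names, are illustrative rather than the authors' implementation.

```python
import math
import re
from collections import Counter

def cosine_similarity(vec_a, vec_b):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * vec_b.get(t, 0.0) for t, w in vec_a.items())
    norm_a = math.sqrt(sum(w * w for w in vec_a.values()))
    norm_b = math.sqrt(sum(w * w for w in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def assign_keywords(document, keyword_sets, top_k=5):
    """Rank candidate keyword sets against a document by cosine similarity.

    keyword_sets: dict mapping a keyword to its term-weight vector (step 1);
    the document is tokenized and turned into a term-frequency vector
    (steps 2-3); similarities are computed (step 4) and the highest-scoring
    keywords are returned (step 5).
    """
    tokens = re.findall(r"[a-z]+", document.lower())         # crude preprocessing/parsing
    doc_vec = dict(Counter(tokens))                           # raw term frequencies
    scored = {kw: cosine_similarity(vec, doc_vec) for kw, vec in keyword_sets.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_k]
```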