• Title/Abstract/Keyword: inverse probability weight

Search results: 5

Estimating causal effect of multi-valued treatment from observational survival data

  • Kim, Bongseong; Kim, Ji-Hyun
    • Communications for Statistical Applications and Methods / Vol. 27, No. 6 / pp. 675-688 / 2020
  • In survival analysis of observational data, the inverse probability weighting method and the Cox proportional hazards model are widely used to estimate the causal effect of a multi-valued treatment. In this paper, we examine two kinds of weights for the inverse probability weighting method and explain why the stabilized weight is more appropriate when the weights are built from the generalized propensity score. We also emphasize that the marginal hazard ratio and the conditional hazard ratio should be distinguished when the hazard ratio is taken as the treatment effect under the Cox proportional hazards model. A simulation study based on real data is conducted to provide concrete numerical evidence.
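
As a rough illustration of the stabilized weights discussed in the abstract above, the sketch below fits a multinomial logistic model as the generalized propensity score and compares unstabilized and stabilized inverse probability weights. The simulated data, the use of scikit-learn, and the model choice are assumptions made for the example, not the authors' implementation.

```python
# Sketch of stabilized inverse probability weights for a multi-valued
# treatment, using a multinomial logistic regression as the generalized
# propensity score model (illustrative only; not the authors' code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))                      # covariates (toy data)
treat = rng.integers(0, 3, size=n)               # 3-level treatment (toy data)

# Generalized propensity score: P(T = t_i | X_i)
ps_model = LogisticRegression(max_iter=1000).fit(X, treat)
gps = ps_model.predict_proba(X)[np.arange(n), treat]

# Unstabilized weight: 1 / P(T = t_i | X_i)
w_unstab = 1.0 / gps

# Stabilized weight: P(T = t_i) / P(T = t_i | X_i)
marg = np.bincount(treat, minlength=3) / n
w_stab = marg[treat] / gps

print("unstabilized weights: mean %.2f, max %.2f" % (w_unstab.mean(), w_unstab.max()))
print("stabilized weights:   mean %.2f, max %.2f" % (w_stab.mean(), w_stab.max()))
```

The stabilized weights average close to one, which is why they tend to behave better than the unstabilized weights when some generalized propensity scores are small.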

차원축소 방법을 이용한 평균처리효과 추정에 대한 개요 (Overview of estimating the average treatment effect using dimension reduction methods)

  • 김미정
    • 응용통계연구 / Vol. 36, No. 4 / pp. 323-335 / 2023
  • In causal inference with high-dimensional data, reducing the dimension of the covariates and transforming them appropriately to control confounding that can affect both the treatment and the potential outcomes is an important problem. For estimating the average treatment effect (ATE), the augmented inverse probability weighting method, which uses estimates of both the propensity score and the outcome model, is commonly used. When the propensity score and the outcome model are estimated with parametric models that include all covariates in a high-dimensional setting, the ATE estimator may fail to be consistent or may have a large variance. For this reason, ATE methods that combine suitable dimension reduction techniques with semiparametric models have attracted attention. Related work includes studies that employ semiparametric models and sparse sufficient dimension reduction for the dimension reduction step. More recently, an ATE estimation method that applies matching after dimension reduction, without estimating the propensity score or the outcome model, has also been proposed. We introduce four recently proposed approaches to ATE estimation for high-dimensional data and discuss points to keep in mind when interpreting the resulting estimates.
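
The augmented inverse probability weighting (doubly robust) estimator mentioned in the abstract can be sketched as follows for a binary treatment; the simulated data and the scikit-learn nuisance models are illustrative assumptions, not taken from any of the four reviewed papers.

```python
# Sketch of the augmented inverse probability weighting (AIPW) estimator of
# the ATE for a binary treatment (illustrative; data and models are assumed).
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 5))
p = 1 / (1 + np.exp(-X[:, 0]))               # true treatment probability (toy)
T = rng.binomial(1, p)
Y = X[:, 0] + 2.0 * T + rng.normal(size=n)   # true ATE = 2

# Nuisance models: propensity score e(X) and outcome regressions m1(X), m0(X)
e_hat = LogisticRegression(max_iter=1000).fit(X, T).predict_proba(X)[:, 1]
m1 = LinearRegression().fit(X[T == 1], Y[T == 1]).predict(X)
m0 = LinearRegression().fit(X[T == 0], Y[T == 0]).predict(X)

# AIPW (doubly robust) estimate of E[Y(1)] - E[Y(0)]
mu1 = np.mean(m1 + T * (Y - m1) / e_hat)
mu0 = np.mean(m0 + (1 - T) * (Y - m0) / (1 - e_hat))
print("AIPW estimate of the ATE:", mu1 - mu0)
```

The estimate remains consistent if either the propensity score model or the outcome model is correctly specified, which is the doubly robust property the reviewed methods build on.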

가중치 보정 추정량에 대한 일반적인 분산 추정법 연구 (Variance Estimation for General Weight-Adjusted Estimator)

  • 김재광
    • 응용통계연구 / Vol. 20, No. 2 / pp. 281-290 / 2007
  • To estimate a total in a finite population, a linear estimator built from the sampled observations is used. The weights in such an estimator are often obtained by adjusting the basic design weights, the inverses of the sampling probabilities, with auxiliary information available for the whole population. An estimator that uses these adjusted weights can be more efficient than one that does not, but its variance estimation becomes more difficult. This study considers variance estimation for estimators that use adjusted weights. We identify a general form of the weight adjustment; since the adjustment term can then be written as a function of a finite number of nuisance parameters, we derive a variance estimator based on a Taylor expansion with respect to those nuisance parameters. The resulting variance formula has the advantage of applying not only to existing weight-adjusted estimators but also to more general cases. Several applications and simulation results are presented.
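
The following sketch illustrates the kind of weight-adjusted estimator the abstract describes: basic design weights (inverses of the sampling probabilities) are calibrated to a known auxiliary total through a simple ratio adjustment. It shows the estimator only, not the paper's Taylor-expansion variance formula, and the population and sampling design are invented for the example.

```python
# Sketch of a weight-adjusted total estimator: design weights (inverse of the
# sampling probabilities) are calibrated to a known population total of an
# auxiliary variable via a ratio adjustment (illustrative example only).
import numpy as np

rng = np.random.default_rng(2)
N = 10_000                                    # population size
x_pop = rng.gamma(2.0, 5.0, size=N)           # auxiliary variable, known for the population
y_pop = 3.0 * x_pop + rng.normal(0, 5, N)     # study variable, observed only in the sample

pi = np.full(N, 500 / N)                      # inclusion probabilities (expected n = 500)
sampled = rng.random(N) < pi                  # Poisson sampling
x, y, w = x_pop[sampled], y_pop[sampled], 1.0 / pi[sampled]

# Horvitz-Thompson estimator with basic design weights
ht_total = np.sum(w * y)

# Ratio-calibrated weights: force the weighted auxiliary total to match the
# known population total sum(x_pop); g is the weight-adjustment term
g = np.sum(x_pop) / np.sum(w * x)
cal_total = np.sum(g * w * y)

print("HT estimate:         %.0f" % ht_total)
print("Calibrated estimate: %.0f" % cal_total)
print("True total:          %.0f" % y_pop.sum())
```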

L 및 LH-모멘트법과 지역빈도분석에 의한 가뭄우량의 추정 (II)- LH-모멘트법을 중심으로 - (Estimation of Drought Rainfall by Regional Frequency Analysis Using L and LH-Moments (II) - On the method of LH-moments -)

  • 이순혁; 윤성수; 맹승진; 류경식; 주호길; 박진선
    • 한국농공학회논문집 / Vol. 46, No. 5 / pp. 27-39 / 2004
  • In the first part of this study, five regions of Korea, excluding Jeju and Ulreung islands, that are homogeneous in topographical and geographical terms were delineated by the K-means clustering method. A total of 57 rain gauges were used for the regional frequency analysis of minimum rainfall series for consecutive durations. The Generalized Extreme Value distribution was confirmed as the optimal distribution among those applied. Drought rainfalls for various return periods were estimated by both at-site and regional frequency analysis using the L-moments method, and the design drought rainfalls estimated by regional frequency analysis were confirmed to be more appropriate than those from at-site frequency analysis.
    In this second part of the study, the LH-moment ratio diagram and the Kolmogorov-Smirnov test were applied to the Gumbel (GUM), Generalized Extreme Value (GEV), Generalized Logistic (GLO), and Generalized Pareto (GPA) distributions to select the optimal probability distribution. Design drought rainfalls were then estimated by both at-site and regional frequency analysis using LH-moments and the GEV distribution, which was again confirmed as the optimal distribution, with both the observed data and data simulated by Monte Carlo techniques. The design drought rainfalls derived by regional frequency analysis using the L1, L2, L3, and L4-moments (LH-moments) method showed higher reliability than those from at-site frequency analysis in terms of RRMSE (relative root-mean-square error), RBIAS (relative bias), and RR (relative reduction). Relative efficiencies were calculated to compare the design drought rainfalls derived by regional frequency analysis using L-moments (first part of this study) and LH-moments (this part); consequently, the design drought rainfalls derived by regional frequency analysis using L-moments proved more reliable than those using LH-moments. Finally, design drought rainfalls for the five homogeneous regions and the various consecutive durations were derived by regional frequency analysis using L-moments, the method confirmed as more reliable in this study, and maps of these design drought rainfalls were produced by inverse distance weighting with Arc-View, a GIS tool.
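
As a small aside on the L-moment machinery used throughout this abstract, the sketch below computes unbiased sample L-moments and L-moment ratios from probability-weighted moments for a toy rainfall series; the data are simulated and the code is not the authors'.

```python
# Sketch of unbiased sample L-moments (l1, l2) and L-moment ratios (t3, t4)
# from probability-weighted moments, as used in L-moment-based frequency
# analysis (illustrative only; not the authors' code or data).
import numpy as np

def sample_l_moments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    # Unbiased probability-weighted moments b0..b3
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2   # mean, L-scale, L-skewness, L-kurtosis

rng = np.random.default_rng(3)
rain = rng.gumbel(loc=40.0, scale=15.0, size=50)   # toy rainfall series
l1, l2, t3, t4 = sample_l_moments(rain)
print(f"l1={l1:.2f}  l2={l2:.2f}  t3={t3:.3f}  t4={t4:.3f}")
```

In regional frequency analysis, ratios such as t3 and t4 computed for each site are compared with the theoretical curves of candidate distributions (the L-moment ratio diagram) to choose the regional distribution.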

키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법 (A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model)

  • 조원진; 노상규; 윤지영; 박진수
    • Asia Pacific Journal of Information Systems / Vol. 21, No. 1 / pp. 103-122 / 2011
  • Recently, numerous documents have become available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents, and it is virtually impossible for users to examine complete documents to determine whether they are useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors to help users filter results. A set of keywords is thus often considered a condensed version of the whole document and plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask the authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents, including Web pages, email messages, news reports, magazine articles, and business papers, do not come with keywords, even though they could benefit from them. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task, since it is extremely tedious and time-consuming and requires a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process.
    There are two main approaches to this aim: keyword assignment and keyword extraction. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the keyword assignment approach, a vocabulary is given in advance and the aim is to match it to the texts; that is, the approach selects the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the keyword extraction approach, the aim is to extract keywords according to their relevance in the text, without a prior vocabulary. Automatic keyword generation is then treated as a classification task, and keywords are commonly extracted with supervised learning techniques that classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach; the most indicative words in a document are selected as its keywords, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article; conversely, 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment likewise shows that 37% of author-assigned keywords are not included in the full text. This is why we adopt the keyword assignment approach.
    In this paper, we propose a new approach to automatic keyword assignment, IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that has no keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with the highest similarity scores.
    Two keyword generation systems were implemented with IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is embedded in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between IVSM-generated keywords and author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents grows, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
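
The matching step (4)-(5) described above can be sketched roughly as follows: each keyword set and the target document are represented as term-frequency vectors and the keyword sets are ranked by cosine similarity. The keyword sets, weights, and document below are hypothetical toy data, not the authors' IVSM implementation.

```python
# Sketch of an IVSM-style matching step: rank candidate keyword sets by the
# cosine similarity between their term vectors and the target document's
# term-frequency vector (toy data; not the authors' implementation).
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# (1) keyword sets with per-term weights (hypothetical)
keyword_sets = {
    "information retrieval": Counter({"retrieval": 3, "query": 2, "document": 2, "index": 1}),
    "machine learning":      Counter({"learning": 3, "training": 2, "classification": 2, "model": 1}),
    "logistics":             Counter({"shipping": 3, "port": 2, "distribution": 2, "cargo": 1}),
}

# (2)-(3) preprocess the target document and build its term-frequency vector
document = "search engines return documents for a query and rank each document by retrieval score"
doc_vec = Counter(document.lower().split())

# (4)-(5) rank keyword sets by cosine similarity and keep the best matches
ranked = sorted(keyword_sets.items(), key=lambda kv: cosine(doc_vec, kv[1]), reverse=True)
for name, vec in ranked[:2]:
    print(name, round(cosine(doc_vec, vec), 3))
```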