• Title/Summary/Keyword: Statistics Matching

Evaluation of the Input Status of Exposure-related Information of Working Environment Monitoring Database and Special Health Examination Database for the Construction of a National Exposure Surveillance System (국가노출감시체계 구축을 위한 작업환경측정과 특수건강진단 자료의 노출 정보 입력 실태 평가)

  • Choi, Sangjun;Koh, Dong-Hee;Park, Ju-Hyun;Park, Donguk;Kim, Hwan-Cheol;Lim, Dae Sung;Sung, Yeji;Ko, Kyoung Yoon;Lim, Ji Seon;Seo, Hoekyeong
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • v.32 no.3
    • /
    • pp.231-241
    • /
    • 2022
  • Objectives: The purpose of this study is to evaluate the input status of exposure-related information in the working environment monitoring database (WEMD) and the special health examination database (SHED) for the construction of a national exposure surveillance system. Methods: The industrial and process code input status of the WEMD and SHED for 21 carcinogens from 2014 to 2016 was compared. Data from workers who underwent both work environment monitoring and special health examinations in 2019 and 2020 were extracted, and the actual status of input of industrial and process codes was analyzed. We also investigated the causes of input errors through a focus group interview with 12 data input specialists. Results: Analysis of the WEMD and SHED for 21 carcinogens showed that the five-digit industrial code matching rate was low at 53.5%, and the process code matching rate was 19% or less. Among the data for which work environment monitoring and special health examination were conducted simultaneously in 2019 and 2020, the process code matching rates were very low at 18.1% and 5.2%, respectively. The main causes of exposure-related data input errors were the divergence between the WEMD and SHED process code input systems from 2020, the excessively large number of standard process and job codes, and the inefficiency of the standard code search system. Conclusions: In order to use the WEMD and SHED as a national surveillance system, it is necessary to reduce the number of standard input codes and improve the efficiency of the code search system.
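
A minimal sketch of how such a code matching rate could be computed, assuming two hypothetical tables keyed by worker and carrying illustrative industry and process code columns (not the actual WEMD/SHED schema):

```python
import pandas as pd

# Hypothetical records from the two databases; column names are illustrative only.
wemd = pd.DataFrame({
    "worker_id": [1, 2, 3, 4],
    "industry_code": ["26110", "26110", "20129", "24212"],
    "process_code": ["P01", "P07", "P03", "P05"],
})
shed = pd.DataFrame({
    "worker_id": [1, 2, 3, 4],
    "industry_code": ["26110", "26190", "20129", "24212"],
    "process_code": ["P02", "P07", "P03", "P09"],
})

# Join the two sources on the worker key and compare codes field by field.
merged = wemd.merge(shed, on="worker_id", suffixes=("_wemd", "_shed"))
industry_match = (merged["industry_code_wemd"] == merged["industry_code_shed"]).mean()
process_match = (merged["process_code_wemd"] == merged["process_code_shed"]).mean()

print(f"5-digit industry code matching rate: {industry_match:.1%}")
print(f"process code matching rate: {process_match:.1%}")
```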

Bayes Risk Comparison for Non-Life Insurance Risk Estimation (손해보험 위험도 추정에 대한 베이즈 위험 비교 연구)

  • Kim, Myung Joon;Woo, Ho Young;Kim, Yeong-Hwa
    • The Korean Journal of Applied Statistics
    • /
    • v.27 no.6
    • /
    • pp.1017-1028
    • /
    • 2014
  • Well-known Bayes and empirical Bayes estimators have the disadvantage of overshrinking, understating the spread of the parameter estimates; therefore, a constrained Bayes estimator that matches the first two moments has been suggested. Also, traditional loss functions such as the mean squared error loss consider only the precision of estimation; to reflect both precision and goodness of fit, the balanced loss function has been suggested. For these reasons, constrained Bayes estimators under the balanced loss function are recommended for non-life insurance pricing; however, most studies focus on estimation performance, since the Bayes risk of newly suggested estimators such as constrained Bayes and constrained empirical Bayes estimators under a specific loss function is difficult to derive. This study compares the Bayes risk of several Bayes estimators under two different loss functions for estimating risk in the auto insurance business, and demonstrates the effectiveness of the newly suggested Bayes estimators from a Bayes risk perspective through an analysis of real auto insurance data.
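
As a rough illustration of the moment-matching idea behind the constrained Bayes estimator mentioned above, here is a minimal Python sketch; the scaling formula is a commonly used form of the adjustment, not necessarily the exact estimator studied in the paper, and the numbers are toy values:

```python
import numpy as np

def constrained_bayes(post_means, post_vars):
    """Rescale posterior means so the ensemble of estimates matches the
    posterior expectation of the first two moments of the parameters.
    A common form of the moment-matching adjustment; treat as a sketch."""
    post_means = np.asarray(post_means, dtype=float)
    post_vars = np.asarray(post_vars, dtype=float)
    grand_mean = post_means.mean()
    spread = np.mean((post_means - grand_mean) ** 2)
    # Expansion factor >= 1: posterior means alone are over-shrunk,
    # so their spread is inflated by the average posterior variance.
    a = np.sqrt(1.0 + post_vars.mean() / spread)
    return grand_mean + a * (post_means - grand_mean)

# Toy example: over-shrunk posterior means for five risk classes.
cb = constrained_bayes([0.8, 0.9, 1.0, 1.1, 1.2], [0.05] * 5)
print(cb)
```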

Adaptive Error Detection Using Causal Block Boundary Matching in Block-Coded Video (블록기반 부호화 비디오에서 인과적 블록 경계정합을 이용한 적응적 오류 검출)

  • 주용수;김태식;김남철
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.8C
    • /
    • pp.1125-1132
    • /
    • 2004
  • In this paper, we propose an effective boundary-matching-based error detection algorithm that uses causal neighbor blocks to improve the quality of video degraded by channel errors in block-coded video. The proposed algorithm first calculates the boundary mismatch powers between a current block and each of its causal neighbor blocks. It then decides that the current block is normal if all the mismatch powers are less than an adaptive threshold, which is determined adaptively from the statistics of the two adjacent blocks. In experiments under 16-bit burst errors at bit error rates (BERs) of 10^-3 to 10^-4, the proposed algorithm yields improvements of up to 20% in error detection rate and up to 3.5 dB in PSNR of concealed frames, compared with Zeng's error detection algorithm.
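
A minimal sketch of the causal boundary-matching idea described above, assuming toy 8x8 blocks and a simplified adaptive threshold; the exact mismatch power and threshold rule of the paper are not reproduced here:

```python
import numpy as np

def boundary_mismatch_power(block, neighbor, side):
    """Mean squared difference between a block's border pixels and the
    adjacent border pixels of a causal neighbor ('top' or 'left')."""
    if side == "top":
        return float(np.mean((block[0, :] - neighbor[-1, :]) ** 2))
    if side == "left":
        return float(np.mean((block[:, 0] - neighbor[:, -1]) ** 2))
    raise ValueError(side)

def is_block_erroneous(block, top, left, base=64.0, k=2.0):
    """Flag the current block as erroneous if any causal-boundary mismatch
    power exceeds a threshold adapted to the statistics of the two
    neighboring blocks (a simplified stand-in for the paper's rule)."""
    powers = [boundary_mismatch_power(block, top, "top"),
              boundary_mismatch_power(block, left, "left")]
    local_activity = np.var(np.concatenate([top.ravel(), left.ravel()]))
    threshold = base + k * local_activity   # adaptive threshold
    return any(p > threshold for p in powers)

# Toy 8x8 blocks: a clean block among smooth neighbors is accepted,
# a heavily corrupted one is flagged.
rng = np.random.default_rng(0)
top, left = rng.normal(128, 2, (8, 8)), rng.normal(128, 2, (8, 8))
clean, corrupted = rng.normal(128, 2, (8, 8)), rng.normal(60, 40, (8, 8))
print(is_block_erroneous(clean, top, left), is_block_erroneous(corrupted, top, left))
```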

Motion Estimation in Video Coding using Search Candidate Point on Region by Binary-Tree Structure (이진트리 구조에 따른 구간별 탐색 후보점을 이용한 비디오 코딩의 움직임 추정)

  • Kwak, Sung-Keun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.1
    • /
    • pp.402-410
    • /
    • 2013
  • In this paper, we propose a new fast block matching algorithm that exploits the temporal and spatial correlation of the video sequence and the local statistics of neighboring motion vectors. In a video sequence there is a strong temporal correlation between the motion vector of the current block and the motion vector of the co-located block in the previous frame. The proposed algorithm determines a better starting point for the search for the exact motion vector: it takes the point with the smallest SAD (sum of absolute differences) value among the motion vectors predicted from the neighboring blocks around the co-located block of the previous frame and the current frame, together with the predictor candidate point of each division region of the binary-tree structure. Experimental results show that the proposed algorithm dramatically reduces the number of search points and the computing cost of motion estimation compared with full search (FS) motion estimation and other fast motion estimation algorithms.
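
A minimal sketch of the candidate-based starting-point selection described above, assuming toy frames and illustrative predictor motion vectors; the binary-tree division of the search region and the subsequent refinement search are omitted:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def best_starting_point(cur_frame, ref_frame, x, y, bs, candidates):
    """Pick the candidate motion vector with the smallest SAD as the
    starting point of a subsequent local refinement search."""
    cur_block = cur_frame[y:y + bs, x:x + bs]
    best_mv, best_cost = (0, 0), None
    for dx, dy in candidates:
        rx, ry = x + dx, y + dy
        if 0 <= rx <= ref_frame.shape[1] - bs and 0 <= ry <= ref_frame.shape[0] - bs:
            cost = sad(cur_block, ref_frame[ry:ry + bs, rx:rx + bs])
            if best_cost is None or cost < best_cost:
                best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost

# Toy 32x32 frames: the reference is the current frame shifted by (2, 1).
rng = np.random.default_rng(1)
cur = rng.integers(0, 256, (32, 32))
ref = np.roll(cur, shift=(1, 2), axis=(0, 1))
# Candidate predictors: zero MV, the co-located block's MV from the previous
# frame, and MVs of spatial neighbors (values here are illustrative).
candidates = [(0, 0), (2, 1), (-1, 0), (0, -1)]
print(best_starting_point(cur, ref, 8, 8, 8, candidates))
```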

Study on the EDA based Statistics Attributes Discovery and Utilization for the Maritime Safety Statistics Items Diversification (해상안전 통계 항목 다양화를 위한 EDA 기반 통계 속성 도출 및 활용에 관한 연구)

  • Kang, Seong Kyung;Lee, Young Jai
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.26 no.7
    • /
    • pp.798-809
    • /
    • 2020
  • Evidence-based policymaking and assessment for scientific administration have increased the importance of statistics (data) utilization. Statistics can explain specific phenomena by providing numerical values and are a public resource for national decision making. Because of these inherent characteristics, statistics are used as baseline and base data for government policy decisions and for the analysis of various phenomena. Compared with their importance, however, the role of statistics is limited, and statistics are often used as simple summaries, produced mainly from the suppliers' perspective rather than from the consumers' perspective where they could create value. This study explores statistical data and additional attributes that can be utilized for policy or research to address the problems mentioned above. The baseline statistical data used in this study come from the Maritime Distress Accident Statistical Yearbook published by the South Korean Coast Guard, and the additional attributes come from text analyses of vessel casualty situation reports of the South Korean Maritime Police. Collecting 56 attributes drawn from the text analysis and executing an EDA resulted in 88 attribute unions: 18 attribute unions had a satisfactory significance probability (p-value < .05) and a strong correlation coefficient above 0.7, and 70 attribute unions had a moderate correlation (over 0.4 and under 0.7). Additionally, to utilize the extra attributes discovered by the EDA for policy purposes, a keyword analysis of each detailed strategy of the disaster preparation basic plan was executed, the availability of the attributes for use was determined through a keyword matching process, and the attributes derived from the EDA were examined.
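
A minimal sketch of the kind of EDA screening described above, assuming hypothetical attribute names and toy values rather than the study's actual report attributes:

```python
import pandas as pd
from itertools import combinations
from scipy import stats

# Hypothetical table of attributes extracted from casualty situation reports;
# column names and values are illustrative, not the study's actual attributes.
df = pd.DataFrame({
    "wind_speed": [4, 7, 12, 3, 9, 15, 6, 11],
    "wave_height": [0.5, 1.1, 2.3, 0.4, 1.6, 2.8, 0.9, 2.0],
    "rescue_time_min": [20, 35, 80, 18, 50, 95, 30, 70],
    "persons_on_board": [2, 5, 3, 8, 4, 6, 2, 7],
})

strong, moderate = [], []
for a, b in combinations(df.columns, 2):
    r, p = stats.pearsonr(df[a], df[b])      # correlation and its p-value
    if p < 0.05 and abs(r) > 0.7:
        strong.append((a, b, round(r, 2)))
    elif 0.4 < abs(r) <= 0.7:
        moderate.append((a, b, round(r, 2)))

print("strong:", strong)
print("moderate:", moderate)
```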

Propensity score methods for estimating treatment delay effects (생존자료분석에서 성향 점수를 이용한 treatment delay effect 추정법에 대한 연구)

  • Jooyi Jung;Hyunjin Song;Seungbong Han
    • The Korean Journal of Applied Statistics
    • /
    • v.36 no.5
    • /
    • pp.415-445
    • /
    • 2023
  • Time-dependent treatment variables and time-dependent confounders often exist in observational studies, and correctly adjusting for the time-dependent confounders is an important problem in propensity score analysis. Recently, for survival data, Hade et al. (2020) used a propensity score matching method to estimate the treatment delay effect when a time-dependent confounder affects the time to treatment, where the treatment delay effect is defined as the effect of the delay in receiving treatment. In this paper, we propose a Cox-model-based marginal structural model (Cox-MSM) framework to estimate the treatment delay effect, and we conduct extensive simulation studies to compare the proposed Cox-MSM with the propensity score matching method of Hade et al. (2020). Our simulation results show that the Cox-MSM leads to more accurate estimates of the treatment delay effect than the two sequential matching schemes based on propensity scores. An example from a study of treatment discontinuation, together with simulated data, illustrates the practical advantages of the proposed Cox-MSM.
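
A much-simplified, point-treatment sketch of an inverse-probability-weighted (marginal structural) Cox model, assuming the scikit-learn and lifelines packages and simulated data; the paper's Cox-MSM additionally handles time-dependent treatment and confounders, which this sketch does not:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

# Simulate a baseline confounder, a treatment that depends on it, and a
# survival time that depends on both (all values are illustrative).
rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)                          # baseline confounder
treat = rng.binomial(1, 1 / (1 + np.exp(-x)))   # treatment depends on x
time = rng.exponential(1 / np.exp(0.5 * x - 0.3 * treat))
event = rng.binomial(1, 0.8, size=n)
df = pd.DataFrame({"x": x, "treat": treat, "time": time, "event": event})

# Propensity scores and stabilized inverse-probability-of-treatment weights.
ps = LogisticRegression().fit(df[["x"]], df["treat"]).predict_proba(df[["x"]])[:, 1]
p_treat = df["treat"].mean()
df["w"] = np.where(df["treat"] == 1, p_treat / ps, (1 - p_treat) / (1 - ps))

# Weighted Cox model: the coefficient of `treat` is the marginal effect.
cph = CoxPHFitter()
cph.fit(df[["time", "event", "treat", "w"]], duration_col="time",
        event_col="event", weights_col="w", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```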

A Study on the Application of Constrained Bayes Estimation for Product Quality Control (Constrained 베이즈 추정방식의 제품 품질관리 활용방안에 관한 연구)

  • Kim, Tai-Kyoo;Kim, Myung Joon
    • Journal of Korean Society for Quality Management
    • /
    • v.43 no.1
    • /
    • pp.57-66
    • /
    • 2015
  • Purpose: The purpose of this study is to apply the constrained Bayes estimation methodology to the product quality control process and to demonstrate its effectiveness for product management by comparing it with the well-known Bayes estimator using performance results on data. Methods: The Bayes and constrained Bayes estimators were derived based on the theoretical background, and to confirm the effectiveness of the suggested application, a deviation index was defined and calculated for the comparison. Results: The statistical analysis shows that applying the suggested methodology, that is, the constrained Bayes estimator, improves the index by reducing the error through matching of the first two empirical moments. Conclusion: When advanced Bayesian approaches such as constrained Bayes estimation are considered for the product quality control process, the newly defined deviation index reduces the error in estimating the parameter histogram, which reflects both location and dispersion parameters, and various Bayesian approaches appear to be meaningful for managing the product quality control process.

Class Knowledge-oriented Automatic Land Use and Land Cover Change Detection

  • Jixian, Zhang;Yu, Zeng;Guijun, Yang
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.47-49
    • /
    • 2003
  • Automatic land use and land cover change (LUCC) detection from remotely sensed imagery has wide application in LUCC research and in natural resource and environment monitoring and protection. When the data at one time (T1) are existing land use and land cover maps and the data at another time (T2) are remotely sensed imagery, how to detect change automatically is still an unresolved issue. This paper develops a land use and land cover class-knowledge-guided method for automatic change detection in this situation. First, the T1 land use and land cover map and the T2 remote sensing images were registered and superimposed precisely. Second, a remote sensing knowledge database of all land use and land cover classes was constructed based on the unchanged parcels in the T1 map. Third, guided by the T1 land use and land cover map, feature statistics were extracted for each parcel or pixel in the remote sensing images. Finally, land use and land cover changes were found, and the change class was recognized through automatic matching between the knowledge database of remote sensing information for the land use and land cover classes and the statistics extracted for that parcel or pixel. Experimental results and several actual applications show the efficiency of this method.
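
A minimal sketch of the class-knowledge matching idea, assuming toy per-parcel feature statistics and a simple nearest-mean rule in place of the paper's knowledge database:

```python
import numpy as np

def build_class_knowledge(features, labels):
    """Per-class mean feature statistics from parcels assumed unchanged,
    keyed by the T1 land use / land cover class label."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def detect_change(parcel_features, t1_label, knowledge):
    """Assign the parcel to the nearest class in the knowledge database;
    report a change whenever that class differs from the T1 label."""
    dists = {c: np.linalg.norm(parcel_features - mu) for c, mu in knowledge.items()}
    t2_label = min(dists, key=dists.get)
    return (t2_label != t1_label), t2_label

# Toy example with two classes ("crop", "builtup") and 3 feature bands.
rng = np.random.default_rng(3)
train_feats = np.vstack([rng.normal([0.6, 0.4, 0.2], 0.02, (20, 3)),   # crop
                         rng.normal([0.3, 0.3, 0.5], 0.02, (20, 3))])  # builtup
train_labels = np.array(["crop"] * 20 + ["builtup"] * 20)
knowledge = build_class_knowledge(train_feats, train_labels)

# A parcel mapped as "crop" in T1 whose T2 statistics now look built-up.
print(detect_change(np.array([0.31, 0.29, 0.52]), "crop", knowledge))
```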

Estimating Average Causal Effect in Latent Class Analysis (잠재범주분석을 이용한 원인적 영향력 추론에 관한 연구)

  • Park, Gayoung;Chung, Hwan
    • The Korean Journal of Applied Statistics
    • /
    • v.27 no.7
    • /
    • pp.1077-1095
    • /
    • 2014
  • Unlike in randomized trials, statistical strategies for inferring unbiased causal relationships are required in observational studies. Recently, new methods for causal inference in observational studies have been proposed, such as matching on the propensity score or inverse probability of treatment weighting. They have focused on how to control for confounders and how to evaluate the effect of the treatment on the outcome variable. However, these conventional methods are valid only when the treatment variable is categorical and both the treatment and the outcome variables are directly observable. Research on causal inference can be challenging in part because it may not be possible to directly observe the treatment and/or the outcome variable. To address this difficulty, we propose a method for estimating the average causal effect when both the treatment and the outcome variables are latent. Latent class analysis is applied to calculate the propensity score for the latent treatment variable in order to estimate the causal effect on the latent outcome variable. In this work, we investigate the causal effect of adolescents' delinquency on their substance use using data from the 'National Longitudinal Study of Adolescent Health'.
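
A minimal sketch of an inverse-probability-weighted average causal effect estimate with an observed binary treatment and outcome, as a stand-in for the paper's latent-class setting; all variable names and data are illustrative:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Simulate a confounder, a binary treatment (e.g. a delinquency class), and a
# binary outcome (e.g. a substance-use class); in the paper both classes are
# latent and estimated by latent class analysis, which this sketch skips.
rng = np.random.default_rng(11)
n = 2000
z = rng.normal(size=n)                              # confounder
treat = rng.binomial(1, 1 / (1 + np.exp(-z)))
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * z + 0.8 * treat - 1))))
df = pd.DataFrame({"z": z, "treat": treat, "y": outcome})

# Propensity of treatment given the confounder, then IPW means under
# treatment and control; their difference estimates the ACE.
ps = LogisticRegression().fit(df[["z"]], df["treat"]).predict_proba(df[["z"]])[:, 1]
ey1 = np.sum(df["y"] * df["treat"] / ps) / np.sum(df["treat"] / ps)
ey0 = np.sum(df["y"] * (1 - df["treat"]) / (1 - ps)) / np.sum((1 - df["treat"]) / (1 - ps))
print(f"estimated ACE: {ey1 - ey0:.3f}")
```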

Latent causal inference using the propensity score from latent class regression model (잠재범주회귀모형의 성향점수를 이용한 잠재변수의 원인적 영향력 추론 연구)

  • Lee, Misol;Chung, Hwan
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.5
    • /
    • pp.615-632
    • /
    • 2017
  • Unlike in randomized trials, statistical strategies for inferring unbiased causal relationships are required in observational studies. Matching on the propensity score is one of the most popular methods for controlling confounders in order to evaluate the effect of the treatment on the outcome variable. Recently, new methods for causal inference in latent class analysis (LCA) have been proposed to estimate the average causal effect (ACE) of the treatment on a latent discrete variable. They have focused on applications to real datasets to estimate the ACE in LCA. In practice, however, the true value of the ACE is not known, and it is difficult to evaluate the performance of the estimated ACE. In this study, we propose a method to generate synthetic data using the propensity score in the framework of LCA, where the treatment and outcome variables are latent. We then propose a new method for estimating the ACE in LCA and evaluate its performance via simulation studies. Furthermore, we present an empirical analysis based on data from the 'National Longitudinal Study of Adolescent Health', with puberty as a latent treatment and substance use as a latent outcome variable.
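
A toy sketch of the general idea of generating synthetic data with a known ACE so that an estimator can be benchmarked against the truth; it uses observed classes in place of the paper's latent classes from latent class regression, so it only mimics the setting:

```python
import numpy as np

# Generate a covariate, a treatment class from a known propensity model, and
# potential outcomes under control and treatment, so the true ACE is known
# by construction (all coefficients are illustrative).
rng = np.random.default_rng(5)
n = 5000
x = rng.normal(size=n)                                   # covariate
true_ps = 1 / (1 + np.exp(-(0.8 * x)))                   # propensity model
treat = rng.binomial(1, true_ps)                         # treatment class

p_y0 = 1 / (1 + np.exp(-(0.3 * x - 0.5)))                # outcome prob. under control
p_y1 = 1 / (1 + np.exp(-(0.3 * x + 0.7)))                # outcome prob. under treatment
y0, y1 = rng.binomial(1, p_y0), rng.binomial(1, p_y1)
y = np.where(treat == 1, y1, y0)                         # observed outcome

print("true ACE:", (p_y1 - p_y0).mean())
print("naive difference in means:", y[treat == 1].mean() - y[treat == 0].mean())
```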