• Title/Abstract/Keywords: Statistical Procedures

A SCANNING ELECTRON MICROSCOPIC STUDY ON THE REMOVING EFFICIENCY OF SMEAR LAYER BY K-FILE AND ULTRASONIC INSTRUMENT

  • 이수종;임미경
    • Restorative Dentistry and Endodontics
    • /
    • Vol. 19, No. 1
    • /
    • pp.97-105
    • /
    • 1994
  • The purpose of this study was to evaluate the smear layer removing efficiency of two root canal preparation techniques. Twelve single-rooted teeth were used in two groups of six each. Group 1 was biomechanically prepared by hand using a K-file with a high volume of normal saline irrigation. Group 2 was prepared using an ultrasonically activated K-file with a constant high volume of normal saline irrigation. After the experimental procedures, each root was split sagittally. The removing efficiency of the preparation methods was assessed in terms of the surface condition of the canal walls at three levels: the coronal, middle, and apical thirds. On the basis of remaining debris, presence of smear layer, and patency of dentinal tubules, each canal was evaluated on a scale from 0 to 2. A statistical analysis was used to indicate any significant differences in surface condition between the two methods. There was no statistically significant difference between hand instrumentation and ultrasonic instrumentation at the cervical third, although the removing efficiency of ultrasonic instrumentation was superior. No statistically significant differences were observed for the middle or apical third.
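
The abstract does not name the statistical test that was used; for ordinal 0-2 scores from two small independent groups, the Mann-Whitney U test is one common choice. The sketch below uses invented placeholder scores, not the study's data.

```python
# Hedged sketch: the abstract does not name the test, but ordinal 0-2 scores
# from two independent groups (hand vs. ultrasonic instrumentation) are
# commonly compared with the Mann-Whitney U test. Scores are placeholders.
from scipy.stats import mannwhitneyu

hand_scores = [2, 1, 2, 1, 2, 2]        # hypothetical smear-layer scores, group 1
ultrasonic_scores = [1, 0, 1, 1, 0, 1]  # hypothetical smear-layer scores, group 2

stat, p_value = mannwhitneyu(hand_scores, ultrasonic_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```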

A SELECTION PROCEDURE FOR GOOD LOGISTICS POPULATIONS

  • Singh, Parminder;Gill, A.N.
    • Journal of the Korean Statistical Society
    • /
    • Vol. 32, No. 3
    • /
    • pp.299-309
    • /
    • 2003
  • Let ${\pi}_1, \ldots, {\pi}_k$ ($k \geq 2$) be independent logistic populations such that the cumulative distribution function (cdf) of an observation from the population ${\pi}_i$ is $$F_i(x) = \frac{1}{1 + \exp\{-\pi(x - {\mu}_i)/(\sigma\sqrt{3})\}}, \quad -\infty < x < \infty,$$ where ${\mu}_i$ ($-\infty < {\mu}_i < \infty$) is the unknown location parameter and ${\sigma}^2$ is the known variance, $i = 1, \ldots, k$. Let ${\mu}_{[k]}$ be the largest of all ${\mu}$'s; the population ${\pi}_i$ is defined to be 'good' if ${\mu}_i \geq {\mu}_{[k]} - {\delta}_1$, where ${\delta}_1 > 0$, $i = 1, \ldots, k$. A selection procedure based on the sample median is proposed to select a subset of the $k$ logistic populations which includes all the good populations with probability at least $P^{*}$ (a preassigned value). Simultaneous confidence intervals for the differences of the location parameters, which can be derived with the help of the proposed procedures, are discussed. If a population with location parameter ${\mu}_i < {\mu}_{[k]} - {\delta}_2$ (${\delta}_2 > {\delta}_1$), $i = 1, \ldots, k$, is considered 'bad', a selection procedure is proposed so that the probability of either selecting a bad population or omitting a good population is at most $1 - P^{*}$.
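
The abstract states only the $P^{*}$ requirement; a Gupta-type subset-selection rule based on sample medians usually takes the generic form below. The constant $d$ and its derivation are specific to the paper, so this is a schematic restatement, not the authors' exact rule.

```latex
% Schematic form of a median-based subset-selection rule (the constant d > 0 is
% chosen so that the P* requirement holds; generic shape only, not the authors'
% exact result).
\[
R:\quad \text{select } \pi_i
\;\Longleftrightarrow\;
\tilde{X}_i \;\ge\; \max_{1 \le j \le k} \tilde{X}_j - d,
\qquad
\inf_{\mu} P_{\mu}\bigl(\text{all good populations are selected}\bigr) \;\ge\; P^{*},
\]
% where \tilde{X}_i denotes the sample median from population \pi_i.
```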

Identification of the associations between genes and quantitative traits using entropy-based kernel density estimation

  • Yee, Jaeyong;Park, Taesung;Park, Mira
    • Genomics & Informatics
    • /
    • Vol. 20, No. 2
    • /
    • pp.17.1-17.11
    • /
    • 2022
  • Genetic associations have been quantified using a number of statistical measures. Entropy-based mutual information may be one of the more direct ways of estimating the association, in the sense that it does not depend on the parametrization. For this purpose, both the entropy and conditional entropy of the phenotype distribution should be obtained. Quantitative traits, however, do not usually allow an exact evaluation of entropy. The estimation of entropy needs a probability density function, which can be approximated by kernel density estimation. We have investigated the proper sequence of procedures for combining the kernel density estimation and entropy estimation with a probability density function in order to calculate mutual information. Genotypes and their interactions were constructed to set the conditions for conditional entropy. Extensive simulation data created using three types of generating functions were analyzed using two different kernels as well as two types of multifactor dimensionality reduction and another probability density approximation method called m-spacing. The statistical power in terms of correct detection rates was compared. Using kernels was found to be most useful when the trait distributions were more complex than simple normal or gamma distributions. A full-scale genomic dataset was explored to identify associations using the 2-h oral glucose tolerance test results and γ-glutamyl transpeptidase levels as phenotypes. Clearly distinguishable single-nucleotide polymorphisms (SNPs) and interacting SNP pairs associated with these phenotypes were found and listed with empirical p-values.
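
As a concrete illustration of the pipeline the abstract describes (kernel density estimation of the trait distribution, entropies of the marginal and genotype-conditional distributions, then mutual information), here is a minimal sketch. It is an assumption-laden stand-in, not the authors' implementation, and the simulated data are placeholders.

```python
# Minimal sketch (assumptions, not the authors' code): mutual information
# I(G; Y) = H(Y) - H(Y | G) between a discrete genotype G and a quantitative
# trait Y, with differential entropies taken from Gaussian kernel density
# estimates evaluated on a fixed grid.
import numpy as np
from scipy.stats import gaussian_kde

def kde_entropy(sample, grid_size=512):
    """Approximate differential entropy of a 1-D sample via a Gaussian KDE."""
    kde = gaussian_kde(sample)
    lo, hi = sample.min() - 3 * sample.std(), sample.max() + 3 * sample.std()
    grid = np.linspace(lo, hi, grid_size)
    dens = kde(grid)
    dx = grid[1] - grid[0]
    dens = dens / (dens.sum() * dx)              # renormalize on the grid
    return float(-np.sum(dens * np.log(dens + 1e-300)) * dx)

def mutual_information(genotypes, trait):
    """I(G; Y) = H(Y) - sum_g P(G = g) * H(Y | G = g)."""
    h_y = kde_entropy(trait)
    h_y_given_g = sum(
        (np.sum(genotypes == g) / len(trait)) * kde_entropy(trait[genotypes == g])
        for g in np.unique(genotypes)
    )
    return h_y - h_y_given_g

# Toy usage with simulated data (three genotype classes shifting the trait mean).
rng = np.random.default_rng(0)
g = rng.integers(0, 3, size=300)
y = rng.normal(loc=0.5 * g, scale=1.0, size=300)
print(f"estimated I(G; Y) = {mutual_information(g, y):.3f} nats")
```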

Adjusting for Confounders in Outcome Studies Using the Korea National Health Insurance Claim Database: A Review of Methods and Applications

  • Seung Jin Han;Kyoung Hoon Kim
    • Journal of Preventive Medicine and Public Health
    • /
    • Vol. 57, No. 1
    • /
    • pp.1-7
    • /
    • 2024
  • Objectives: Adjusting for potential confounders is crucial for producing valuable evidence in outcome studies. Although numerous studies have been published using the Korea National Health Insurance Claim Database, no study has critically reviewed the methods used to adjust for confounders. This study aimed to review these studies and suggest methods and applications to adjust for confounders. Methods: We conducted a literature search of electronic databases, including PubMed and Embase, from January 1, 2021 to December 31, 2022. In total, 278 studies were retrieved. The eligibility criteria were publication in English and being an outcome study. The literature search and article screening were independently performed by 2 authors, and finally 173 of the 278 studies were included. Results: Thirty-nine studies used matching at the study design stage, and 171 adjusted for confounders using regression analysis or propensity scores at the analysis stage. Of these, 125 conducted regression analyses based on the study questions. Propensity score matching was the most common method involving propensity scores. A total of 171 studies included age and/or sex as confounders. Comorbidities and healthcare utilization, including medications and procedures, were used as confounders in 146 and 82 studies, respectively. Conclusions: This is the first review to address the methods and applications used to adjust for confounders in recently published studies. Our results indicate that all studies adjusted for confounders with appropriate study designs and statistical methodologies; however, a thorough understanding and careful application of confounding variables are required to avoid erroneous results.
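
Since propensity score matching is reported as the most common propensity-score method, a generic sketch of 1:1 nearest-neighbor matching on a logistic-regression score follows. The data frame, column names, and confounders are hypothetical and not tied to any reviewed study.

```python
# Generic sketch of 1:1 nearest-neighbor propensity score matching (matching is
# done with replacement here; the columns and data are hypothetical).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def propensity_match(df, treatment_col, confounder_cols):
    """Pair each treated row with the control row whose propensity score is nearest."""
    X, t = df[confounder_cols].to_numpy(), df[treatment_col].to_numpy()
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    treated, control = df[t == 1].copy(), df[t == 0].copy()
    nn = NearestNeighbors(n_neighbors=1).fit(ps[t == 0].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[t == 1].reshape(-1, 1))
    matched_controls = control.iloc[idx.ravel()]
    return pd.concat([treated, matched_controls])

# Hypothetical usage: age and sex as confounders for a binary exposure.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.integers(40, 80, 500),
    "sex": rng.integers(0, 2, 500),
    "exposed": rng.integers(0, 2, 500),
})
matched = propensity_match(df, "exposed", ["age", "sex"])
print(matched["exposed"].value_counts())
```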

Reliability and Validity Study on the Korean Version of the Fullerton Advanced Balance Scale

  • 김경모
    • 한국전문물리치료학회지
    • /
    • Vol. 23, No. 1
    • /
    • pp.31-37
    • /
    • 2016
  • Background: Assessment tools developed in other countries should be translated into Korean using rigorous methodological approaches before being used in Korea. Because such translation procedures alone are insufficient to establish cross-cultural and linguistic equivalence, statistical verification is also needed. The Fullerton Advanced Balance Scale was translated into Korean and its content validity was verified through the back-translation method, but its reliability and validity have not yet been proven by statistical methods. Objects: The purpose of this study was to investigate the reliability and validity of the Korean version of the Fullerton Advanced Balance Scale (KFAB) in elderly people by statistical methods. Methods: A total of 97 elderly adults (39 males and 58 females) participated in this study. Internal consistency of the KFAB was measured using Cronbach's alpha, and an intraclass correlation coefficient (ICC) was used to assess test-retest reliability between the two measurement sessions. Concurrent validity was measured by comparing the KFAB responses with the Korean version of the Berg Balance Scale (KBBS) using the Spearman correlation coefficient. Construct validity of the KFAB was measured using exploratory factor analysis to evaluate the unidimensionality of the questionnaire. The significance level was set at ${\alpha}=.05$. Results: The internal consistency of the KFAB was found to be adequate (Cronbach's alpha=.96), and test-retest reliability was excellent as evidenced by the high ICC (r=.996). Concurrent validity showed a high correlation between the KFAB and the KBBS (r=.89, p<.001). Construct validity was evaluated using exploratory factor analysis. The result of the Bartlett test of sphericity was statistically significant (p<.001), and the value of the Kaiser-Meyer-Olkin measure of sampling adequacy was .93. Exploratory factor analysis revealed the existence of only one dominant factor that explained 76.43% of the variance. Conclusion: The KFAB is a reliable, valid and appropriate tool for measuring balance function in elderly people.
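
For readers who want to reproduce the two reliability statistics reported above, a small sketch with the standard formulas (Cronbach's alpha and a two-way random-effects ICC(2,1)) is given below. The data are simulated placeholders, and the abstract does not state which ICC form the study used.

```python
# Sketch of the two reliability statistics reported in the abstract, computed
# from scratch with NumPy (hypothetical data; formulas follow the usual
# definitions, e.g. Shrout & Fleiss for a two-way random-effects ICC).
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def icc_2_1(sessions):
    """sessions: (n_subjects, n_sessions) matrix, e.g. test and retest scores."""
    y = np.asarray(sessions, dtype=float)
    n, k = y.shape
    gm = y.mean()
    msb = k * ((y.mean(axis=1) - gm) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((y.mean(axis=0) - gm) ** 2).sum() / (k - 1)   # between sessions
    resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0) + gm
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (msb - mse) / (msb + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical usage: 10 item scores for 97 subjects, plus a retest total score.
rng = np.random.default_rng(2)
ability = rng.normal(size=(97, 1))
item_scores = ability + rng.normal(scale=0.3, size=(97, 10))
test_retest = np.column_stack([item_scores.sum(axis=1),
                               item_scores.sum(axis=1) + rng.normal(scale=0.5, size=97)])
print(f"Cronbach's alpha = {cronbach_alpha(item_scores):.3f}")
print(f"ICC(2,1)         = {icc_2_1(test_retest):.3f}")
```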

Capabilities of stochastic response surface method and response surface method in reliability analysis

  • Jiang, Shui-Hua;Li, Dian-Qing;Zhou, Chuang-Bing;Zhang, Li-Min
    • Structural Engineering and Mechanics
    • /
    • Vol. 49, No. 1
    • /
    • pp.111-128
    • /
    • 2014
  • The stochastic response surface method (SRSM) and the response surface method (RSM) are often used for structural reliability analysis, especially for reliability problems with implicit performance functions. This paper aims to compare these two methods in terms of fitting the performance function, and accuracy and efficiency in estimating the probability of failure as well as the statistical moments of the system output response. The computational procedures of the two response surface methods are briefly introduced first. Then their capabilities are demonstrated and compared in detail through two examples. The results indicate that the probability of failure mainly reflects the accuracy of the response surface function (RSF) fitting the performance function in the vicinity of the design point, while the statistical moments of the system output response reflect the accuracy of the RSF fitting the performance function in the entire space. In addition, the performance function can be well fitted by the SRSM with an optimal-order polynomial chaos expansion both in the entire physical space and in the independent standard normal space. However, it can only be well fitted by the RSM in the vicinity of the design point. For reliability problems involving random variables with approximately normal distributions, such as normal, lognormal, and Gumbel Max distributions, both the probability of failure and the statistical moments of the system output response can be accurately estimated by the SRSM, whereas the RSM can only produce the probability of failure with reasonable accuracy.
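
A minimal sketch of the plain RSM workflow discussed above (fit a quadratic response surface to a handful of performance-function evaluations, then run Monte Carlo on the cheap surrogate to obtain the failure probability and output moments) is shown below. The limit-state function is a hypothetical stand-in, not one of the paper's examples.

```python
# Minimal RSM sketch: fit a quadratic response surface g_hat(x) to evaluations
# of a performance function g(x), then estimate P(g < 0) and the output moments
# by Monte Carlo on the surrogate. The limit state below is hypothetical.
import numpy as np

def g(x1, x2):
    return 1.5 + 0.3 * x1 + 0.1 * x1 ** 2 - 0.8 * x2   # hypothetical limit state

def quadratic_features(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

# Design points around the mean of the random variables (here standard normal).
rng = np.random.default_rng(3)
x_design = rng.normal(size=(30, 2))
y_design = g(x_design[:, 0], x_design[:, 1])

# Least-squares fit of the quadratic response surface function (RSF).
coef, *_ = np.linalg.lstsq(quadratic_features(x_design[:, 0], x_design[:, 1]),
                           y_design, rcond=None)

# Monte Carlo on the surrogate.
x_mc = rng.normal(size=(200_000, 2))
g_hat = quadratic_features(x_mc[:, 0], x_mc[:, 1]) @ coef
print(f"P_f  ~ {np.mean(g_hat < 0):.4f}")
print(f"mean ~ {g_hat.mean():.3f}, std ~ {g_hat.std():.3f}")
```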

A Test of the Rank Conditions in the Simultaneous Equation Models

  • 소선하;박유성;이동희
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 16, No. 1
    • /
    • pp.115-125
    • /
    • 2009
  • Among the models used in business and economics, the simultaneous equation model is a system of regression equations consisting of M equations and T observations, composed of endogenous variables determined within the model and exogenous variables determined outside it. The order condition and the rank condition are the criteria for deciding whether the model's parameters are identified and whether a unique solution exists. In practice, however, the parameters of most simultaneous equation models are estimated under the assumption that these conditions are satisfied, so depending on whether they actually hold, the estimates may be inefficient or a unique parameter estimate may not exist. In this study, we propose a new test statistic for testing whether the rank condition is satisfied under the assumption that the order condition holds, derive its asymptotic distribution, and examine the power of the proposed test statistic through a simulation study.
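
As background for the rank-condition test proposed above, the following sketch checks the classical population rank condition numerically for a hypothetical structural system; it illustrates what the condition states, not the paper's test statistic.

```python
# Sketch of the classical rank condition: with structural system B*y_t + G*x_t = e_t,
# equation i is identified iff the columns of A = [B | G] corresponding to the
# variables excluded from equation i have rank M - 1. The 3-equation system
# below is hypothetical.
import numpy as np

def rank_condition(A, eq, tol=1e-10):
    """A: (M, M+K) structural coefficient matrix [B | Gamma]; eq: equation index."""
    M = A.shape[0]
    excluded = np.abs(A[eq]) < tol                 # variables absent from equation eq
    return np.linalg.matrix_rank(A[:, excluded]) == M - 1

# Hypothetical system with 3 endogenous (B) and 3 exogenous (Gamma) variables.
B = np.array([[1.0, -0.5, 0.0],
              [0.0,  1.0, -0.3],
              [-0.2, 0.0,  1.0]])
Gamma = np.array([[0.7, 0.0, 0.0],
                  [0.0, 0.4, 0.0],
                  [0.0, 0.0, 0.9]])
A = np.hstack([B, Gamma])
for i in range(3):
    print(f"equation {i}: rank condition {'holds' if rank_condition(A, i) else 'fails'}")
```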

Introduction to the production procedure of representative annual maximum precipitation scenario for different durations based on climate change with statistical downscaling approaches

  • 이태삼
    • 한국수자원학회논문집
    • /
    • Vol. 51, No. spc1
    • /
    • pp.1057-1066
    • /
    • 2018
  • Climate change strongly affects the frequency and magnitude of the extreme rainfall that is the main cause of flooding. In particular, most large-scale disasters in Korea are flood damage caused by rainfall, and this damage is evolving into a new pattern of disaster as the frequency of extreme rainfall events rises with climate change. However, because of the limited resolution of future climate change scenario data, it is impossible to obtain data at the level required for small and medium-sized streams and urban basins. To address this problem, this study introduces the methods and procedures developed to temporally downscale the climate change scenarios produced by global climate models through several stages of statistical downscaling, tailored to the characteristics of each site, so that frequency analysis of the future scenarios is possible over the whole of Korea. The temporally downscaled data can then be used for frequency analysis of future rainfall and for estimating the target rainfall for disaster-prevention performance under climate change.

A Study on the Build of a QbD Six Sigma System to Promote Quality Improvement (QbD) Based on Drug Design

  • 김강희;김현정
    • 품질경영학회지
    • /
    • Vol. 50, No. 3
    • /
    • pp.373-386
    • /
    • 2022
  • Purpose: This study proposes applying the Six Sigma management innovation method to execute Quality by Design (QbD) activities more systematically. QbD requires a deeper understanding of the product and process at the drug design and development stage, and thorough process control is essential to ensure that defects are not generated in the first place. Methods: We analyzed the background and specific procedures of drug-design-based quality improvement and the key contents of each step, identified the differences from and common points with the Six Sigma methodology, and propose a new Six Sigma management innovation model suitable for the pharmaceutical industry. Results: Regulatory agencies demand statistical analysis results as scientific evidence when medicines are developed through drug-design-based quality improvement activities. Using the Six Sigma education system to build statistical analysis capacity, together with the Six Sigma Belt system, helps members of the pharmaceutical industry systematically strengthen the execution of drug-design-based quality improvement activities. Conclusion: By using QbD Six Sigma, which combines drug-design-based quality improvement with a Six Sigma methodology suited to the pharmaceutical industry, both pharmaceutical companies and regulators can obtain satisfactory results through appropriate statistical analysis methods for preparing the scientific evidence required by regulators.

Estimating the unconfined compression strength of low plastic clayey soils using gene-expression programming

  • Muhammad Naqeeb Nawaz;Song-Hun Chong;Muhammad Muneeb Nawaz;Safeer Haider;Waqas Hassan;Jin-Seop Kim
    • Geomechanics and Engineering
    • /
    • Vol. 33, No. 1
    • /
    • pp.1-9
    • /
    • 2023
  • The unconfined compression strength (UCS) of soils is commonly used either before or during the construction of geo-structures. In the pre-design stage, UCS as a mechanical property is obtained through a laboratory test that requires cumbersome procedures and high costs from in-situ sampling and sample preparation. As an alternative, an empirical model established from limited testing cases is used to estimate the UCS economically. However, the many parameters affecting the one-dimensional soil compression response hinder the use of traditional statistical analysis. In this study, gene expression programming (GEP) is adopted to develop a prediction model of UCS from common affecting soil properties. A total of 79 undisturbed soil samples are collected, of which 54 samples are utilized for the generation of the predictive model and 25 samples are used to validate the proposed model. Experimental studies are conducted to measure the unconfined compression strength and basic soil index properties. A performance assessment of the prediction model is carried out using statistical checks including the correlation coefficient (R), the root mean square error (RMSE), the mean absolute error (MAE), the relative squared error (RSE), and external criteria checks. The prediction model achieved excellent accuracy, with values of R, RMSE, MAE, and RSE of 0.98, 10.01, 7.94, and 0.03, respectively, for the training data and 0.92, 19.82, 14.56, and 0.15, respectively, for the testing data. From the sensitivity analysis and parametric study, the liquid limit and fine content are found to be the most sensitive parameters, whereas the sand content is the least critical parameter.
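
The four statistical checks quoted above are standard formulas; a short sketch computing them for measured versus predicted UCS values follows, with hypothetical placeholder numbers rather than the study's data.

```python
# Sketch of the performance statistics quoted in the abstract (R, RMSE, MAE,
# RSE), computed for measured vs. predicted UCS values (placeholder data).
import numpy as np

def performance_stats(measured, predicted):
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    r = np.corrcoef(measured, predicted)[0, 1]                  # correlation coefficient
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))        # root mean square error
    mae = np.mean(np.abs(measured - predicted))                 # mean absolute error
    rse = np.sum((measured - predicted) ** 2) / np.sum((measured - measured.mean()) ** 2)
    return r, rmse, mae, rse

measured = np.array([110.0, 95.0, 150.0, 80.0, 132.0])   # hypothetical UCS values
predicted = np.array([105.0, 99.0, 143.0, 85.0, 128.0])
print("R=%.3f RMSE=%.2f MAE=%.2f RSE=%.3f" % performance_stats(measured, predicted))
```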