Title/Summary/Keyword: statistical approach

Theoretical Considerations for the Agresti-Coull Type Confidence Interval in Misclassified Binary Data (오분류된 이진자료에서 Agresti-Coull유형의 신뢰구간에 대한 이론적 고찰)

  • Lee, Seung-Chun
    • Communications for Statistical Applications and Methods, v.18 no.4, pp.445-455, 2011
  • Although misclassified binary data occur frequently in practice, the statistical methodology available for such data is rather limited. In particular, interval estimation of the population proportion has relied on the classical Wald method. Recently, Lee and Choi (2009) developed a new confidence interval by applying the Agresti-Coull approach and demonstrated its efficiency numerically, but a theoretical justification had not yet been explored. Therefore, a Bayesian model for misclassified binary data is developed to consider the Agresti-Coull confidence interval from a theoretical point of view. It is shown that the Agresti-Coull confidence interval is essentially a Bayesian confidence interval.
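
For reference, the standard Agresti-Coull interval that Lee and Choi (2009) build on adds z²/2 pseudo-successes and z² pseudo-trials before computing a Wald-type interval. A minimal Python sketch of that baseline interval (not the paper's misclassification-adjusted version) could look like this:

```python
from scipy.stats import norm

def agresti_coull_interval(x, n, alpha=0.05):
    """Standard Agresti-Coull interval for a binomial proportion."""
    z = norm.ppf(1 - alpha / 2)          # e.g. 1.96 for a 95% interval
    n_tilde = n + z**2                   # adjusted number of trials
    p_tilde = (x + z**2 / 2) / n_tilde   # adjusted proportion
    half_width = z * (p_tilde * (1 - p_tilde) / n_tilde) ** 0.5
    return max(0.0, p_tilde - half_width), min(1.0, p_tilde + half_width)

print(agresti_coull_interval(x=8, n=40))  # roughly (0.102, 0.350)
```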

Fault Location and Classification of Combined Transmission System: Economical and Accurate Statistic Programming Framework

  • Tavalaei, Jalal; Habibuddin, Mohd Hafiz; Khairuddin, Azhar; Mohd Zin, Abdullah Asuhaimi
    • Journal of Electrical Engineering and Technology, v.12 no.6, pp.2106-2117, 2017
  • An effective statistical feature-extraction approach for sampling fault data in a combined transmission system is presented in this paper. The proposed algorithm achieves high accuracy at minimum cost in predicting fault location and classifying fault type, and requires impedance measurement data from only one end of the transmission line. Modal decomposition is used to extract the positive-sequence impedance, and the fault signal is then decomposed using the discrete wavelet transform. Statistical sampling extracts appropriate fault features from the decomposed signal to train the classifier, and a Support Vector Machine (SVM) is used to demonstrate the performance of the statistical sampling. The overall sampling time does not exceed 1 1/4 cycles, including the interval time: the method samples in two steps, the first taking 3/4 cycle of the during-fault impedance and the second taking 1/4 cycle of the post-fault impedance, with an assumed interval of 1/4 cycle between them. Extensive studies using MATLAB show accurate fault location estimation and fault type classification by the proposed method. The classifier results are presented and compared with well-established travelling-wave methods, and the performance of the algorithms is analyzed and discussed.
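
The pipeline the abstract describes (wavelet decomposition of the fault signal, statistical summaries of each sub-band, and an SVM classifier) can be illustrated roughly as below. The wavelet, the choice of statistics, and the stand-in data are assumptions for illustration, not the authors' exact configuration:

```python
import numpy as np
import pywt                               # PyWavelets, for the discrete wavelet transform
from sklearn.svm import SVC

def statistical_features(signal, wavelet="db4", level=3):
    """Decompose a signal with the DWT and summarize each sub-band
    with simple statistics (mean, std, energy)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:
        feats += [np.mean(band), np.std(band), np.sum(band**2)]
    return np.array(feats)

# Hypothetical stand-in data: 20 impedance windows of 256 samples, two fault classes.
rng = np.random.default_rng(0)
fault_windows = rng.normal(size=(20, 256))
fault_type_labels = rng.integers(0, 2, size=20)

X = np.vstack([statistical_features(w) for w in fault_windows])
clf = SVC(kernel="rbf").fit(X, fault_type_labels)
print(clf.predict(X[:3]))
```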

A New Approach to Statistical Analysis of Electrical Fire and Classification of Electrical Fire Causes

  • Kim, Doo-Hyun; Lee, Jong-Ho; Kim, Sung-Chul
    • International Journal of Safety, v.6 no.2, pp.17-21, 2007
  • This paper presents a statistical analysis of electrical fires and a classification of electrical fire causes intended to support efficient collection of electrical fire data. Electrical fire statistics are produced to monitor the number and characteristics of fires attended by fire fighters, including their causes and effects, so that action can be taken to reduce the human and financial cost of fire. Electrical fires account for a large share of fires in Korea (nearly 30% of total fires according to recent figures), and incorrect or biased knowledge of electrical fires has shifted the classification of certain types of fires from non-electrical to electrical. A standardized form is needed that lets fire fighters assess the cause of an electrical fire by ticking the appropriate box on the fire report form or by evaluating a text description. It is therefore highly recommended that an electrical fire cause classification and assessment scheme be developed for fire statistics so that electrical fires can be categorized and assessed accurately. This paper proposes a newly developed electrical fire cause classification structure, a well-defined hierarchy in which cause categories neither overlap nor depend on one another. Fire statistics systems of foreign countries are also introduced and compared.

Two-Stage Logistic Regression for Cancer Classification and Prediction from Copy-Number Changes in cDNA Microarray-Based Comparative Genomic Hybridization

  • Kim, Mi-Jung
    • The Korean Journal of Applied Statistics, v.24 no.5, pp.847-859, 2011
  • cDNA microarray-based comparative genomic hybridization (CGH) data include low-intensity spots, so a statistical strategy is needed to detect subtle differences between cancer classes. In this study, genes displaying a high frequency of alteration in one of the classes were selected from among pre-selected genes showing relatively large between-gene variation compared to total variation. Using copy-number changes of the selected genes, this study suggests a statistical approach that predicts patients' classes with improved performance by pre-classifying patients with similar genetic alteration scores. A two-stage logistic regression model (TLRM) was suggested to pre-classify homogeneous patients and predict their classes for cancer prediction: a decision tree (DT) was combined with logistic regression on the set of informative genes. TLRM was constructed on cDNA microarray-based CGH data from the Cancer Metastasis Research Center (CMRC) at Yonsei University; it predicted the patients' clinical diagnoses with perfect matches (except for one patient) among the high-risk and low-risk classified patients, where prediction performance is critical due to the high sensitivity and specificity required for clinical treatment. Accuracy validated by leave-one-out cross-validation (LOOCV) was 83.3%, while other classification methods (CART and DT) fitted for comparison performed worse than TLRM.
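
The two-stage idea, as best it can be read from the abstract, pairs a shallow decision tree that pre-classifies patients into homogeneous groups with a separate logistic regression inside each group. A hedged sketch, with synthetic data and an illustrative fallback rule rather than the paper's TLRM specification:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))                 # hypothetical gene alteration scores
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # hypothetical cancer class labels

# Stage 1: a shallow tree groups patients with similar alteration profiles.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
groups = tree.apply(X)                       # leaf index of each patient

# Stage 2: fit a separate logistic regression within each leaf.
models = {}
for g in np.unique(groups):
    idx = groups == g
    if len(np.unique(y[idx])) > 1:           # both classes needed to fit
        models[g] = LogisticRegression().fit(X[idx], y[idx])

def predict(x):
    g = tree.apply(x.reshape(1, -1))[0]
    if g in models:
        return models[g].predict(x.reshape(1, -1))[0]
    return tree.predict(x.reshape(1, -1))[0]  # pure leaf: keep the tree's label

print(predict(X[0]))
```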

Event date model: a robust Bayesian tool for chronology building

  • Lanos, Philippe; Philippe, Anne
    • Communications for Statistical Applications and Methods, v.25 no.2, pp.131-157, 2018
  • We propose a robust event date model to estimate the date of a target event from a combination of individual dates obtained from archaeological artifacts assumed to be contemporaneous. These dates are affected by errors of different types: laboratory and calibration curve errors, irreducible errors related to contamination, and taphonomic disturbances, hence the possible presence of outliers. Modeling based on a hierarchical Bayesian statistical approach provides a simple way to automatically penalize outlying data without having to remove them from the dataset. Prior information on the individual irreducible errors is introduced using a uniform shrinkage density with minimal assumptions about the Bayesian parameters. We show that the event date model is more robust than the models implemented in BCal or OxCal, although it generally yields less precise credibility intervals. The model is extended to stratigraphic sequences involving several events with temporal order constraints (relative dating) or with duration and hiatus constraints. Calculations are based on Markov chain Monte Carlo (MCMC) techniques and can be performed with the ChronoModel software, which is free, open source, and cross-platform. Features of the software are presented in Vibet et al. (ChronoModel v1.5 user's manual, 2016). Finally, we compare our prior on event dates implemented in ChronoModel with the priors in BCal and OxCal, which involve supplementary parameters defined as boundaries of phases or sequences.
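
A stripped-down reading of the event model is that each observed date t_i is drawn from N(e, s_i^2 + sigma_i^2) around the event date e, with a uniform shrinkage prior on each irreducible variance sigma_i^2. The sketch below samples this by random-walk Metropolis; the data, shrinkage scale, and proposal widths are illustrative assumptions, and ChronoModel's actual sampler is considerably more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical individual dates (years) and their laboratory standard errors.
t = np.array([1005., 1020., 995., 1150.])    # the last one acts as an outlier
s = np.array([15., 20., 10., 12.])
s0 = np.median(s)                            # shrinkage scale, one common choice

def log_post(e, sig2):
    # Likelihood: t_i ~ N(e, s_i^2 + sigma_i^2); uniform shrinkage prior
    # pi(sigma^2) = s0^2 / (s0^2 + sigma^2)^2 on each variance.
    var = s**2 + sig2
    loglik = -0.5 * np.sum((t - e) ** 2 / var + np.log(var))
    logprior = np.sum(np.log(s0**2) - 2 * np.log(s0**2 + sig2))
    return loglik + logprior

e, sig2, draws = t.mean(), np.ones_like(t), []
for _ in range(20000):
    # Random-walk Metropolis updates for the event date and the variances.
    e_new = e + rng.normal(0, 5.0)
    if np.log(rng.uniform()) < log_post(e_new, sig2) - log_post(e, sig2):
        e = e_new
    sig2_new = np.abs(sig2 + rng.normal(0, 20.0, size=sig2.shape))  # reflected walk
    if np.log(rng.uniform()) < log_post(e, sig2_new) - log_post(e, sig2):
        sig2 = sig2_new
    draws.append(e)

print(np.mean(draws[5000:]), np.percentile(draws[5000:], [2.5, 97.5]))
```

Outliers like the fourth date get a large sigma_i^2 and are automatically down-weighted, which is the "penalize without removing" behavior the abstract highlights.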

QFD Applied to Road Traffic Accident Management by Police Station (경찰서별 도로교통사고 관리를 위한 품질기능전개의 적용)

  • Son, So-Yeong; Choi, Hong
    • Journal of Korean Society of Transportation, v.17 no.3, pp.21-30, 1999
  • One of the major tasks of a police station is the management of road traffic accidents. Each police station is responsible for keeping Traffic Accident Records (TAR), which can be used as the basis of statistical analyses. Results of such analyses have been applied to establishing effective traffic plans and safety policies at the macro level. In this paper, we apply QFD so that each police station can set and implement specific policies according to local characteristics. Cluster analysis is employed to find black spots in each local area, and Poisson regression is used to identify the area-specific factors related to various types of road accidents. Results of these statistical analyses are fed into QFD. Our approach is expected to contribute to reducing various types of area-specific road traffic accidents.
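
As a rough illustration of the Poisson regression step, a generalized linear model relating hypothetical area-level factors to accident counts might be fitted as follows (the covariates and data are stand-ins, not the paper's TAR variables):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
# Hypothetical area-level factors (e.g. traffic volume, road-type share)
# and accident counts per police-station district.
X = sm.add_constant(rng.normal(size=(50, 2)))
counts = rng.poisson(np.exp(X @ np.array([1.0, 0.5, -0.3])))

# Poisson regression identifies which area-specific factors relate to counts.
model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(model.params)   # exponentiated coefficients give rate ratios
```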

Automatic Generation of Multiple-Choice Questions Based on Statistical Language Model (통계 언어모델 기반 객관식 빈칸 채우기 문제 생성)

  • Park, Youngki
    • Journal of The Korean Association of Information Education, v.20 no.2, pp.197-206, 2016
  • Fill-in-the-blank questions with multiple choices are widely used in classrooms to check whether students understand what is being taught. Although many algorithms have been proposed for generating this type of question, most of them focus on preparing sentences with blanks rather than on generating the multiple choices. In this paper, we propose a novel algorithm for generating multiple choices given a sentence with a blank. Because the algorithm is based on a statistical language model, it generates relatively unbiased results and allows the level of difficulty to be adjusted with ease. The experimental results show that our approach automatically produces multiple choices similar to those written by exam writers.
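
A toy version of the idea scores candidate words for the blank with an n-gram language model and keeps the highest-scoring wrong answers as distractors. The corpus, add-one smoothing, and bigram scoring below are illustrative assumptions rather than the paper's exact model:

```python
from collections import Counter

# A toy corpus stands in for the language model's training data.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "a cat lay on the mat .").split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def score(prev_word, candidate, next_word):
    """Bigram likelihood of `candidate` filling the blank between its
    neighbours, with add-one smoothing."""
    v = len(unigrams)
    left = (bigrams[(prev_word, candidate)] + 1) / (unigrams[prev_word] + v)
    right = (bigrams[(candidate, next_word)] + 1) / (unigrams[candidate] + v)
    return left * right

# Sentence: "the ___ sat on the mat", correct answer "cat".
candidates = [w for w in unigrams if w != "cat"]
distractors = sorted(candidates, key=lambda w: score("the", w, "sat"), reverse=True)[:3]
print(distractors)   # plausible but wrong choices, e.g. ['dog', ...]
```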

Functional Data Classification of Variable Stars

  • Park, Minjeong; Kim, Donghoh; Cho, Sinsup; Oh, Hee-Seok
    • Communications for Statistical Applications and Methods, v.20 no.4, pp.271-281, 2013
  • This paper considers the problem of classifying variable stars based on functional data analysis. For a better understanding of galaxy structure and stellar evolution, various approaches to the classification of variable stars have been studied. Several features that characterize variable stars (such as color index, amplitude, period, and Fourier coefficients) are usually used for classification. Focusing only on the curve shapes of variable stars and excluding other factors, Deb and Singh (2009) proposed a classification procedure using multivariate principal component analysis. However, that approach cannot easily accommodate light curve data that are unequally spaced in the phase domain and have functional properties. In this paper, we propose a light curve estimation method suitable for functional data analysis, and provide a classification procedure for variable stars that combines light curve features with existing functional data analysis methods. To evaluate its practical applicability, we apply the proposed classification procedure to data sets of variable stars from the STellar Astrophysics and Research on Exoplanets (STARE) project.
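
One way to realize the functional treatment described here is to smooth each unequally spaced, phase-folded light curve onto a common grid so that curves become comparable functional observations, then classify the gridded curves. The spline smoother, grid, classifier, and simulated curves below are assumptions, not the authors' procedure:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
GRID = np.linspace(0, 1, 50)

def to_functional(phase, mag):
    """Smooth an unequally spaced, phase-folded light curve onto a common
    grid so curves become comparable functional observations."""
    order = np.argsort(phase)
    spline = UnivariateSpline(phase[order], mag[order], s=1.0)
    return spline(GRID)

# Hypothetical stand-in light curves: two shape classes plus noise.
curves, labels = [], []
for i in range(40):
    phase = np.sort(rng.uniform(0, 1, 80))      # unequally spaced phases
    shape = np.sin(2 * np.pi * phase) if i % 2 else np.sin(4 * np.pi * phase)
    curves.append(to_functional(phase, shape + rng.normal(0, 0.1, 80)))
    labels.append(i % 2)

clf = KNeighborsClassifier().fit(np.vstack(curves), labels)
```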

Wage Determinants Analysis by Quantile Regression Tree

  • Chang, Young-Jae
    • Communications for Statistical Applications and Methods, v.19 no.2, pp.293-301, 2012
  • Quantile regression, proposed by Koenker and Bassett (1978), is a statistical technique that estimates conditional quantiles. Its advantage is robustness to large outliers compared to ordinary least squares (OLS) regression. A regression tree approach has been applied to OLS problems to fit flexible models, and Loh (2002) proposed the GUIDE algorithm, which has negligible selection bias and relatively low computational cost. Quantile regression can be regarded as an analogue of OLS, so it can also be applied to the GUIDE regression tree method. Chaudhuri and Loh (2002) proposed a nonparametric quantile regression method that blends key features of piecewise polynomial quantile regression and tree-structured regression based on adaptive recursive partitioning. Lee and Lee (2006) investigated wage determinants in the Korean labor market using the Korean Labor and Income Panel Study (KLIPS). Following Lee and Lee, we fit three kinds of quantile regression tree models to the KLIPS data at the quantiles 0.05, 0.2, 0.5, 0.8, and 0.95. Among the three models, the multiple linear piecewise quantile regression model forms the shortest tree structure, while the piecewise constant quantile regression model has a deeper tree structure with more terminal nodes in general. Age, gender, marital status, and education appear to determine the wage level throughout the quantiles; in addition, education experience appears to be an important determinant of the wage level in the highly paid group.
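
The quantile regression building block itself can be illustrated with statsmodels. The wage data below are simulated so that the slope differs across quantiles, which is exactly the kind of heterogeneity a quantile regression tree would then partition on:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
# Simulated wage data: the spread of log wages grows with experience,
# so upper-quantile slopes exceed lower-quantile slopes.
n = 300
df = pd.DataFrame({"exper": rng.uniform(0, 30, n)})
df["logwage"] = 2.0 + 0.03 * df["exper"] + rng.normal(0, 0.1 + 0.01 * df["exper"])

# Fit conditional quantiles at the levels used in the paper.
for q in [0.05, 0.2, 0.5, 0.8, 0.95]:
    fit = smf.quantreg("logwage ~ exper", df).fit(q=q)
    print(q, round(fit.params["exper"], 4))
```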

A Statistical Approach to Examine the Impact of Various Meteorological Parameters on Pan Evaporation

  • Pandey, Swati; Kumar, Manoj; Chakraborty, Soubhik; Mahanti, N.C.
    • The Korean Journal of Applied Statistics, v.22 no.3, pp.515-530, 2009
  • Evaporation from surface water bodies is influenced by a number of meteorological parameters; the rate of evaporation is primarily controlled by incoming solar radiation, air and water temperature, wind speed, and relative humidity. In the present study, the influence of weekly meteorological variables (air temperature, relative humidity, bright sunshine hours, wind speed, wind velocity, and rainfall) on the rate of evaporation was examined using 35 years (1971-2005) of meteorological data. The statistical analysis employed linear regression models, which were tested for goodness of fit and multicollinearity, along with normality and constant-variance tests. The regression models were then validated against observed and predicted parameter estimates using the meteorological data for 2005. The models were further checked with time-ordered residual plots to identify trends in the scatter, and new standardized regression models were developed using standardized equations. The highest significant positive correlation was observed between pan evaporation and maximum air temperature. Mean air temperature and wind velocity have a highly significant influence on pan evaporation, whereas minimum air temperature, relative humidity, and wind direction have no such significant influence.
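
A minimal sketch of the regression-plus-diagnostics workflow (an OLS fit followed by a multicollinearity check via variance inflation factors) is given below on simulated weekly data; the variables and coefficients are hypothetical:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(6)
# Hypothetical weekly data: air temperature, relative humidity, wind speed.
n = 200
temp = rng.normal(30, 5, n)
rh = rng.normal(60, 10, n)
wind = rng.normal(8, 2, n)
evap = 0.3 * temp - 0.05 * rh + 0.2 * wind + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([temp, rh, wind]))
fit = sm.OLS(evap, X).fit()
print(fit.params)

# Check multicollinearity with variance inflation factors (VIF > 10 is a red flag).
print([variance_inflation_factor(X, i) for i in range(1, X.shape[1])])
```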