Title/Summary/Keyword: statistical approach

Star Visibility Analysis for a Low Earth Orbit Satellite

  • Yim, Jo-Ryeong;Lee, Seon-Ho;Yong, Ki-Lyuk
    • Bulletin of the Korean Space Science Society / 2008.10a / pp.28.2-28.2 / 2008
  • Recently, star sensors have been successfully used as the main attitude sensors for attitude control in many satellites. This research presents a star visibility analysis for star trackers; the goal is to verify that the star tracker implementation suits the mission profile and scenario and satisfies the requirements of the attitude and orbit control system. As the main optical attitude sensor imaging stars, a star tracker's accommodation should be optimized to improve its probability of use by avoiding blinding (unavailability) caused by the Sun and the Earth. Two methods are used for the analysis: a statistical approach and a time simulation approach. The statistical approach generates numerous cases to derive relevant statistics on Earth and Sun proximity probabilities for different lines of sight. The time simulation approach is performed over one orbit to check and refine the statistical results and the star tracker accommodations. To perform the simulations, an orbit and specific mission profiles of the satellite are first set; next, the Earth and Sun proximity probabilities are calculated considering the attitude maneuvers and the geometry of the orbit; finally, the unavailability positions are estimated. As a result, optimized accommodations of two star trackers are suggested for the low Earth orbit satellite.
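
The statistical approach described in this abstract amounts to a Monte Carlo survey of candidate boresight directions against Sun and Earth exclusion cones. The sketch below is a minimal illustration: the exclusion half-angles, the uniform direction sampling, and the sample counts are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical exclusion half-angles (assumptions, not from the paper).
SUN_EXCLUSION_DEG = 30.0
EARTH_EXCLUSION_DEG = 25.0

def random_unit_vectors(n):
    """Draw n directions uniformly on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def blinding_probability(boresight, sun_dirs, earth_dirs):
    """Fraction of sampled geometries in which the tracker is blinded,
    i.e. the Sun or the Earth falls inside the exclusion cone."""
    cos_sun = np.cos(np.radians(SUN_EXCLUSION_DEG))
    cos_earth = np.cos(np.radians(EARTH_EXCLUSION_DEG))
    sun_hit = sun_dirs @ boresight > cos_sun
    earth_hit = earth_dirs @ boresight > cos_earth
    return np.mean(sun_hit | earth_hit)

# Generate many geometry cases (stand-ins for the paper's mission-profile
# sampling) and score a few candidate boresight directions.
sun_dirs = random_unit_vectors(100_000)
earth_dirs = random_unit_vectors(100_000)
for boresight in random_unit_vectors(5):
    p = blinding_probability(boresight, sun_dirs, earth_dirs)
    print(f"boresight {np.round(boresight, 2)}: blinding probability {p:.3f}")
```

In a real LEO analysis the Earth direction would be tied to orbit propagation rather than sampled uniformly; the uniform draw here only illustrates the statistics-over-many-cases idea.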

A new Bayesian approach to derive Paris' law parameters from S-N curve data

  • Prabhu, Sreehari Ramachandra;Lee, Young-Joo;Park, Yeun Chul
    • Structural Engineering and Mechanics / v.69 no.4 / pp.361-369 / 2019
  • The determination of Paris' law parameters based on crack growth experiments is an important procedure of fatigue life assessment. However, it is a challenging task because it involves various sources of uncertainty. This paper proposes a novel probabilistic method, termed the S-N Paris law (SNPL) method, to quantify the uncertainties underlying the Paris' law parameters, by finding the best estimates of their statistical parameters from the S-N curve data using a Bayesian approach. Through a series of steps, the SNPL method determines the statistical parameters (e.g., mean and standard deviation) of the Paris' law parameters that will maximize the likelihood of observing the given S-N data. Because the SNPL method is based on a Bayesian approach, the prior statistical parameters can be updated when additional S-N test data are available. Thus, information on the Paris' law parameters can be obtained with greater reliability. The proposed method is tested by applying it to S-N curves of 40H steel and 20G steel, and the corresponding analysis results are in good agreement with the experimental observations.
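
The core of the SNPL method is fitting statistical parameters of the Paris' law constants so that predicted fatigue lives match the S-N observations. The sketch below shows only the likelihood-maximization step under a closed-form Paris integration; the crack geometry (A0, AC, Y), the synthetic data, and treating m as a single fitted value are assumptions, and the paper's full Bayesian prior updating is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Illustrative geometry/material assumptions (not from the paper).
A0, AC, Y = 1e-3, 20e-3, 1.0   # initial/critical crack size [m], geometry factor

def paris_life(delta_s, log_c, m):
    """Fatigue life from integrating Paris' law da/dN = C (dK)^m
    with dK = Y * dS * sqrt(pi * a), assuming m != 2."""
    c = 10.0 ** log_c
    k = 1.0 - m / 2.0
    return (AC**k - A0**k) / (c * (Y * delta_s * np.sqrt(np.pi))**m * k)

def neg_log_likelihood(theta, delta_s, n_obs):
    """Likelihood of observed log-lives given the statistical parameters of
    log C (mean mu, scatter sigma); m is treated as a point value for brevity."""
    mu, log_sigma, m = theta
    sigma = np.exp(log_sigma)
    pred = np.log10(paris_life(delta_s, mu, m))
    return -norm.logpdf(np.log10(n_obs), loc=pred, scale=sigma).sum()

# Synthetic S-N data standing in for the 40H/20G steel curves.
rng = np.random.default_rng(1)
delta_s = np.array([120.0, 150.0, 180.0, 220.0, 260.0])  # stress range [MPa]
n_obs = paris_life(delta_s, -11.5, 3.0) * rng.lognormal(0, 0.15, delta_s.size)

res = minimize(neg_log_likelihood, x0=[-11.0, np.log(0.1), 3.0],
               args=(delta_s, n_obs), method="Nelder-Mead")
print("estimated [mu_logC, log_sigma, m]:", res.x)
```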

Credit Score Modelling via a Two-Phase Mathematical Programming Approach

  • Sung Chang Sup;Lee Sung Wook
    • Proceedings of the Korean Operations and Management Science Society Conference / 2002.05a / pp.1044-1051 / 2002
  • This paper proposes a two-phase mathematical programming approach that accounts for the classification gap in credit scoring, so as to complement known theoretical shortcomings. Specifically, phase 1 uses a linear programming (LP) approach to derive classification scores for the associated decisions, such as granting credit to an applicant, denying credit, or seeking additional information before the final decision. Phase 2 uses a mixed-integer programming (MIP) approach to find a cut-off value that minimizes the misclassification penalty (cost) incurred by granting credit to a 'bad' loan applicant or denying credit to a 'good' loan applicant. The approach is thus expected to yield appropriate classification scores and a cut-off value with respect to deviation and misclassification cost, respectively. Statistical discriminant analysis methods have commonly been used for classification problems in credit scoring, and in recent years much theoretical research has focused on applying mathematical programming techniques to discriminant problems; such techniques have been reported to outperform statistical discriminant techniques in some applications, though they may suffer from theoretical shortcomings of their own. The performance of the proposed two-phase approach is evaluated on loan applicant data against three other approaches used as benchmarks: Fisher's linear discriminant function, logistic regression, and existing mathematical programming approaches. The evaluation results show that the proposed two-phase approach outperforms the statistical approaches and, in some cases, marginally outperforms both the statistical approaches and the other existing mathematical programming approaches.
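
A compact way to see the two phases: phase 1 solves an LP for score weights that push 'good' and 'bad' applicants to opposite sides of a classification gap, and phase 2 picks the cut-off minimizing an asymmetric misclassification cost. The sketch below is a simplified rendition; the gap normalization (scores pushed past plus or minus 1), the cost values, and the direct cut-off search standing in for the paper's MIP are all assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Synthetic applicant data: 2 features, 'good' (1) vs. 'bad' (0) loans.
n, p = 60, 2
X = np.vstack([rng.normal(1.0, 1.0, (n // 2, p)),    # good applicants
               rng.normal(-1.0, 1.0, (n // 2, p))])  # bad applicants
y = np.r_[np.ones(n // 2), np.zeros(n // 2)]

# Phase 1 (LP): choose score weights w minimizing total deviation, with a
# classification gap: good scores pushed above +1, bad scores below -1.
# Variables: [w (p, free), d (n, >= 0)].
sign = np.where(y == 1, -1.0, 1.0)              # flip inequality per class
A_ub = np.hstack([sign[:, None] * X, -np.eye(n)])
b_ub = -np.ones(n)                              # good: x.w >= 1-d; bad: x.w <= -1+d
c = np.r_[np.zeros(p), np.ones(n)]              # minimize sum of deviations
bounds = [(None, None)] * p + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w = res.x[:p]
scores = X @ w

# Phase 2: pick the cut-off minimizing asymmetric misclassification cost
# (a direct search standing in for the paper's MIP formulation).
COST_BAD_ACCEPTED, COST_GOOD_DENIED = 5.0, 1.0  # assumed penalties
cutoffs = np.unique(scores)
cost = [COST_BAD_ACCEPTED * np.sum((scores >= t) & (y == 0)) +
        COST_GOOD_DENIED * np.sum((scores < t) & (y == 1)) for t in cutoffs]
best = cutoffs[int(np.argmin(cost))]
print("weights:", np.round(w, 3), "cut-off:", round(float(best), 3))
```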

Design and Weighting Effects in Small Firm Survey in Korea

  • Lee, Keejae;Lepkowski, James M.
    • Communications for Statistical Applications and Methods / v.9 no.3 / pp.775-786 / 2002
  • In this paper, we conduct an empirical study of the design and weighting effects on descriptive and analytic statistics. The design and weighting effects are calculated for estimates produced from the 1998 small firm survey data. We also consider the design and weighting effects on the coefficient estimates of a regression model using the design-based approach and the GEE approach.
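
A design effect compares the variance of an estimator under the actual design and weighting with its variance under simple random sampling. Below is a minimal sketch of two standard quantities, Kish's approximate weighting effect and the design effect of a weighted mean; the synthetic firm data and lognormal weights are assumptions, and the paper's survey-specific stratification is not reproduced.

```python
import numpy as np

def kish_weighting_effect(weights):
    """Kish's approximate design effect due to unequal weighting:
    deff_w = n * sum(w^2) / (sum w)^2 = 1 + CV(w)^2."""
    w = np.asarray(weights, dtype=float)
    return len(w) * np.sum(w**2) / np.sum(w)**2

def design_effect(y, weights):
    """Design effect for a weighted mean: ratio of its approximate
    variance to the variance of an unweighted SRS mean."""
    w = np.asarray(weights, float)
    y = np.asarray(y, float)
    ybar_w = np.sum(w * y) / np.sum(w)
    var_weighted = np.sum(w**2 * (y - ybar_w)**2) / np.sum(w)**2
    var_srs = np.var(y, ddof=1) / len(y)
    return var_weighted / var_srs

rng = np.random.default_rng(3)
y = rng.normal(50, 10, 500)     # stand-in for a firm-level variable
w = rng.lognormal(0, 0.5, 500)  # stand-in for survey weights
print("weighting effect (Kish):", round(kish_weighting_effect(w), 3))
print("design effect (weighted mean):", round(design_effect(y, w), 3))
```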

On the Logistic Regression Diagnostics

  • Kim, Choong-Rak;Jeong, Kwang-Mo
    • Journal of the Korean Statistical Society / v.22 no.1 / pp.27-37 / 1993
  • Since an analytic expression for a diagnostic in the logistic regression model is not available, one-step estimation is often used from a case-deletion point of view. In this paper, an infinitesimal perturbation approach is used, and it is shown that the scale transformation of the infinitesimal perturbation approach is eventually equal to the weighted perturbation of the local influence approach and to the replacement measure. Multiple-case deletion for the masking effect is also considered.
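
The one-step case-deletion idea mentioned above avoids refitting the model n times: from a single fit, the change in the coefficient vector when case i is dropped is approximated via the leverages of the weighted hat matrix. A minimal sketch with synthetic data follows; it uses statsmodels only for the initial fit, and the data are assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data standing in for a fitted logistic regression.
rng = np.random.default_rng(4)
X = sm.add_constant(rng.normal(size=(200, 2)))
beta_true = np.array([-0.5, 1.0, -1.5])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
p = fit.fittedvalues
W = p * (1 - p)
XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))

# Leverages of the weighted hat matrix H = W^(1/2) X (X'WX)^(-1) X' W^(1/2).
h = W * np.einsum("ij,jk,ik->i", X, XtWX_inv, X)

# One-step approximation to the change in beta when case i is deleted:
# delta_beta_i = (X'WX)^(-1) x_i (y_i - p_i) / (1 - h_ii).
delta_beta = (XtWX_inv @ (X * ((y - p) / (1 - h))[:, None]).T).T

influential = np.argsort(np.abs(delta_beta).max(axis=1))[-3:]
print("most influential cases (one-step):", influential)
```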

A Bayesian Method for Finding Minimum Generalized Variance among K Multivariate Normal Populations

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society / v.32 no.4 / pp.411-423 / 2003
  • In this paper we develop a method for calculating the probability that a particular generalized variance is the smallest of all K multivariate normal generalized variances. The method gives a way of comparing K multivariate populations in terms of their dispersion or spread, because the generalized variance is a scalar measure of overall multivariate scatter. A fully parametric frequentist approach to this probability is intractable, so a Bayesian method is pursued using a variant of a weighted Monte Carlo (WMC) sampling-based approach. The necessary theory involved in the method and its computation is provided.
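
One way to read the Bayesian computation: draw each population's covariance matrix from its posterior, compute the determinants (the generalized variances), and count how often each population's draw is the smallest. The sketch below uses plain, equally weighted Monte Carlo with an assumed inverse-Wishart posterior rather than the paper's weighted Monte Carlo (WMC) variant, and the synthetic data are placeholders.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(5)

# Synthetic data for K = 3 bivariate normal populations.
K, n, p = 3, 40, 2
samples = [rng.normal(size=(n, p)) @ np.diag([1.0, 0.5 + 0.5 * k])
           for k in range(K)]

# Posterior draws of each covariance matrix under an assumed noninformative
# prior: Sigma_k | data ~ Inverse-Wishart(n - 1, S_k), S_k = centered SSCP.
M = 5000
gen_var = np.empty((M, K))
for k, xk in enumerate(samples):
    S = (xk - xk.mean(axis=0)).T @ (xk - xk.mean(axis=0))
    draws = invwishart.rvs(df=n - 1, scale=S, size=M, random_state=rng)
    gen_var[:, k] = np.linalg.det(draws)

# Monte Carlo estimate of P(|Sigma_k| is the smallest of all K).
prob_min = np.bincount(np.argmin(gen_var, axis=1), minlength=K) / M
print("P(population k has minimum generalized variance):", prob_min.round(3))
```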

A Novel Statistical Feature Selection Approach for Text Categorization

  • Fattah, Mohamed Abdel
    • Journal of Information Processing Systems / v.13 no.5 / pp.1397-1409 / 2017
  • For the text categorization task, selecting distinctive text features is important because of the high dimensionality of the feature space; decreasing the feature space dimension reduces processing time and increases accuracy. In the current study, we introduce a novel statistical feature selection approach for text categorization. The approach measures the distribution of a term over all documents in the collection, its distribution within a certain category, and its distribution in a certain class relative to the other classes. The results show the superiority of the proposed method over traditional feature selection methods.
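
The abstract names three term-level distributions: over the whole collection, within a category, and within a class relative to other classes. The sketch below scores terms from a toy corpus using one plausible combination of those quantities; the scoring formula is an assumption, since the abstract does not give the paper's exact expression.

```python
import numpy as np

# Toy corpus: (tokens, class) pairs standing in for a labeled collection.
docs = [("cheap pills buy now".split(), "spam"),
        ("buy cheap watches".split(), "spam"),
        ("meeting agenda for monday".split(), "ham"),
        ("project meeting notes".split(), "ham")]

classes = sorted({c for _, c in docs})
vocab = sorted({t for toks, _ in docs for t in toks})

def term_stats(term):
    """Per-term distributions: across the whole collection, within each
    class, and within a class relative to the collection (a plausible
    reading of the abstract; the paper's exact formula may differ)."""
    in_doc = np.array([term in toks for toks, _ in docs], float)
    p_collection = in_doc.mean()
    p_class = {c: in_doc[[cc == c for _, cc in docs]].mean() for c in classes}
    relative = {c: p_class[c] / (p_collection + 1e-9) for c in classes}
    return p_collection, p_class, relative

# Score each term by how unevenly it is distributed over the classes.
scores = {t: max(term_stats(t)[2].values()) for t in vocab}
for t, s in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{t:10s} score = {s:.2f}")
```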

A Robust Approach of Regression-Based Statistical Matching for Continuous Data

  • Sohn, Soon-Cheol;Jhun, Myoung-Shic
    • The Korean Journal of Applied Statistics / v.25 no.2 / pp.331-339 / 2012
  • Statistical matching is a methodology used to merge microdata from two (or more) files into a single matched file, the variants of which have been extensively studied. Among existing studies, we focused on Moriarity and Scheuren's (2001) method, which is a representative method of statistical matching for continuous data. We examined this method and proposed a revision to it by using a robust approach in the regression step of the procedure. We evaluated the efficiency of our revised method through simulation studies using both simulated and real data, which showed that the proposed method has distinct advantages over existing alternatives.
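
In regression-based statistical matching, a donor file supplies a regression of Y on the common variables X, which is then used to impute Y in the recipient file. The sketch below contrasts the OLS regression step with the robust replacement the abstract proposes, using Huber M-estimation via statsmodels' RLM; the synthetic heavy-tailed data and the residual-draw imputation device are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Donor file A observes (X, Y); recipient file B observes X only.
n = 300
X_a = rng.normal(size=(n, 2))
y_a = X_a @ np.array([2.0, -1.0]) + rng.standard_t(df=3, size=n)  # heavy tails
X_b = rng.normal(size=(150, 2))

Xa, Xb = sm.add_constant(X_a), sm.add_constant(X_b)

# OLS regression step versus the robust replacement: Huber M-estimation.
ols = sm.OLS(y_a, Xa).fit()
rlm = sm.RLM(y_a, Xa, M=sm.robust.norms.HuberT()).fit()

# Regression-based imputations for the recipient file, plus a residual
# draw to preserve variability (a common matching device).
resid_draw = rng.choice(rlm.resid, size=len(X_b))
y_b_imputed = rlm.predict(Xb) + resid_draw
print("OLS coefs:   ", ols.params.round(2))
print("robust coefs:", rlm.params.round(2))
```

With heavy-tailed residuals, the robust fit keeps the regression step, and hence the imputed values, from being pulled around by outlying donor records.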

A sample size calibration approach for the p-value problem in huge samples

  • Park, Yousung;Jeon, Saebom;Kwon, Tae Yeon
    • Communications for Statistical Applications and Methods / v.25 no.5 / pp.545-557 / 2018
  • The inclusion of covariates in a model often affects not only the estimates of meaningful variables of interest but also their statistical significance. Such a gap between statistical and subject-matter significance is a critical issue in huge-sample studies. A popular huge-sample study, the sample cohort data from the Korean National Health Insurance Service, showed such a gap in the inference for the effect of obesity on cause of mortality, requiring careful consideration. In this regard, this paper proposes a sample size calibration method based on a Monte Carlo t (or z)-test approach without Monte Carlo simulation, and also proposes a test procedure for subject-matter significance using this calibration method, in order to complement the deflated p-values of huge sample sizes. Our calibration method shows no subject-matter significance of the obesity paradox regardless of race, sex, and age group, unlike traditional suggestions based on raw p-values.
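
The practical symptom addressed here is that with n in the millions, even negligible effects get tiny p-values. A minimal sketch of one calibration idea follows: rescale the z statistic as if it came from a smaller calibrated sample, using the fact that z grows roughly like sqrt(n) for a fixed effect. The rescaling formula and the calibrated size of 10,000 are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

def calibrated_p_value(z, n, n_cal):
    """Rescale a z statistic from a huge sample of size n to a calibrated
    size n_cal, exploiting that z grows roughly like sqrt(n) for a fixed
    effect (a simplified reading of the calibration idea above)."""
    z_cal = z * np.sqrt(n_cal / n)
    return 2 * norm.sf(abs(z_cal))

# A tiny effect that is 'significant' only because n is enormous.
n, effect, sd = 1_000_000, 0.005, 1.0
z = effect / (sd / np.sqrt(n))
print(f"raw p-value (n={n}):        {2 * norm.sf(abs(z)):.4g}")
print(f"calibrated p-value (n=10k): {calibrated_p_value(z, n, 10_000):.4g}")
```

Here the raw p-value is around 6e-7 while the calibrated one is about 0.62, which is the deflated-p-value phenomenon the paper targets.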

Rough Set-Based Approach for Automatic Emotion Classification of Music

  • Baniya, Babu Kaji;Lee, Joonwhoan
    • Journal of Information Processing Systems / v.13 no.2 / pp.400-416 / 2017
  • Music emotion is an important component in the fields of music information retrieval and computational musicology. This paper proposes an approach for automatic emotion classification based on rough set (RS) theory. In the proposed approach, four different sets of music features are extracted, representing dynamics, rhythm, spectrum, and harmony. From these features, five different statistical parameters are considered as attributes, including the central moments of each feature up to the 4th order and the covariance components of mutual ones. The large number of attributes is controlled by the RS-based approach, in which superfluous features are removed to obtain the indispensable ones. In addition, the RS-based approach makes it possible to visualize which attributes play a significant role in the generated rules and to determine the strength of each rule for classification. Experiments were performed to find out which audio features, and which of the statistical parameters derived from them, are important for emotion classification. The resulting indispensable attributes and the usefulness of the covariance components are also discussed. The overall classification accuracy with all statistical parameters is comparatively better than that of currently existing methods on a pair of datasets.
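
Before any rough-set reduction, each track must be condensed into the statistical attributes the abstract lists: central moments up to 4th order per feature plus the mutual covariance components. A minimal sketch of that attribute-extraction step follows; the four synthetic feature channels stand in for the dynamics/rhythm/spectral/harmony features, and the rough-set reduct itself is not shown.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def statistical_attributes(frames):
    """Condense a (time x feature) matrix of frame-level audio features
    into the statistical attributes named above: central moments up to
    4th order per feature, plus pairwise covariance components."""
    mean = frames.mean(axis=0)
    var = frames.var(axis=0)
    skw = skew(frames, axis=0)
    kur = kurtosis(frames, axis=0)           # 4th-order (excess) moment
    cov = np.cov(frames, rowvar=False)
    iu = np.triu_indices(cov.shape[0], k=1)  # mutual covariance components
    return np.concatenate([mean, var, skw, kur, cov[iu]])

# Stand-in for dynamics/rhythm/spectral/harmony features over 200 frames.
rng = np.random.default_rng(7)
frames = rng.normal(size=(200, 4))
attrs = statistical_attributes(frames)
print("attribute vector length:", attrs.size)  # 4*4 moments + 6 covariances
```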